qgam/
qgam/NAMESPACE:

useDynLib(qgam, .registration = TRUE, .fixes = ".C_")
exportPattern("^[[:alpha:]]+")
importFrom(plyr, "is.formula", "llply", "mlply")
importFrom(doParallel, "registerDoParallel")
importFrom(parallel, "makeCluster", "stopCluster", "detectCores",
           "clusterExport", "clusterEvalQ")
importFrom(stats, "pnorm", "coef", "qnorm", "printCoefmat", "predict",
           "qbinom", "make.link", "dlogis", "plogis", "complete.cases",
           "na.omit", "quantile", "runif", "cov", "optim", "anova",
           "optimize", "residuals", "sd", "gaussian")
importFrom(grDevices, "rgb")
import(mgcv)
import(shiny)
import(graphics)
S3method("check", "qgam", "check.qgam")
S3method("check", "learn", "check.learn")
S3method("check", "learnFast", "check.learnFast")

qgam/data/
qgam/data/AUDem.RData  [xz-compressed binary R data file; contents not reproducible as text]
{HO˃B *{/SFN5$H>*Ӓ1-:??:F񘄇X݆yuVf ^u"Zh71:#.}2lR==M|Mqm]j6A7i?K ={F:i$X@!䆪X}7} D8S(H\8߈v%f_MvZjv-}9!tۈЮU DM!RܐDkFFv#JR@Ql6h@FǹV=-Ot/}8;gSOB S\WQͤ;էﵮ!` j%OۋBo}C|!a?NNƼ:A$2}9'*~ƕMMYZ:t0FX5&jDdc?d96B]B@ mw 6-0eD0r BFIrTZkRI*{wȈPEY:ӵ,OEM~je.ILN37Jf(c@3 z)m'R>hp?s0E8 b约~!}O.HLӫ D֞=< @ &|njLdl%. 1܊\|}9GF97#ZtU#R< Έt5^4OdOLzfE;, 9:~VZvGɔT6BU^fz!Gڱb\y ńS"Jx=qzO21ˑlԫ6{K;!x!g hD$) > J@dʅP$gU@B-:weEO8.63]=V5Mn'TLĖ`qV}{M9}Ǒ R+q)d)}t sg4<Tk3?^6!R}:~džW]A DmLՋ!o^)2|3xj-ވU] m&dm]ݰa:XD[MzF:s@JR]>Uj9}">)3$%Ä'i#aE?m).YCE Rqk!t4ػ s#z-cj-Ч0]B%C HdXyP̰I!FQ b5`P> 5AI Z卟in%˔v@A c+{p9qoRcKjRpewgGi?Śܬ̩` |9z3uT-lҠ6-%$,uGϫ# `Dw+$UufL*% .w+i&D_sctO T$?#x gƗ| eR &T#s۟U*fknRH.#W"Y~xʇg4\IO;MPn cM2;Y v,I+2aB*]MN]1tY$lX!_GSFxX.66:|h1ݦ gbNXcu~da_{ܙ9ԛjBH9-f r`~c?ô#x֡Y M>2m˺<{ Qks]-nv A>YK97mKy@g/Bp8QrSh}? h`8S1$s> XYT2!j959v>Df4A6`Vz4VRHEԴ aoX"! H) z+U#f|j1^lA'ˇ?X^< @ |бQQIa?rîeؕ 17'MX5%?>/Rţ/8wF?,I+y1۾(6bn!p9uD KWrT J mA;dWb 8L"nirdPJ}:c̫X^-l9(7q4\g &6ua)JjHxCk]F fEp/ e _.VCvV{<>5~ &׍fBTcHEӪuE]4r\1xhBO2!#C͂mn!XP RAr 㛕Lsbn#Jƨyt_/N4KZ(Nɍ*1%gg46u;㷊(vS,0k:5$ːt$xG-s2gkcX}3ğ2wnRLh_HBujQk2s(viL9j-Ma6}1І{^LrWՄ#0f bn`K;Y&+5XZv 4_Pm.@1{W\ aL"[nb54jW{( Cdv!JZaCuz;y7A">(!2m 0SMQlaq8/wn7 M=vp{J| &Y醍'L-N}خ(t_߷0k"V>%SۜZet:s#I))Y7aew쬮?ElHOWo! `ʘR)jltF㆟6o5Nz@Nz_\j49Lc3s%N 1qWS^.=aXYQ%r$Yd8(?ӏ46$a۔J4-CpcC͈2OX߸=  Am0K'DAץw?k(i-oaL-ue[&(X]Y̻Ycr|eA*_?0{V~ [e߽U]d)ƣl;18N۪(4+&y3"̾=8Tq̾Gn_vN~>l3C ")CK/`2bjU%V1p!Ɏp߬>g>]<$7T5+aQg5)3 ,.fgiRQ;۳P8I`рy!m!P@ 9+ښ~jLeRll+Dj0xj,!\z.dSNCe6\$ 0Rg֕0@B뷟Masw Z_t8R4Ngk4ѭsg<7'"'G! IjgvN9ScPD,dVTyF\d ؇MkJj<Hnqmg(cxfxn{9@VeϮɍ錏#^ZOdrgixk!,&A19Z`P Vw.eu-`H;g8*>P@-Mߝ?Ł{W$T-+ׂ@0q \%y,y@d)ٻͱH_̯? 
v_>DbŰ/܆V̲PTiF&=hMA@˙@D:شE-\1B/8P6U<_#X%ǜ V 5(4\|1eZyξG/[rXyb)?n:c{&pPF^VϮ^smhnZPTU7ab###zH=D)BՏnmf9߱`aoc9^>d)}Au9``~T3)5`p=UڨQ8Vѵ](Lğ%#@/!Sy 3dEsYR"]Hc=Ay)*f(L}zK?Es1,~^9x1 2^ͫ[)U 2֜ۏw#FJ 6-HeavR_ Zm:nfV]'9Ē%<0ՖL7O%q-CP8r[@ L⩼#qcnKXBU ГpSm7ANR6o 4v2S+-eo[c: '[pC6Jg>d;‚&CTStpB U hVZ]Y!QRJ M3tqg~nN?e>W5 ~R8 u Dݧ*u 0&2 |V3]0 'l,,tϛqP\q}" ƸC'ޅMr1[ "x^H/ʸ_Diu KmW XyۅvHźDS,m,Y} СY O2~׮(6_eG!qur짠ϥP@k<3P2^Fr P0 Un6x:\vw8 8%[Bi.sRָJTdwh ,jfSYW[ @p4ZB />ܥb1\Pdh`}rv ^CU`*-i3qBy)qиn&C9hb,(5DƠ0Fk%3V5(}aHlt'قEe.GZSN|_|R'h$xDнaIⓅK3SjRN*d?tR@*.W`iZeH7fAsRiSJA s5zd1A;w!R)mͼ- B"~ӆꊰ"&tpm6gs4'yP*N]JFWKX|@l }E\vDYW3h [ݝ1@~`(ay,T^UC@न ^|AG\VQ{3HvVQxM01ҨzS;bR HC؆QvE(ۛJ-{ik$E"BŒzg@[W$p7#3=CwPMND +w]]&ORZ յ]zqHՂA2 Q6y0hL8V/(dL4 >ۗ bCxT _* C|J'45vh΁(v"4s@ܣHz|z~bϭlNOqnd C%/&cOepu ^&0UD+T;jt./vɹЧ[9ZSv ?=-H!Dį}C:6BTv&ӽykҾIͯ i</S7K 6oA_!yt66Eܗ@pjҫwlXXGtAoSpz)JvKvz ŠqSp})h]i 5)gmg+|4ÉœW-^O#6'v)9Ժ}[eLL\]oz s_X(i}yB88/X;~L 0r=&0XGkaO%vG/"j>!յENL21ȁC?[A2 zڷ68M?JФϖO|V[Q^ ǍtnDã`JLj\v9#CMRQF\r`\7F*cKfyԸh$㜱jFk^<3Bm%yGC&wCth>ŞqNjݤ=y÷~y+t(yR@!N&ou8"НA@0KǢS-C6 |DpC37Xs`uz\36v k~yRn %)+x:U=y~~gZGŗ9a,px]^l"u'}2.yG~ Rh`-ءz' zĖ y!aclֽWLX'C>K o9d<Ù1m5R?[\0.umDق:G{2e,A>%3ʼn8P%?qI޺5@6*->ɟa%xlI1y8W#+r9=jopϨ(`Ax,V_ ݼ[y*m+h|q FcDϪqwb4X0ԕ8['wɎFci :Rb*MuH1$ޯ$}(̔xjC{V?$-ӋDe`:nE_h"xSlOߔ',"Iy.#dwKn'zia[J+OPUlTrt_b ǿ,LQv0JxFz-vּҊF1JO ) bs9dJ =Prm+2{ XΝ%1o~Z|0PF $FiI:te@Nȑ;)gxKMY.`B:D ~jΊtv֊ZkeD:ed/ q/T쀓=+qg8 5IpPY) 95V8V|&ɎBct-} ˑEcVg8`a7! 
{ Qqm"R?LfC {q;38͋eeO'GRMX‚RA!?NW[Fэt{?A,Xs rCgEcam.IWҴ]NmSE肖r&44/-E|zb|42_UF TZ <]01ؙRK|Hx1&2س\j;S,VInW[v1JE(Ц"-(ag$or,bYyVli?$ĹZ È ۍO0OsFSG(gm"d] $2wcլ57hTï9}bf/$arVNIΔ=HYwBiݦ :f$zѕ,AOy #1@t]G1%ِ8 U| ɩ=WO.#):SfaDl%->>.F5M1wc(;.m5}8W0+Oxzcڑ(g1T˦|xrZ"+i DBͦV% }d3VO2V:s E ,*+"C*þ6WyI0aLqFj}G( AcV<<riߓB3ԣu h_LL*Aٝ:\1)r:x傝T ~g^yd/,YD-6[v=>IlSs[n:jP |5S^sdKbn~HwoGy-6Ɔ] i dJHs2u ll6Ok<$D$9,n\ȓ/^"Gq4 ӅHzy&>a--,dס2/C~wO>:[ ULfyw?R6"#RABœʯTTicF"9S%E`eSJ;qܡTe2Ai4ssFfޡ ,!r_Xrub;󢖕)X~ /O6`̹jRcLˬ,`ʙ˟J򘌲0L `7:Rm_h.~xPU.bClg?H#%jqup1 !Ҙj/I6a7XԇܲvBqSotBp4b2y򶔹 x31b@}],\g8;w_nxwK^ "~ss3r")"Q(HЫrEmM~U(S#_zO9G~K-i/6|/YI|Z'Lg|5̊j}ICl4M5+r훡 ,D cë 413{HY+ybJ<ٝ״KF'arXi% }#9 BR)Pu@3=8pvڦ^|y#v_)aX?@  .D,j:ڊ@ǜ #R7aF w뺊[]lP@BHk(i[(‚oHٝ.*/pi? #!D8$<-Qݚ%tU\BӱWDM_'Qv9Nc5qoEdd]}#F Q`81(>nOpS\KrQQ~ҍܭGF<_Y(W֓밿nj87HDKњ Um1 ")GPYݢ ;`Pf aR,JR*I/?9&J kOC3\v0g͘33HCm"5GJY0xx:\x |^5K*@RbL6hCd3v^ȹ./':0:[«"r܍IʮaׄAu4rt0p*h[4]"W2L8ݼϹ +>a=V^gXtn?>K~sgڷvWhRfWMhv3anu0z&(kZ 0Q8@e/]/Q/f#iy$(9OEr_*}@0l`3랇x`8R˱R5E ~na*+c[īEwQeMY1̾0պ?8̼ά]v;QȤ _wDͤHg3MkyB;tTy3npw˔ Fl4𖗹5"<&m+dLb+i:4K>-)a2U{^ iA{"&DRF3D8JBtJ~-p)EeafZ,C[]'d˟:R/eo!g=b&iDL^6c\צMt3qyv^P!ՙ+W,Y$eo pu)LD&vZK,E;jަP UvU-AQ vFyZ0;q'[TKH&FAy+RI毢-E>aP;޿7A6Lص ~J7kW?nE|e:0UxQ%O,2"p̤s5 ]uF$ź)R߈Q;+fZ fIɖTZވR˶d$AvaPWpzVa%*ୢ[T)''_63"(S;sH}:S7xd*֨# Եqс:G&lk9/%\ch}:>{+ifQ#j e/Hؿ+IǕWA'$+"R S_m`JFe|3&E#?Yĭ`}\L[7+iIR>J05H"\$EVZm\QY9z8ޅ_''Sb\Or1R+,cCKztƳAĴfrnjK$Tw خ_|{5̾IE9 plT?'-Axj`˨W|,(>5)v3>jLz> :"%PwNE>OF.Ɏx!ԝx+ F]:SlauӸ&$s<{ Հ{zߑ"7ŅuO("p᧻^в F-~vWz _Fk覮oNx Mw7cG%L 5EU?lp€_P|'\\fxNRT4g!R[eȭ!L0oI{xd1E3v#6rFa^8Ld7<cţ-'n,(${+FfEMar2ȘYSԅEwb%ԮW%Ӎ U_Y4.QSTK9ו7 kӅYGeH|( N/V=)*s_/=xK~7\,D}^]Ui /@ _#EP-#iq,[:bAA01{VtRVE,dV;w"ZQS2+̈{4ѐ`;8URe#Sz{ G2&Z"cڍ@3J8quhy Ġ*~11 V>. QGE.]'oa_kM{J1bl%\Ʒ,K1ia<~- چS.bl|Pu hBYw>ȡ*+pB-boikY?ʡ;@,rh=b< =07OY="Ƥo򸈰\'Pq ml\;Њk3,%Ѳ8BՐ^,dڛ?UsȾ1,{0-Z^QmsoEIUH?y F KI1ƼP3죪@E$t:Kp[$btָ |=!~RO]"q‡>UeJHg zn.7=a/ͳLL iP Y\n⇶%qZ M8 !z .)EgsJ]w Np}L2 295 s.S=9{O%K!(! 
#њq?T=ec/PIVM/k8(m푻yC, ސwΏ7aށj!=@\H-4 .˩c !rq%Fn W< GaI7B\p@3aFaǬGsFZl*ы3# ZS=L@\R_O|ѫY$Ľ_K`4+ǎR͵fgY`0v(lUsGdxu{vԏߢQ5S8tPZdm 6X`b/%ɂ iRw~ 拃EXst 5ӷc 2~jMRs19v_ 4zUIPf(ìkfB?ߤSe0_7ƢxF~֗ģ ?wu)yf=c$enXP9QX |q1o_A6 J8e"8fMI/9 ̾4dwfYe^y0dLK…LCmðh7QuCǔ.gV3٩'u%!߁R_~;z>( 8:6Νbx5>7qQh+oNaʥ$ayVÏA,uRUh XhZYv߭.3K(&'9y0% Q3iPFqqAOzݸB?JsX1 kwԴv2̌CWƩDޅ S^o֧tҰ<&Mf}]5H8 >[|*QylOD:]$M BSApN^R{ :uI73D=ACvóA. *i%!-:;El~qdskxOo1sKd֠('\/[H"hW-bZ4dS1ߪuN=s7̓OL$m֛;/j.s n|CNXyG_N<{iӦL'cg;75kC)Ri&o сsA2ES ZA2e'7[@xڳOݿKݳ{T}f3(C!Eh-<@_4akig H~H1NajlEw&؀ l@"jriWw(Hr6c2AJ-yn"HE(G;Esnۻ~Y*#>3CсjJ nj6Xmଲ*BΏ쑶ًeY%L")=f߽ъ(Hz ̼<4$j;<$>^8V(Ssi9dtt~ me+9(?:LgV*+ xZYD(4(VY3WDt@p}4o,аii`QuwF)r{խKT{w ]t┦Lr$7r/q.ipƽ,Uqc4W4?;_\|"1,5-Fd>< EPyJԺ*Y]΄ jq87^Uo}}f C4i׷՞;"UMfPZHk 60jR&kQL`% <"80"O: G!ee>SUaҾ!/. om~l@?iW(Jo~UR* ^[+Q7ڟΝ% B;R3ůWeсt2YAfHB./O*$Ъltnx:*ߊ^V\nh)ߑġ6M_|ڮ@Ǘveuf҇z{&ˀu֓#I3lE:UV%_[T(`c8+Ɩ/wn**`k)P>ƄGtf:txSpn$5pzq)]g%|}:mE+9(5aBsffǐx~Oƿ"Լ% ᮘb$wOxÚv3 ^OZYrɘ Q~Z;ٯh6:wno5):YcvU\t;ߍ[0 FoN֛&SFA6%)Cz#q}DֺL"RV"yLjE\=w05\Kznim}stڟ{?w# %/NSS#IPs5v-TGPɃ yG%"53\32%Qő泄{{ȴ r=M[-FU%Q 9{Zb-z+| 2T+oMJ^Ґ!3i2\^a({MQT#SlQ7%Xb^Hܯ9- m؎|.O7y)[ D OZB < /PX|S q]!r$vF| scO{GA,11`)!bX$%NDEa࢝Za* {{$u]:Nu%pDFX88v2'Nɀ [BJ&!=xР0nV(n`qq_Ÿ$؇ 4;ҭK8ZgAf%6LXN=cDm쐌lit Et#7-o?v[U0eIBZ+XzZ/t@rI]$6ܩ=m._)ߊ N3|82A&5)l-TGNO߈зf&ԠQ,VP nc~g !C#:&n4nH/h_28l#z9Wy2V\/^J& t=uŲceor.4 G:&s܂fxeЯQ$]BY$Sg}F&ȌɯozXetZdZ,,W Ol~1!6wST5_?H,@nE(,=q ѓ7OT> 2H]#h,ƙOr!Jg\xa>PDG 7GX~#Wd֡/C/u+EtʂeݍU,Xy\lpm3W-ĭ߭)vt;b6:2Wc]^>k{Wk@is.mŀӡlx|`!-x/4E)93蒝!#7l=Vp',-ƫ={pw{ h{@ZGQK6O;}2& iK,_2ݛ>ZѸ s^{l8/{v^k.#ߪExJ} @.Fu2V*fM)c|tf wP)BJv ;fHK7 Ń!`,Q{w X|ho>Ϗc 7)yyZcjpÝZN K`/j5xC3 sF^lQ>l誇)B88x!7mNⓗUU^^pK 4۾c`U.Yd~Pdѽ?uctYA2 kVHK] F^gd"onzhc1㬢MzQ&Wyux [j=&1{q1ْJnOBe36m[oCTБwMb|,d> WvWvmA~ѮV,T4>z89DŽk >mtW[,I"zdQq LLdՇ^,ɴ#hFaʷ{ߥ/7*v/8?/WS"qP1ӣS8vqAb}:(5tM UpAA+#?OeOLpmޠ”P_k9c2 }ƶx[if8~LH*Z#%:P R1+RI>Or*qmg[Ä,\qކS>9u)@r/b2~!̣san*`Ư73"8bIEke#Q±̻C[0¾69o~7Lj!ҵK֡B$ H zVq'sYw:|%h_dgRHNAt`DpTcN9SEc suDZP"Z(RMQ%y{PV4J[&h61!_8jGҘC%J0T0~w@B9yt3y}֊'z> ı2W>'5HyM_&hu{ :Fq1$mP`w. 
?L7f] HqS #e$WvЉ~Tq˲'(ovSiC}1nޱ6mkrZ093}>5Yl5q)Zobބg`̞Ǟ:`kh0_cŤajm@MϦ]&EԺ:\* O> 7W~쉨HȽ?I8[sky IQK.Rb3KF] nG_T[d2XHgB$.kW^ijG֢]"~$22`g #xUE<%tAFo'˷T 0jhyKC܏S]x"ߛJJ1VP2o.-e[*@M7H+K<R6:ä|%6Ƽ1=Kn)RK~;NUHGe *(% lH\*YQgE'%QR!B'XԂ N4Q$W8dOxf1T}ªʼ+7wuUH賆i e\G6wX!``M:,@ౚzTijFђ4a#Xv"bX3H8N$ 1{hxͦ 3[ N9t^D`@o_Y1-0RFgzJK4Y\ŕ?=s}(w=-emQwG#GZ(f\0r,8W*VQ-xuJ ;Բ=0tr`AEl"VF+:t4 R&_Y+ϴ#߳c?9-)nQdJ~Oxr9m1m%HkV3lNjq~"7X20/h t0&F^B+RշgQR1!Cq+-B*^ eIRyړEDw DO {fDrȸ5Xd\o=m6FWx>߫=N!Т< } U\Ę%P~Y^AM /ͥfgH8)N_RlTMN8*Cix1Ow?~d]s@O.Mzؔp4绋UEV3פ5CX-h5e ;hpmA3DBx/Fjp>AYoܓ !-CO|R4h' ˶H9OG:LF茮;3OLoOSZM*aOs~T #%O4jZOֵI 0S00Vh#=0"&=I+&J뜌 I  FmM"omQV$b4VAڿנp%¸&= $ae^aL_.]y#/9RB+Tfjc{%6L-2+#KbAA xbHq%$U_ҟ1&@?x c q>PU)+NhjtlÁapRアGTs)"YBSw?c/դMGԑ/ eƺ63>ZC  Oo+kW.W>ں[Y5v)'KTLU UFEݲuwNPB9g:q_zl }@țT5'Rz%#¯(=0?ό㎕^Qh=t]T ]v]zƏ?etK#8*`~T2ks_>RB=>B$36' Tݣ˲}o-B",g77?D . N;|.1:,#*5ᅴylK72.)cq9d}68HW˥˓$@?992{nH>n.sgB0V%aZfE.C뒝:+Duޔ{bo]k3ԧW`bDMN|LGLц J:YX4_ hb+g"j<A!"ѵ|PCz4;ؕpG^Vm˛@UŶVa(w 1"oWrӽ ] FQmUMI ^q9ͪ ΖdC%`|у=N!P w>It##/:nlc 3[  A}g-^pRN6(SL?1\0w:?BQ&& [r68ϐG n>||`v.I OW 7"AZܮiޠpx\ʊzHqd8Ôw 4 )IC(FT rAꅚяEϪ8qN~2?8U*y&E>&΂ pY fQCfX25F${?IAO8gK_@m2G*{l$ba+,ooq e~g!W"l$ECż x/5Y~ %ʏ(1!L$B ?]OĀ:A{e6C*YNQoldPv[DɵR++MjWTn6-%ۓwqeZfg^դGtkɽ${aq{&tFZbW}zm=JxԣzC4m%vbx͊WCITN`L׊zhirj.̓EiORi{GƯ?o6Y4("M"yڔtl:eSeFoTtwpA)ӟS5;; "e ,rЀ, ϣ8ӃoFE_IL9[o0)$x{{G#TK e٩&{_UӚG2bi8.Zr.lUOV.Ak Ֆɮ"u)ьI ~uոB0d-TȡQ?l3d^VatJߩ}C&}D7bošJ'`wV-STJj*0 Hp:ϴO}&\=E brOK]\(+AXҍ?x eIqHٲCc}[~$1k)M+\~{ح;]5"xA%-ae/loںʙc<],\t[4k3t+RB9TY%:7֘K%U,KAM]mZgꚵT{{SUYR)u(΁7xa.%"<"NhFTxXÕZ x$SGD ǔDФ#+~NcQ1T"IH.WnDzU/DL>1DZb.j.L66b] x/J;T.1T ^Ukև+tڵ1޴ऑTGEJ4n_1G[.a|]lZy. 
FnCP1z5(xʕΞm${Xx2K1Ե?TNDɄ2r7fr(z}ls= ŋ*gԢWEhs \Kyos^>1 LV4+;w}a$3 :B/;؆GkIDq&K`/$b0G %/ɜ,: ĥVUs֒pSwh?"ʗhE A&.Ͱ"5ɲ{MإmLL*VUAPWVJbvUvn}ڥW34ql f*i5TcUEԊ6J ^HQVhӪ:ݑIο<,ӽ:rKm:叙:ӅD QHR4̎RHA:AAcg eғ K fώ64LjjC7.w&m|dC_<]Ĺhy_[_ZEf)1 'k-rjعw\%қ됏A(1|H7q O 5S4')pXy4陵炽*G}IȨ7glr6daLXix srkjvXs%-)yh+:`=_{.s M kF YǭFZ~Zr9 fΚ]ߐ`vK)#2Ę0-G;ۙ%ZY 35:{aFM ; 0yb[ nFMl%ض)n v<BUZaUL@93TYKհݖq}Ёy-̪^f u5?DN{ΌyyN8{i{° ȃ1Y h-sgX>.AMvxH&m_(i !aBUmQEҮGy;c*٤Ps QRw ׎%\Ϟ'ph F >L,c<S? _΅W;QUg_*.D LjL:Zkq<6.!O-"Tz,2CE.U<&r<mԨJVeD'~/leD|T%+᪷oNfUOҐ1<y;AоGrކwNRىr#@6itNӘm4\s~7[ l'Fdڰxf0Ene*e9eTjKlEFD~~ UEؙ9M;4y5njW6 X^V{@ L|C8p{6 k me V'SoVAMY$'tU~}θGFeuAEorő~}XUh]VjCJ*a^< ǀȳbx8*mr!*ߑ9wq QwKnH5bظi,e[ 2F:NxHG~ȭ5u ݵ3RoFZNȓMe}ꯘwP FG}|`rؼx'і#Șs0$ǴjNYP۾DHQ,DK#DB}ےi:w |H}o0plל,>XClm@ޞfA17'>RzAeIk\:!J =Y^H=wfh#}h PJҟ9-($RzL J*Y)!dt^4Vx1L=CX}z>Py p3>lT6YgD3l!w2|v笇54J=^| ՗r3ٵr''|o5!4њL_~kr~5Ɋ "ޘX֧\-sKCA#e|!?^J Ӿ"~ Z0Hny( e@@E^FܶA>K& l@-{&ע75@ kZ I s)nd<2&OsOȪzQg)jyW73kRՖȈ,p}RWrg١`-X!ʼnb7bxٰ+."EQU?O7xc(<݋ UɣXQt!AAe4:/hO[_[3 Ƈ<ܼN/6¤5Z-AЩɧ:Bc./)Ѝ'XL'Zw MkQ\.zJN(hk7# ە {Y2W0 p#j*BTFc3;oN ޞLh $GbkUm 4aq;z0d@Xm& i=ȭB x^@{21i_,bNGzc>S:jKZ抝b\sBZR*'aMY5??q!SΫ^p2ţH9 V4 1zʸ;z`2t3zq+hl8t;P[yCNثo@j,~K\X>i7: ]:0eeA4I-Դ{ k?rA|=6~40|gQnx})lGAT&[+wRŇXc"vԿ^JǍ=s7uQ>]qd%/JV?x:}hSqE ^\YbLJ ~5A1}1@e'[3tڶ71MP-0K8¦9FK<pO&thKei] (H|1e<*b  s fxD+6vޅpdsIdY>y,S{2v%L]J{Yn̬C075WkbP2aU(lk{2Ŗ? 
!4ȹ X|,ma󺟄z3Q"1z w U-QR4X5[`VeAj UT ~@Y\Zq?*nj?= k,ܓz}Smei7J&nW[Xa*\il^s!y.4ԅ$ι"˺0),DJV[Qa]U ̲QԞKd患UOl-BFs$r/-Г`!;19äN_ Nnwp1؋jXxNz])P;AQn)lԗ;?cs`;~q?਑wLe1̭)ET`L ]#"vRQ,\Yp 7ry%s~s g'2*ģj>s{{pN# 3 rT 3/IU[:3cͧ$$k*B5gHtH<$};+In,P}jj㔺6-/sQA\@zSyPtk@Q?nt%Z5[z~6Δzu.U;1O*q_t(h4)MQDĖ'v9'@6:sD%EZҮd O ڌ<;yx@#8W4'|py.f>V-ŪPSg;t% o'y<`tGz}V%x0JL%0s0{dc3t潼"G>i/[AB̅hf8ˬn;k*% p P]PFm:-1F ,\/gP- *,xt}qLQYUӦ&GLHWσ僽Z% c"CɄU/P*@7#y3_ܸqB5YAh/\I7o[]⃐\iw1yו֍N?,9A]٢:p&@`GV0:w]Ã>D(!J6@mNcmQiRIWa.Us[h('rEFkd&#ob٘PϬ$ L5ՊsU`Zo;]Ab*)QB?Fm/14B/sANe>P}82o:^lr3fmO2;$r-`pھzm# ,9_bZ7)FU.ͪ_@AtzÑͶD<KPqĔKy>Fnx\40`zU:fTȱgpE624~.3Tp0Z̞NX !{f< kf.N'a,~ؠbhvS@ R|G6DScJ/ Gfs+MH6{xfq!c2+>@)Uө0{ xNlocs'ɢXz5Z ϤYcҗ(ryPEۘyZR'm`(K.yu(ur29[d3ES%` LI2E0R rNfw E>_c*/)[FH2̵i4aj4d񴓳}Y"IRR |5Zi;Y+F` IHԥ-{b& SͅcޣuP֦->|Cel`NBS Xe(C*}2}x$,wvg8νzNRs.dif) ;W>t u;θrK$'G : + }g*𠌺Vi-#ۙmQB{VDlI[o{k|'u"l)ȡ&M΁d;s4|5 Kfɍד@^( jUwj=S٨vy1Rih1*ktYݪ |N+~ɤ%k,sd7md ,h ,xos^Κ 톩y-lt'~F qܐ$Á4K4q@?i=\cdYۙA Ł' 9$9}RV?ߥHRMeq(u)Re)az|5.Nf6/>#څJ_w\9i]gluqc 3u_C6?i*cN#d7BLjD"T[p%{b#`7#)`0<{GӮ|Ew:plcCMUYs*+/ "R5.AsÝ1apW8E%"pF^NQ>[r)n^!ph:H>&}Pczh/@pHh|'@bVQm[D9&1AZQ)Xa Q™}Dׇ4=Yg"o^JrNV+fqd ϒg-0E<'OvӄU>O%bSݲ W/p%O,)1Ad.c@S:ǰ\1g;﾿9(S)30!mS$ )d\ZC#?^*9s-7v(|2](c}1Qm41A8UWCor"#~!w:<|+-1?`] Zik ugUi*\|V|L5иh(pcCu !E~/py &piѱLپB*024ۂcGEZXL Pu=2u޳'4XA *C^~nٰ݃t Z@2sޱ^{ƒ0iSl{f-􊏖2efPbzB~ @w#1:~ؙ}b#?`HT9yjM$w4'Q#VWwpI4< R{LB+`ݹ]䑉:tVڔ]~C?߃R0@os>XtS0 r-`L{ Tn7Q~}G7P^)E5b$lݺC̓`"wmkoR`ΈSI̲Jф<7V}IkKpR_:cgc.uynAD7.BCxQ:gLuqeqpzaӮK jo JL ! 9`!F| aؚXʉu3Tqٽ6f yfNz0#E䟂%EZ.5 UZ1 /x ̕Y4<ǧ;lUTY"&=|ҟZ&ug8Eتs3:~Iuj*O$Ղ-en$n_ff*\2OAwB\HY n0x:{1n{#?+b.Trcp{+;zǗg5ϔ:) =?cԍadC/8$~Qco9?o Gta;Oʀt%-QSz!ƚC˅Ptzj;G D  ױP!dVq_N˄ʡHBoQKSxPnp3$~ۅ$1 ")bVqr|KB(ի ݲv"AbWA\_m-] /tPDLtJM#Pf3 V#+(u!Zt͸nzLwCECp[QM}KaŕR_1А tHL QY]? 
?_Ȣ7 :ps  ہĀObvQrĿ@+Nا go=^aA8{&rU Vk/q5^Po1'u}몤2_H NvKR *s ]T)rso3_ϑP.;c;z/-H5ۏ p+~x_MVPIAGxy*4B Mpӳc> gXsD&{y$Ky6 P0%r]9K>rEUtsX?Z_Ijc3<{vD BkoU~8CDA^47r`-Vgq9ЊSE{_e[%z ӠdUH5Ȥd/ȸ>"4GhiQnqő 챗GHy(~Jda0oHRl1cˣxp,qfW-T 6y/MgzUF^bW)?2}X0gȵ O$g>`L>P6Gb\+)&7ђr$||bYb d5߹`^.ԔpqݩIMWQj[\ FFlsl]CH&^T:n:[hƸ?%z k9h77IW5Pyy.B_r"8t\䳌El5ʷFceFv&n#n" kQRD2ᩅ_JD,x,۴褎YUx4kQ^8u[F, >:im鼀<ֽ$3X߼dJ)yR)u<2kȠcS-iKd-X *+ՋhVBţ6![UwYi$R?y5;B+pRb*GFa f`b;M:HⰊ7ei{#qP$Jۏ&DB.}2*%NdEB[ 5lħ >ɢƫ; /-܃d*~. i>zęqINn/ P豃x 6KO\L\ެ #mtRk fhQ,)R,*wz$TМ K!+}t) s5@r$^FRm`tH#jW:$K"˴A/Bd+d\Yd?O?8=0)w@ruOŠv34fw~>PDNWBW:݉xw;}cVn˙g+k`XokX \M!0LjF3j!!lU M<x\zp>Jkg7Jg wX,eaDAlڷ&;{nm1Zc6?:Ef&C`'XUUֈkvߎ_ǁ'cˆU: @vOՊ?~*Glqhb2pYEv*.7,) 7`@}O#-M6T,Q%ԆISԲv\[ Q\^l' ?K(Rc}d0+ΜVȰyc $yfAۚ?_CٱT%IZ6u0 j)8Lv %q-(\*ǚ@+I-[OB@'E^L/U<6eKٕD [Cݑ&00ȯz*L)-L# mMiHܿC*4}3>Ѩߗ28zF/S;F'f8bӐPԲ:x;K=n<;H" &HzjQ}?mݨoMjd&e}떯9TM)O;Bk; On 9HtD.I)8Kgc뷙RQbM!bUޥ F;WI"hcҹ~*P@)2E[ZƹQsiZ)!v"rhвҝdyVB)?w<pAߤbE)- )۔Jd<ܑ¥ܙ3h>x^48.4 w͛/UtMh¤WgE9$(< 3gK]] Vd`aӐq. xjsbbpB::Clu K\'nJ?KcPQ.=5"}C7SEuFtѺ7?gaLWfzw: fgQǎi$T|ڵz6( yZ0d#)˼/(p[A̾>gn.5x#&t'K6[6*_9 4f8e]E3aRfi1LZq+!slLӖKXm.(O%ao*US3EKP^z;y #Ts6%4aLfKuab2a UwH>`rg)_v/DdR[44PIW,O,$l#^VYYMnAC\)Fof&{P=T<#6srSl`a|NyG&0 r>&*~kS =pSMêo˛L#_ˡL d_ Aeҗqa!dzW |yW|jsOVKImNP:â945'ne9Ǔ#_' Ů-lVyr)x2x9|P[QU#$Mp!7RU^6o>/6l<=&mߗLj.RfӵJx/S[oƝ$^*A]iᙋ O$!G"M􅸆eٛ %3*iխ(b.a]q}U N?K!zbu~۶  vy7-? @:.'-T͐afPuӼ,X'&+m`m'ym8qR4fgeZ ]:G6}12;%ۊ ƾy6{;Jt+%߈t uKJ! 3x7=^Gt;O'"DgpR w|,6pB^.-wtgDHyAⷮM4#$!0]!C#}Z>1?4 Yr=`+fP<*$_^Bb- w̄):6R+g.-3$ 4+HeZ.s3G)O~M%;G\t1$<(Ef hu^w9.>eY5+*6`:6o0<<^װKrOzq$تFSKL!Q_a  m*l0f8}!xp0@!`qNg}wQL{X:5qvq \OMax.A#5(Ac:۔~,\ 6srz֕MZ!x#qzVTu1B oM ԽH={AI.2ȭQ&s#SIكE[PU(nXb~rgɥ'2XdJ5F(2*t!cSL5 %X%YC:7urTbMOƙ o`B+f=6ɧB]. 
2Ӱ24%jhjڳVsFM%\u Er#F2 ,衟N<+?Ga/iI[8DeO\N&QsѿV҂!Ar>d\ڰZ`~4$$WJZ~u]7.R=rn'X(B$8U A$> :ux?NdǽTwK]C8WSh{ɟqa@Ov8@Hx45dv.=¤#xju|}eT6Öff 0/4K/˔qj ]F̵<+mfM~jF7sEdt!\P `G` m=PAocZ0X2ʘɌ~)li_G3A gBqއ?żna e`# @tyJ~#Ψ:˭B(9bVWKA^<_ǻBQ#բiLje9r޴CNk./>is#O]\&GQIbǥY <b㽮RIvyc]sHniտ#H3r G+ӿ}B@[-jNw8|Q[-ۛX$3S0af9\nr}iO[FO:{N,=PfjGbA [Q@ḟڸzP퓒obs@Ջ^V2C`Gn=UG?#9fjɖiX@ XׯJ=+W[xSf YAo "0 Dm&zԊ? #\,Dw\PWoGtvI^8U9]hg{8̳c+=~S7o.]j+wr<4vX2s9Mo'Jܤv-rs0u_UNV'(qt\姘Ű2 xօHAj˻uѮI '} JE;"i`༃.;Zܨ`H~Il.7 oΕ9&v./xAG.vަRaĴb6aEoOX~hT]&zŻuEsI{Ϸk΢>8+^h̺45_(odC yu_pJgN1D'W$Z< wї")Sv1da @*6J&ę)ux R^B"SDshQx/㮬,l WG@P1Wˤ̆Lz,﹣Sn4efY bptis"Z}j9)Tܖ~Vx+'ZeriKB>-_\^u'`j^Ȍ4V7RKgv3rʄo h"5x޺{q.5bm\rOOdV Y ^WI$hTkX`?ğLo(v^kƾ_cu=bLmw݁|7uyhRЖ獱aDS56Y$}}Ju}ƕ/xh15$So^Wr!ׯ0t-u;qMRR S'k !ҚB0(?o{&ȕ=Zt͵]}$ߚ>^C"rWmC# JP E@#RxuktIKe!Ak2[g.BqhJNOyzvΣ/)ogt3en}7hWRw_6X1 b$ R ng0p1@S :dwr;grsЇC~_'qGo6H0!dz%{[ 'mEQɋ9Iv"^@;d 0FAo/T DQO. 6=_##:ȾGP:B|V$smp 1D`Cqi6gL|Sޏ]~dL)ݝYTCƞ^ 5Fɓߟ*=FX4w_t,Mq̎ujCyIbiKGWa{;!-_ yr>鋫y"8ՏAԕe9<G9z+Ze u8٬ݔ oO6RbAx3uocZu(U)&}ȗP)_ijfp4۱{g@B_DXi H BgE$'}Sr3+Jنױ#.q7%#A#S7~6wb˭jnXyzzx*Pf0uqKI KJALrywv KRMg~ڃ+H- L7|O<ά})K wQ6SV ~Z\t= 39˷(ܻΔ`J,N5Z/{ 9枿ptylBEbU=M]'3 ]N7݉ /йWCʳ"0cqb6RGX+jou#Y=w5 ,ל+:<Rlܭ*>\m#EȰVWK]n˵fdxB`ncLDl[8BLmS1V_sHuziݠ=eoI++.u 6\g۝0p Uޠ@c˿~otzN}~;쓡mqy!u_+'GIuqluKPLYZnLhg40\otFm56w5iX&Bu^i#m*%P]ܢfH J,LEUNCa8A 8OZ+m_^rQlM*⊰/ P G@"b|Ո+|[X/?xH>XۖbDC\l56䃇‘^‡W1e"\_n}:qpJr|1eF H A*Cզ,@ #óthZ)p Zβ~ick~cI~!ZeRylnV?ZZ?եFSK9`R*1g2.C*ak)݂r.[ 9xl߸&:t$WQ9SV4TzlU2qx$ᄘ/ʦݷE!gF쥓a!DBzMƫl)wMI a=ɳƆ w)4m}UE@ C(QۜH\OϷ:3Q wSjc&H4y F1߄M0C Pqi `6b#}7A]ͼ >+K;O*M`%$-1Yjڡ8 ~&)>WE~ X)\V |LzHBk^jKC %s_Pue؝VUb4TDZ`HVUYH֐G++,M\0X=-c9/P*h="./\fhr}lw oi Yux6^PgF 1nOʟ1GK2W#Ʉt|ƗT^t}+/" Bkz{t>ֶ8As= ֿqr^˾#nș#{`*g9Ω-}&GFM}+؁؛~ տ|?mEZ !\)>O|͑U-:`5+W_IHx"$8H"O}qӿĹ Cpcdʰv+kh`o}; ! 
^V 6+ wU>_t|]L,EAI cl$s$آ3r#vƐN<ұHyv{1uԄGlˢd5I{25~]瘫-=ykK #78]Nt9Nͳ`HZEY}hS=m.5Mh!ύ>do*Ŭ3uͷ;'C1jt94, иU';iXH-|F}'Gڐ:%* 'LqXL]!6 í5\ 5>t ;z2l%=WgH8X2ޠX¯*=8I-`ғ/} @!Z3JEWY jF 0`s5%o_ ^t+- f5RV6PӫnзhT}ATh xԠM s6NL!_KG{ k֧,]U X9aqJ ]d>RPOw0:Q4 L >FrqhoyQZ񁌽 ?I,^WsR ,F?*o99WUd81_Ma@|G$9W.*Ja`ۋ1Hd 8JRZm&g! U\-`m*yM8PYWвV:TcaZfY.OD̗+Y[qLru*Gf^ Wynn+Ȳ#y^m5#LjNܭ͹˳rl o7tBIdm $k9 NQ=il \,;bZA|n@1AP4ᥡn꽔_pU'MR6(NgZ#Bȃ;U&) >D(e. o4U]( d"z(op֏A84ܔ_Pslj͈⚉O>5r>)UTԱ.hsf P$.)Cb$ZRP¼n%ff4RBFtwY]\ɋ;+ H"rLWSX[}n"y? Bl;Q#+6z&M^-c瓘^}0YHH2ʆ)y3>7[ð?INqNb>#1T8/L--z~Q=t-uz.jYh;UutYJ4OІvs@՟r2a LoUq!zr >jjp>0!ZUv5{7>FsKT[%ĵ$ H᝔+L_r"^]ނMJ;k|91h6zeA;?lmlQP8RoD5T_[ ~=JwzY;s[AD9›yzi*gW8-]+&N(VvzWVaq#;3ۺ6I#?qۗN(]ĚڭJ睊E9s6V$bH4ki™L&!4cM“'U1y2*a&e Q_ڊۃdp ) r~~tG,UD4~^XLӠaܑlt+{P7/GB[;KLYlhߖH!뭗cq ГxB-Cj3 ]=[+[-OJW+,A1tk 9@褵`o%LB$6-*xs(})}[fn)h_G)|Y'j5ZW̨S6(u c\SlPhS(u~*B.Hgjp3VJ+<I>95yG,gs9Tjxg$FAGvӮxc uSIΦo&GEQ.Lb%ӹt2`,p#}'D+ ͜xPc"3O Za+[;PNRr/:6EopL Mu(rk0Nl4NM)_ V<)m9}Dnvj.5rзv}Z=x\y Fym񩯃2m_U L'p=YH8X)ݿoM5zCƐ߻Y5Q~SQC+BgD ^ީ@лfpc 公r0X#~Yp7ϖ]@wb@k#F̄E(;eA7IGqQjBC[W)Rδ]M,TRs n,c Shl1܌CGQ[a,sڶ!=l=T}~יǨχis-68ZړN;T-Ǡ2E&G[BW%)}+Iv?-kQy`9W"r3$14C82I ͞eF/Dɝw?R,:@I{ג>7—r49^$8'[/ .T(},` )7ėm $)"5N75pf1l&Ѝl eGPE&a4Tʯr=_mT[ ВT9Dubk<y9+3+0Vu$$⾞PAh$el-ZSX`(LJ`+/JrBqs0uk/duoRj)L,=\ zh+ +IZË\ ҃/JQP:Aϧ'iiRéRcGL,h~RKBF·'Co`0y9U@ ..R ":6\;$ :0@M+k#fÌ ̶X.bNVmĽeQL-Pz(HNw2ߧUf@t:I( 4B譟(A#T @qsHYyzQW+;A|b B=C袒K>Oŀ|6ezxi^Wݕ@q .9ꎳL/KtlOo9 síj <$F8QTeSM1>GpVc؊ E(@*HS։F(!u-3Y㘁h@ynoϔ&|X~RhnM = 4ɖHzCC+D!pl@y5ɸCO)Xބ #a%frUD6s~s@^81Q• د؁P1}QǺ!+teo֙>0r%/u2|G0\,/ZWL b{ FW-'f;L)S@6h(ٗ׶ѺqMƌ eؾ^!<]pOLNĿ֓`[kzE 9M$(A]&ƽ]?, EaQHAV 4UzEh9DzaHM9+Hk 7?%+ayhm>mR\w1B0y)15@|yIl+<6^?Qٖ> nPڜ}jq|/mIy e"^eNzpxڛRpLcsW.P Y@εwnl cy Ο˟27]~_ )VNv ^Lp[w*yZeUUeW$Ko'B?yv];[GMlAq ۙmVXz;6kx$(ky+lq1uA@S!d0S!mÇpRa|(Gj;TT NlC<_yi[) e:vSp`nWab,'c̅BIFl۞i;gT Yx ߂DO΋]ۆ7&ؼ6B䋪4N/9I:^s<@5X"mҎէ7wJ}M ZM ĿuZ˖qfU@׋gO< j.}ø8I1zaiaFc;#@:Nۣ8絞֚|\Gw(,@Egc'y:Hr0/6})Jb]+< /hY%ٰ]~?&ӻٛE-=ՅU2=[Er f8` utR=v '\#bqt܅qʌ vi`q&|wr6`ij3vo]grf[B&7e1 `7؝ŤJwM/JC@Q9WTh:3Mh%|5'G UNLw[6K_'3p'Qa6^ 
n9yܜ{j m 5{q kq׸IA樂D[ͧIKMMK'Ϥ폦2XW8Pچ<ӫ-1CS+vd3;}$?`"Dƒ ks8䠸.MNsϽtJ``>ߕVBq[2,)A[xa?I 1~iubTp( D?,*S\.~ Qx5‘B^te"1?%{/4[X6D&qrm6M'T_ ? n@NQ%vffkrxo>9^X*@TpgoJn$]$hꉄ6om ܂4OVg[p~}IkitO"_Xv}δIRYSnf3܀(y^.GLr !3??,cԠR)bJ &_pZ^`_5Ērtz$Q!ȅC6Qsk]NzM$nF4 %эjM`~O{a0ҩ Me9u{ZFӗ)ت&*2HC_ }r[d4ϡB'Z_tkXaޠEc-]ڟwa@@ U,kDTC* 7puͼgEsC%k*т;+V^J u'Yl% sW*9i?Q6o= \?Ff(vɄtmp_M(2_zZ&[Rgǫ7u1pǻcO07R3զAz/6`|Mjȧĵ*RŶe ;ɹ/ iv ~J'-AϚiQ!sok%IG}f*i&Ii H$]әF{瑱 O@aRJu <QqYƻ|o*TPZI|oٙx޾\BBãțMD*4rb!u?*3s);D؞4OO"T[Fu$ʌ́rc8U8Nos q'4>,qʴb^ro~[\a$ɜ;cf!9;]𒩞x3">egS5p9.F_> (PqU$eH1}z @]fH-1^fӪ@6+2Oљ.RMjON068heb9.mjZ\Qs?+~/蝺DƋ C.ڬpS0#BތWZLƻnd'} ̼+QJkn*p#J\jsS*XPۯ|)~<֩LU>1RÕH FqZ#55uɅOI}R_@n#;-a^#Vlo<]E=QTB] <;D*"!D=IL,N [%=TvZj^nJuawj!M%qbIYsg/9I?6~40ICXnoi~PptW!{V#v 009tBБk$&-H5F7]!JW%:n&4ɗq(z) 5eq"`MVIGc˙N rNƧ]Etk(# a HottWLE<'8);2ngv6U"37cbޗ[mBn3ƍ+IĜz|BZboX-o 8_09~XzNNkEw*U&dd{ 3ruGx:A\JERGnPPf$6OYWcʵ7+uRGw-mP?lU d5 p 7=p_H9(Ƞ?t񾺢hm".h I,pK8M;SVU ︁L: aiWK=?HNTL)֢{!A4͠~yvcaFA[6ޗkW;2 Z!o}:rW@ܮH&# 7;ЯILikVWߜ1XVnj-zrC[rTRLou;21wABB}h0q}9C~7}/73KMB$?*@ڒEt6^k%JhEeVmV'L,%*j"<ʏ ҉ط[kxж{[8#S_!Glm~mt2(6[+WcnltвTk.4zBH X`wBtӠ!Swc-xеuL[*h0KQ_?HڎOR"7)hW Lݖ\A:bOXIivRHY(f C[Y0KǺ\q z*z0s;e唌E?&d1ծMRE;OMFvl:AX[ cP׏ .݄z}SgW)u3lV+Տ;+Áu}ZI؄k.͗ܟzzD<1jWs7v* Q U  =cK4ԟ$݇F6DôӄCp T-+cdRAj\&E6 s AbD1<6m2$X=5?A?eNI5o1E+ue]zVq̸|QRI5UG:<؇ .vhWxrǺA&/Ӎѳx;`$(!xMNԹ9XR'vCbK<^AZ*p}4޸Gߋ?uiub VqY9}V=' ,י`akt=:3nS-T|؇wbIÙk#Zb`^k=I!yZ{r%a;&>xwG@‰DǵXϼ187t7ܽ^!i-*>(OL9)Oyظlp̊6Ҧ8#W0+wա,/fG$KkZ+r߽Ng̖_֨0|I*f/g6t/I6%YDx#}60 uNS*pz' TS+dqtE},])>UpCaPX%mgݕ'/nƋb&@Ti\}3|ʩKI9ҋCGbc z=0Ǡ% 9Ұ( yg;/0crk1QT"Ҿdu\}1v#&~VI {WC ڹ&Z~EU|<)%X` ҝ+d/𷗍WR+ H!lK'ly!tVbQ8xU=@^V ߾p[K >v>6. 
+[4BSB:HI!{U?j+yȃHNl?>gϟ+=ҢgXZ/O4 圏c1zrg.AA@ 9s<{`4 Gim?X ,ޘ_6㪷Ӛ҆^T(Ǧ,|GeB?c-JDlGOtg:{>V+"=7~4UAKo XYb۵VЌU zQ6W%AC`ibֆX?4mշK?>QnTŠz>B'|࠮2V'[ N'<_0nZ=F.MSWig]@n %pHx|LWiOUW2mb9WKPoGRm;k|!1IW }9o3poPvz̬4^U;D!r{dbWzʛcGRo E4?뀣к)3ϛT*?8)^57\L|6etʔ=?Yӽ8젙IZL_4ltpc/FOg#M$j U}_em WԔ`s w~ ]~bc=sJi;Im%!8HRR(%z}#jn'qCl4Gؗ B?G!dPZa8Qi *l$j/i]fIǚfK|?&%v6qpLz+a FM3y2D"x` 1 #Øԣ;gI:zp*%Q=P!'5;rW pB|cW/eƮcy+QᬳTbFQk::"|w%4〷t("ݫ7Nś.7Z"dת/3IχR~zSmF 2@guZS[Bl7]p^%L&NSJׂN Me$Q1:8YP!$XY;ՀR44R,+n$ncz_OKZkb!`c$J|Zj,p!!<5z'<6ƌY`%Ѩ^L+,^\?ǔE~@xY8ѽWk%C\.\GǣPVaf4 }W{$nTb'ad-IKx3'0K~BFGK>n4 |H <Ͳ5Y3&Zz: ݛE~1"sŴ; 0*q(ѱhoF|GuIr)W)XpiH] Mf_LIzBVH0c?Izڀ~U-L?rӞX1NK&,Gᚪ=xAr<@UJ ͨm=txYWg5 ck_Y0;me!˴;vٟWm oAc~od K=(TWKoNng?M[|Y*?dؽr, (5]":t9PmJte 6H(`#J[(P3۝Cj5Mһ!fh N|X(k5}pb,,VED(<"MǴ0Jc7*ιĽ!eCj/xI2.,l;޻Mu;t\pCO5/o,v@ E*4>۩ ?]o@}3д%CCQ]"SN?u!ezH+gR;\1dNAе;%lR Ew^DE|E翿2j:ߌ,"|}mj_[ JDxq Ų1=sߐQvУI)>dO9@uyOh%r˯rF•-1! @F)yQqX׀D-#d f M8Axî>"szSyU%;u{P^uJ6LKvy`&ᷜRLa/^黳$/SFp MjlIÑT<三Ju,#3H<%tiĭ`lŦ^HݏI-mط{]=ȃbƕ#v<,e -?-6袔mee >"HK`>ZCp "w[ rQ#{pť v_Eö)wܥ1L =J^+:,_\ c/7{׏ZP_jj!>6`|,O1qkCGQv<8HY}~qt;,jU$*0Dr/O92jqlk|Cei_M z 0 "7v;|;t!. 9ϕ6q]9̋3J1 8Z.څDE# R1Lbg՞&w0 9 "cI0 %\{v#p2H?(Vج=iKG?st6E~y ϲ;+OBA!B+ 4Xv?3~-ⵤQJ+̛z*FrWMKuXmN-̾3: #0B 짱}igNy[z9}]rNtgYPKd:>Mr|V.``RY\}:w܂(#K;Z uD"b@6i_:jd~0'Xs*R\ k4 [W&|;\H+TH ڎ+:%[s , >9ш5D'"9{dM"9u$>SE᰻yo9K׳%TLcUV،&K7A@}$ j&i| :qBx}I9Aϯ^BK 3dN$Wml)r'I782To^Ya{{d\=5 Wi㳋_c/{A>?LV"ܔjD琎4g!i #NVyc©,e(GBV>L5_a _dDR -F3 {4av0DBϋ,ufuNK*E*W|{Tt0^7IgL=a<<&,X58E?`RR=I;TܦϟD+fEI2JEBOxjԈ9,,rYGV7YC ,?!մIL7B}Q$oh"}"D:".z +gŜf`({.r=/ ZX5,Emi]}|͟t.7 fJ 9) `'%U11Iv{+zݝ/uԧQJQ fX8qA/sI =AggSv &1GlB+V1!F*= y}7Rh}ڽjϥN*^&]M ym3~1?_Fw랰ԄA]]M 7=li;⛭Q/bV| ze !>̬!F-#5k9B^*T˅-SMѲ>*IȔ?B:>*ؙbWemigQp7. DU زke󋉍*WWBlI2^8fL?v6laub8p2@mA~up\M  Z8Vsc͆͡EA/<= zt"\EF@FgRs |/V&gqsx'WM]Z5 ZaG )8 [(eT? Y /Jc}Л/̛Ȃ"?K8Bx$!9z #NЧz&e"KZ+u LҲoIu^23r4z(.) 
Nxeu~- 7ޱu/l/hCք+L#kbrt:q# (?c$ܥO Mt@C)"Iۙb_7ȱ>':d4%Q`eĦ}h:,mJ`yF3/7IpdwGOt Ga7ӺSN:oۉTb:ZiݼB^qNffo>>Ҹ31ܢ_zq gA_[GuP"qiak P5E`q5,ݜlK|4>$N~&̹kl^bI)>BfP*ﺽ=4>Nڊ4[&5q[ToIYRb3 孅G.])|>?÷yC\m_ݓhԩ4^K#+3 ZyNGtƶzls1^=fE3 cek?wP?ae,"U#o\#Y0[ժ=EA#LAZMUP`- j$QX5m, k5,nBx T-~jb-ݒȱ \i_r@\~L _aǼޖgsl1+SݟHh袚7 tgA?RvMiKY+j0Y[oO֕D_fN cppcZ7TJG SQ+JۑT,O&xȠ/R-^\4 o"| z!n `Co=]ď^MtΗObbSzz2{{r:2'Rmsj StOTԢ,ֆ56_nH2-ދ bh65D,Z@ /Hnq`Q@bLKn9"+`p$)碙HQ$ yI(OhX[Z i*r. aΈr)%u$; ۷GkM\E/|PUܵ87eG7}uiLCP8M%/1 l5͜>|V4q? -zgLXeRG2A(جx ?a,C|3& y!הg!*ݿZk_x;'z"=*aN7s~|~9 tHc^\0BWIେF 2%c3l~NteZP-!2ft\_t_Aԏe$+6KɑLvиP i>64qp5Fb<_`7홝8DMAW"(5QBA-(7e="RC$|MV HSDVyԈ ^Oy]ю,\B I/ՌRq.o&B-޵}l*^*7ܭuhg֬.G8G`Sc̎;8b4Wb'I'Tl _`6 C; &hG Zk\XS0`lv 8݃ =vnS5^c>V ^ny[?Mѣٝ.Hs,f&jpzDgosQ--35ϯ^. 4Rd)߼jRC8j]wڍ-s<=}$] сQ~AF=WveXiɩoJ6=${YmI:$2,=4{N["N,ez |R(MSt4x9_^ƟHΎwK`J (%wֺ4_nMUF95aJٖㅓ)` Џ(l<IFKoӕ)4*]$9H 7pHdu_gH\*n`E Njnh`2QM$zb"ux 7&  ;p9Zz=TSbT75u\V%o]bY]1ZA|-wMǮ&",ùu ą3 1U 8]wnFP:] 8=H?2ת5]ǭ ""b oV.7$~Jd 73 7Iγy~$B7!l&P բ1.O9 (i'-q gc+ёW2'T/DFpdOO@ 6ii0X9Cl'v˪0YHYdЇ9g>Vglћ!E/a=\W[p d`tKN5FD|4'I fA䕇Jl N G\9E*N4淼>jjJ4j;ISVC|ho Pe0 ']eU__Q7zހ$JH)Ed]5iCq0TV$?ďD}Tt!z S`S$]p'P  &#wh_K(c?cs3az&5~ m̋v,/Fۿu: v ZC#C`Tݴ+Q'<t9T['̣Bz<Xnnf2z}?e<肢Uh l޴|w4^\3ORv:ZuŅtwPmě>b^X=}^NT'Qh?E*n8"shyqľFh.Qđa{T,5VBfN!>wm^RxDƧe7b` 蒯|x3cϸ)Odž {aq*mR-saRC8{&G}}[߻N$kKf΅ 9oc =e@%r.&ɨ.vJ" [=ad 4% nUJL( 7 o'yWK8E>;'2x,q2m5w3J=pvis7 Q*By3`;AL2\n9rC"ABFC[@p`[Z'[$/_EHyۃ$mn}-i>>gk=2;] eբRuU6rZ X"^%@˴M"([eivQdFfnW%Yޓ~ :TN`D~QE؟P *AإN9 Z뷝] ݾ6D[HLgE#ƨˠ[ȾFryd8meD*Fkk<޽ti?Gtj N?ֳ\DHxmFˆD mmpQ|$/qP [esU(CZ5qOFU8ye M4U'7 I-L%/"^M#: 3cGuĆ;؄Օ,,r}w?8..4)jtYbF%*D{ZRn>ȝq Neha nq2BŝFI쏤5^w6gryЩ5 4X`x|q>+/'+mEGWi->Y+=ЊflST"&ͯ JDDewCWdhH]BB#v!+rFSV;5WI=kpwLMŎcoQ3 .1dEJ;~bQZ|!$ED@@F _0.# ¿Ee\x!\5Un] n}"f5*jKW--ZQ:iWsF~#C5Qpꐉj-rg :g9C? 
7'ӍKrd _4& dRG!p@ ~v sb/Y>i?1GM!`6(O `pLQ|^X,cc;W4 OWP8_"SB;Z.T" }|@>EJ],q^8ڠW^AAPv+J\4-;q4{9owkw.^uޢ q;T Ѳª|3bjA{S5?x|RzZbؓӲxԔwa+mV7xdBXS.7,[<1E@A׭݀2F.:6Mׇ0{ۉ|{)(fßbv3M7-/DP0Яzj$,>᜷< HlQiX:}&_)jƃ]:n&@$6`{I*'cc]}D#zyrRcM`'QulDĢ]f I6I7cH: Uvv4e[oYI[NbͻӞjmO-w5eR紁րhZC)NK sj>U lɇrV /O7 Zwվ6 (b+ 7cBH|R!oUnh͒NZ_M~b#,]vJt{Ifz4,07+bZ6YWIkQٔ.bۆu+b;vzP*>'WSaX{: y%:@DUhGMAL:IҒ[LQlykYCKxt D"1]Q_]3:ų<n)ł/)1jrrT]WOǵiy+ L½x?zrv g[Z9w,\J/PM SF]e&[3%vdEC#cEwƱ\`m̪MBҊد({[_J;|$>CUe(܄ӊ$t5ޭLRٝjp͗U2OJ T\k? %fSWXץ_ ڜm1^˗lVb`3:B#jB Xˏcp*NN[{MVÁ a!A+t/WWei~JE!)ju4O FH!v;x1D4fs͒@oˍ\\!Tfo|1@e :}j7χR-Z4^I}fЌA}KS3؜3_uQEzW$@,ak  4VbK) ކ> OPE1R@%Z#̈@?iۥC`͸@onF4웚!_/;"Uj-9~ Bo(Bv ѽ^L#Eأ6p޳$tOzNyG{jNq;xC6𷏑&zBe%x˵Ř>9.疃3{/-$z)qjOh$Re= ,r -SRґ$Xފ] ېh&&h ]QdsTB =w]ODvdd)@ƮHrXvV:~߱B? "Ǖ%I|p(!GD\7Ds]^3 ^Bڗ@ps$YbMwWo29 ͱeH% T0<% >e+My66{D_8aaYHTgMjV;S~S*Hn-j"z( )^ HL}0Ͱik82b6/ahFZq-&JJ'yƙa\N( }3Ö:f5$%SO59P G=ժ9FsED|;]sItn8Q씶FWMRꮁoB?PMDq.H\]&64mS'X=pGSB " x\v%ˠ] 7Q}wY~W)SQpTi88Φ5-SU %f}lI0N|qYfgs8uhޔw/Ʒ V?"<䨙#A-I$W0Vx9t~:ڢ_;j( b,6rj clQ,##Pc@1)hS+ɎڧpzыpKKhq*[#pA)?L5Z 7)W'!rqö~cŜ/"QRϭJ }z͛XU]$&9q -g<}z_Lޫ?Jz"u{e_w{vLE$'Pq]paxP'=Ł}!R"p-OpnqXB - r YhP5NlS1@a×)`q7`ygFt*Qχ.CZU=Y!%5Xf)M $ma NxWn6AWh Q_ِ:lm)lQJl*9ݼ,H~!7iB~aOP˴/{EAv-?"<j)9@a#MҒX C5:IvO x1$߼YSxÐv:[,ۊ@4Є umsZbtqw rSC &uUQdNSTw *-&0Zg oF}զz+::D,G @B&MMvE”"$Lu%v^L#!:_AXefDiTњF} 'zYG֒ .+=Vv;"Q >)A~mHyֺtRW89lGguJWK6jS΂P(|T%j~K"IhǓ- ac2n{Ln2#ǼdÞm5$gS{}|'z,A9f;r] IOj|6BΙYK&S؍ 0aim/B!;+sxrľ ]( ^ MNj5ʬpe^47.= 'tO1kvuG8FBfE+vl<K\{'@F.h۴uO):]ui5 RnNUS#C0zPR h:IHA$fYۧ^"NMɛ~ƇJ3 7a<+]M\/k aQCp!X>FSSpDiSnؒMlxOKPG@-A! ٸ'74&݌{T&ۜ%- 1|:!!{A"f{4C>% +>M߀d7 u_$L0C &I>IEa?bHB#+c,=а)/مϮKY. ^7R˒JQ\pM&C Nw:22dH΄1cKC^/tG`ė. @T|]SьTJWWPupOX@2Y,\=D#NUDC-1R ɮ3c-җdUn/~j 38椤O}T8:ѓ:=ΦfM"$.Acq+#'zIIyU>ťĻ6 Hv" jr<#cedԯK[`uO v]׾j Fgx%WJ `伳71VJ^dvόO>+H"Nc4:h'Y,x;`4̃AËIi18hg Z%<"&x[b$Ҝl97]/.V.Re\9{{&^`H~'fE+ĸ2]G2wqM y/:2W:xﳟhg֜0:oCV„+2#t+,-~\iIJ>59Eڦ*ޱU!ѹ>Xrrs6Y%2,2&9'!^#f,f2۫պox[ M'뼰IHHr:пAjs#";w@jWIl9XH2~zַn>}@#2ȝDE\Jv1#?y6LVliG{ݟ؝ͽPFZ\G] 3!7!M%t_ ǵJXC~ >ka2L."H\Fd Η! 
%'ͪ=U5';^%Hb@J^^x^?OgM u{쪵'їLd̷wcZ%C20у0a֍Cn}::,4+A) dwTǩIW~ cHK1`4 2׮[[Ik')ՍjYyMrXH@a<]Mgk/|ei98^̃Ò:Z[t=fCa%beQbxC4* '(m!EoFbyr[?ddc(?SȵU=C%(Ξ0+*PId<#mu Y큪'ka!//7<uW<'",,jRFp"a黴˦Pio|xSg)~h:h*Q=V'P/kE&gA]Nh L6 ClS+!TǷsp~2I\4H6@zq̷m#yC44*"L({V7yNFf8[+5 ֽ۷UW+\8dFQz΢ᡂ܍z@]ʢchy/Ώi86ADReQfSTr 3U78C3~[{jDCe |MXsD.FUEwP G,8|MFyr +R6Bj]2!z[S!<NnF* 'b׹̎r:ЮdbjV>>8D[YEA0$EJ"(ȩjT T :h'"%fgr~3&:Dsa WEgr3b9&'8|V[]}1a :o_ؕL)&j'm2ޝ537lAN=5մ}~Cv:<fVo,jq~2,Ƅmaõu]a̝?a Xjߔ̜0+yl A/uSuL6@|~0oqIT% թE>(1|hn "9Gn8eb#n~ e|ed0bXx>xX.K|BZ gE(+- 7, ^l#\+¢ns8SճH4 "d [2J AykG_}فq`h=8`Or(#Am?,@v@L&' F rh4Y}Kpi^]m4hΖǑA-3#_E@ja9g?hG!bȅ8n0EIꤦNS@/q7Q7'(9uf匄ܙfE`̔[GL::nQ&Pr|0O&Chjq}RKg? ]%~}e4 5X%tFfD|)#{d 5ۦwgQ)Yo uJt;ASň&ºA)xٌ|4ۆ/~o>G5uάY{sTJSHs/O=5R-Fsal8J(ޜbƨ'1w;^"iWøP[v +SH|+r\H Ǭ}^ wgu>KF309t^vz^.nsN4j۷R`-gm_o3RN rENZj{f1XvPy:Qq;@eY2,_@'Iinxϸ|\myj-~7tiqM>[X/d$pXd&W=֑;qY^6Ho$Nn\ I|WjןZU*{wO'<3K{k=R.W"DwU#{qY a_p;B'm*4]#ڶthqeޜOȄmi=jc9c-E{!`fH8?IB^}lN0R\4x@&Ltj{VʎKgPxȬ* '5hAnw>'UvKpܔ%MvN#Ny6 |Mi~Ml 4ŵ!NprK:ڔjN6$]1uP0- ,_mh/K}} E #**Ǯ1aSz IPH)rKBv_@7e4z\bP /\n]7qhl8ċƄi_ƓeϷDλ)U0XTf8`]XF\[`n31Cgb#l@ۡ' ~,ō}9!𲗭2mcI$ّ b9 ͂!- 0n;d[ Q/>ۇNL N=Sx &kR䉋ٿ*Vw 9ܘ]ѡʬ{UMhrRU? Fx&&>x7փa!B0y*uz:ʩJrG>:aYRpGǔ:ۘ^/p*~UQ2021'b`;,d1-ykORQ}v3]vu|N|aVbaLilq[*,5B1d]1,bC 1#|3$ 86tjګBgwxݨ$@Eqywq`=ni )ytLѓ!Uo,;&ۖF]^𮉌0A7[ A̶uh%Z|#p.wެɝf@j1zΦ%si͈TxD&QO|g3aRcaRXҾf?+KZ8a]Iʝyj}R̾:{B"芈TuD\0褘柔r4 $jpj#2hiBFo8;+ DAEU<܉k+Imq=RK$ClB,}H]P>?^'Dd4Y)Up;CF{#&r!;&8]U?[-nwI֛g<%܍L *u[*Z7>ghf̒ضErO ħt q7t Qpc H q-Ϥ!RM9S5RfbK]f(nᣐyশcKjRV&;gGAsBgaPH? 
RCL`/_1p%`c7QBL<^dK)q5U [o2րŧE(c1y oɯ٘ETfbn$s[J0H߱Z.#f.5CV*m715BS]`#E/}P1 :_"xzAݩٻVFI%7үR9Ӷ $=r\QYe3;5BOEgSA,$oTZTpQԴù;{oCoL 9Bh% !AU^>U5xZҐ&WO{쁣äیXЇ'%i'<8S*ASD.[%I<nXቚ1Qzt;ENZBN1=x+w٧C 5`IڐXRnjQTd T8DJW=U3ֆ JE;{>%FrOwrUiu$JWzf ~fHPk Yw,c#B U4D9l]EK[s%z[ᶈi-S`]P*GqqeT~"#5?Ŕ؅w4_5f// Qσw; Rs*ߧu Z}l-:GU*L.6VɩuQG4Р[t=b:g)D*c7Կj#0hʗwL]3}X~驆3nI무d6#^TkP^Ml$cUF璀"2#^IdsPLk{ܑq;WޟhA+ bufC@+ 0c/"m|2"{[ 8QUi.DLp o`DtbxYo-0E2H#WZ5'- .׼T$΂͗j\).L {ݰ pIW!F9|JkZ0 o+9.he_Nqĭ;sD*R̷Y/̏Wbچ7 m?H$6ذCMYF}.LJ%$~ *Lk wf !<oE1CiN`/rUT{}f'Le|[UŘ*l7 Qg-X1MŜ&}|ҝ 4o6)?Ҕ%,oI&Kzh4QIugЯ8Gk.S*b*26_8GJ07r-{Fȏ1:,zq4}:\Z@( -Tusʜfi|Sװ%B$*` -?)Q|?@cZm,y;;srFVQZ}KuE/-ƕ[Iu1YA3isD)hӝ`BQs>}L1ASnˇk ȠiR{CeLVE=ݛWP#KEneDyWOhwH ȔtDAc\WSb%2ؓ3DӲBoL;u )CVpFIԿ&n 8ޫ j3>3װwNX$Swd& s~_j W?x}(O|Rh&}q緱h,G9p`}P2dNJq:x]i:F4tѺ$Q'۫M˼EBhB~qMmф @pkJI/M{lO:ILllDsuChctޅfP3}@*cK73oUDA`@DR  ĥ)Ąx[ΣyR~ϼIrlP"H _y,N1:AfHjYPfBKWhR {ZN ~Qaǫ.oHJ/=p)1Se\! v \ʑr%[^gH 3 B^և4>7L*\qZ3gBXQK/}=:}^yGŶP7 >v1fjQRwIcrT#h>B#:XLHBxX3_m%@],%[ipؾL ?B;Chty_~ONWiP$K[MPeY'ö 'CS@rU\VozXXßWl{{_t;Sr9n# oڮblmK΋pPcmޓpQ@J}^$W ?6 #9.:#l(`@;P^R|jh׳Mϼ0NaF?xt쁸'cםAF;MPŀk4U]*m37J?R|x/tLR;6G@/z7=iiPP> U'[/ P怎6F8/8({w tK9%ŁOQ1y!ۭWpi'd U`hN'hQ-K/rՏE3;hfDe-;C`S03 0/RX7^~'H3]ҔoVay;)8|XL2 e_s9ggk4imGhuD N^;>-JVq3S,-LjeF2d{ E=WoHb,>""uѷ xZc/Bf"[ F%a%Sv}*ea*q$Ux91^ZBK`1ST8(gf[ɏ*bdKCDk˒\13.8U[`矔tQ,u*$V56 ,<[Oo;:PAj6Q )1\zM #dru8Qn~^v`L`r(٣b[Pg~CKE!se *ƚBdOk BKʆ:퇴!72FLkwhFW\zLO!v`Oڑ?gjIqi֜a9jwZP˳V%)5g7g!Ӝ{bЁ4Ffn")>A,{BB7fϳ&2|D!!-DB Ht:=!2rzz5; !ϓJᅋӃBm*F]6a 8(,'K/+Xt|IYy4F!: /=@*1RpBv]SAe+fˤ*5>}O2/ck}B+[ruŹ&oGhԉ]NAy[7U>+q88BqO}&eG&',d.-Ok_x9!nW0*3u$gu:i戴SNج^ @U:K@xS Y39ūW.Nͫi&o\>TM 9 _DXX}Dn,.Me|gΫ(Gv)z]im7MRYvШVS,cQ W۔Y,dwEvGVr%W~ec4IYk_㧷j΁P89Qt'Vܕgux s=MB?P,^ #F<ѼG;6O%3~}xNbwE:ʓf[ݐxj'6#" "O$v5? 
1&a1BʎqZuw{8\ '2S@7nɈ.r*bpK8QD k#08BF 9-0Ϩ*K̞PE晾!N0t |8DG¶ksnX%|)_ 'Y8& )5*<;qu Zݿv0'f{H1<V՘.Bf=Q:ް_;_Wv['(oT8J&%("( xvv*T5sY_k[qǝ`H{,n'p Ξ{8 -l&ի^L;s?OHқSwỲtd 3L`4yeY|HnZ֠|HF6z tS3['璘rC ,ط b:\Mԁ6zuƔMDS˗ 8 sYI ԅ6XLv2V K; Z][?X(}ۦn3㦼( q!/ fڨh&M{)@P+:C.o&1(:j8 ",{&(h=:ռSZHtZMǛ,k)a5~hO~lũrA(e! .@8\:w(h`fiK'7;hݮ 4:hJd䋭o1($xMVBvδ.:cpWɑhc&ZI';elP&{yo[aAo3q?MU۫ۘ@POx堼pAO~lҬ 0/?2%G3ȧ3a+Q)9.}7Dt7ΐ!3 @Ip3P 5Y1EZ3{ շ=qA=fccZ+b\36hI+Ɓ~2TjM }z,h(gA@~9I*ݢ܇Twyܩ@aYmf'i4n(Vo:X^PqJPy7/Iٲ\Z8j[+<{̥{Ǧ¤HxIslɼ,gF$ώ(XAG1O<ш#h{3Ko!vB2~c,1 J/zG#!OxS(1q_" }[wHQ%@̏1_;4/-۽CxKҫ馭XәlHUt ':3[ Bp7EgTEdQ :WdJU:v9؟g%=hRjȃ2/A{a|0Rs7/y>$:#\J%]q`1*}З/YBY#/$TnGAeϞ&kK_2_ۆ~m:'slԗ -dl8oq2G#o˕Vs %.9] 0+SQ&7ryC X}O5\!6Z%O(q0 .f-K3tn̸V pcZ2粖=H[-k3vò Bjzb6Ƃko:RA >@jRe`BR$md{1gSl#ca 5]W)*^ܘ{ 7V8̾hW૯V5cI+00^,N.3GPn=Mgqr L6+Yǜd2 $}UD8hʾF04j_k>@hǍ~sG]k׹ʋCMR5-`(iY? 89 W34@1x?a@"*B)gӇ`ֲ3] җ"=N K]rg-aq$9E- .}iP$-VVp<~-6N%vc[i\z}=kI0Pt`6G9*iqc3Jvi{N/_XSTP9=zd5`D *k7&=!ˈT0{aGaqb u'aͥbT-nlh w]{xlW5?o ,+{4ǒa߼(t> )v{9A l(inLRZ><_s߷l=m/sJ!g i`~)2bK@blYZe`TjsIlAEE匧,\CVAnЖÎh6E3L\jo^mnmGD٧b<m+%VS~_#ȫK9XHi*q qL95cV&C"^re4SyLb_$۱w&W Ws++c1S{}ʇ T= ?zq$ZM?K?. 
[ &+|1xKEyjWDt^97zv`3忁BQ*+huu2:;I ++tIɺ:\!9ʠ pMJ*vWZ w$!/AnqF>evl8 m{VQf_*Y7 ʭ+8饏Cu *-g.Z B4K̎:J`ȢZ|^{|Mb g8߉*vY\Ef}D@,E^ßjp#`D XOF;>,;'/UJ`D ;%+̸EE8{j/ - 3T*T;7QPՠ&A_0)2أow bozyr˚cO"H{!o|`$5f5p4wx wSiH7F;;o' pͶX+K#xFl@}H sѷNJ BnH{u 3|]"}T٦Hn ^:"-Ç\Ε;LSU}Ȗd"{79:܋5z;cZ6 #'ciMG1|NK]픿0}NX{3h1iz$XQq/7uDA5nlel1JlCa170-Pgp% 9x9v*too^>ṚkTA7IO̮{%?&`úKOG0Y[J i^:[852}N[)E6,_F;նYԚ0YҳnkĿJc=hbחZ(Dx p૚Fj9`7\9oK.KQ#ΣSGoF-,#C7/W:6Z1 }RfezXU~px 4Et=10'=-ςdVᯑP8q}~+0:r$Z(VU~N7σ.ShK0YUzQj<;vxRqS]NuRS?Z{J?oU Sm{apac8E[bf\S:1e>N^G a-gnB(wVvg_b|Kj gʜ?gҬ*oͺd>kh V6B C Ng[=WJߚ0i-u_*۲l _V31.Ȩ=b~f/_{kb grYYcom8RkݽmC;`fLP 1Vd0L @8}ØLm?Đ\8ZP2iy%jk$\S>v"X-h BWՆC.N?yif 7E_&y`{jej gÖJo"j&g Ȓ4;އ3NJK*V,<<7C 0Ъ(g$L[\<@?_XPмLǞ7&a!ztpA0Gm7" ef io1wz _xCJ7M?T_~pLKƥU&jF̔ebZ6/m,r.X63.[XCTAWg5՘P*lnFӅڼn/;⹲bVfu:1 VQQuCxú< 8s?ƥpsS1/ D +{ɤXؼ0H~SLk lh*!읭8ZBb,z8ީ0.o&'ıh+R&j' %Ӵ'TG':%IFbNցb=ZJ\%!Nu Fl28yP˗LX[XSQ}\@ -VSZ9)-AZ8ا \fmT^DյH2tNPBv?q`9$Z)p7=Ԃ&ju[l/XCnY(d ,Z<7 ٟ 5fBJF-K*}NH9 \f5 `'b^5_rDL ?=⍩]|*&zWeGkaKfh*َ?ctSH[)%EmCp,xU lBoK󏿳\s^q |W M.7p>dt<5M$p;֓83–mdVkTNxN. HVǼv;?GIjp G  pNWl !uփX~ @Y^@Gѡhr 2nd0ef(t5A{i&q̒ 'Q R$ȥ^ YeZ%*@[N p_UN\K^ZwTJ桡I9SGK41Mx&86$:E:=0Rm&$J6T}vf&OgJqP/OlXJ9E5~y{lN+Nچg;12[ }l4eЛ>G?=o1EW!~}sVuu>Z^-AEFv;:!Im.̢+t | 7#Ik`\;c/Qi#&W,[IBUnLjE;h6If ݒryY˴t[HBGOx_%lBI\•mIcBc($ֲۙmڦXAX~߲2@q bz|Tu%Œq܈^AGc5],X 2Oztס2 ~? 
Cm(H.;Ye=2WF;Vt[Pn_1&nF5fV/+Bsߴ}3,\Gߌ^7)v.jV4|=0h$ ]8f}Ta`O ;wFѹO`#Իixת6E!s,׼{;, ^JZHc\.vێqy3  l ,-g}_Z"MPY3\6Yћc~P=}WTj8݉ m/+Ul$U}Rdz)Th܈E k@T->snۭ} ;P ,mދ``t]홌g=BLAs>NjævkgcG^])q~=u|rx*&w+'~C?-<ܠ43{L'j7-ڂfu-~W0rC A"GD>i5t2 ogfQ3ِmO &ޅ'0,k]p}jUG73Bf9w&{  yʘ?e)$k)~JRx0RRgaݽ)g(I1(>-6ZbGQ'iK *ht`ƑaoVnufveW ;:tB%N5iB_HvP #cnbg$+7]"z`Q53<=k*9)]=;$3¨,VD<]J*$xy㌠Uo+Xoi{XVf]P|D֧C<!w$YoH uaS/*b.%RHӈV O|dᰃYAT]?:*l+ lpsO>^.*:g=-kR|\Al\<2 ~ge+&MlO qH'm&>a Hdwjb!WY%]˲"Ekg $JI\ۥSv*R9YKq 5~4ƿjļ'a} -w1m+/u/օT I8!5IhΩwq{DH{ )udv1ռ@STzTs'^TkTN~) )@EeV L<)PjU֨a> 4f B(`(zжRCr$L פvV lXJ6V'Ȯ8yOvx=ʬypդꛬ}Ň3𯹜\ 17rFԁׯ& 4ˆߤRoΪpyx1dB1絒垩`}g("+7SZARyZ&Un=]wS-FUJRHcn37]RΟ&πaԝ f1/:taIanEkI*xvA|!7{5NJ9nņwxsl@|D!ؿ^קֿQ];y֓mx 5t Y[ycqs}#;WxI]2VTs֯;"b$6$K#$].]i8{Zb̭tTvinKܴn[4'(t4њvS E+KOxwjrk<}9  ۰|X9dmsl+;l=%#7/c^WIsm`SK3p;f)w7e¹׉*a@DUI3?ԋ7|Fh #nzTBK0"b _Z%A&UA/Cٴ5{Rd‰ nR&FEDx$@V;Ԭ!hy/"|{cTLNs 0:ɕUgZ "/I5{/87ъ*V{2d2}6⟡$=nBz"jy ^^A!G )X#$at^ /cp 8R R3ZѸ^azH?A"9GFy>z;g4.:b4 1r[&=mi >kŨP8aOE:PԹ1vM6}$QW#8x;(m+KfW?QS-QV \`zT72c yXNqhapo燇͸1kTqpPpf(qXL˶l\Zy a~8F]0ȇ(ƤZ{SibUD {r'98R DH4epO;yXH#K=T棘 ' cn M! 6 Ɯ[5H;r$Uu3('-Z>[yT[d 5qq*uW BGou6T1ɗWtqFwpG{TIZ1{-,xXz$ԣ[O6m'$+.R-b4D|5 !r T PȞ,ư <;a![A q ƻkZDEPFmA8(UڭCC,^"LμINeگF!>h=U%o<`{̴c>@0{un2Cд mΞ/s ŮCYӹC잌Rouу>"E껈*Sxb@/~Tԗw)8=_}%(YIk#Uj>p[q%tzU<|VLV~{mWdNwƇukDݡBEc)#p=_Z #Жq@;EzeՆ$ B9Ơ}7YQ?aY޻@bʎ67J+&f5]~CHl:OSePDV>B%JդqfwlTus(ݷU 3p&Kw RvȌvǥ3Ekzԫy~yvL,㫌LH# !3ݧ-()v)ӊ Vxv󖖤I—FSU֡g_`=/ڎlY_O!p یn13Hw=R /aȼs^1Ź qc "T_l+X 4IWla?Dڿ}ZؼԹͳy8/ieL3ligk)1l ]@UڜLoJy 5n0r#Ls'åYOO(dzCqm0m(tSEı9@ c0V^ UqW\7+ρ˜|UFɎaO1T{ݷХ !^}h |{ѿd V{b/mT|%|Rl|Tf~E <>KT1|:^o4#-eC`Wr-9(P !]luJjګg>alM$vؙ1;-c-klm:RGuLd,r>9fq9U? 6ʶyڜcf rj+5.: 5sKsg1ݱ]a7՟L)de/z su+ryO$arW*2ΔD\λ\}=&Y' (tz_Ll76AgP+B -0yR#lhSŇe[Jr'si+T, hyi#OXWEH%*u߄A /)3Wu%fzSkb*>{ . 
i#V^^rQj̍-Oq0<0APr[$[*i]{fUCVWB'=]ols(ɨa5@/QXe_ ]z3s26̓%\)Yv8|S&yݳw2ZՑ^:~Qe6:'LHٱ 懲G2?[3b{KIX&x7:09^줷tnW_lm'= so lhX, k}FȜF>F:[ ܖ?[b1{zyh ?W˥|%%!Mg2_7['7[J~V9MeoAcLKՌ 褘3c3[W JdAH5pؑ!cŷM5$@'e@e,Th]xVrܕh5Ptc[H' Lt]wۂ :<=оj 'ck?M, &M ;8fd<ـ`=U[Vˆ82!Y_@:D/wv7IyzȽC5w Lt;th2wCηcF:kuw65뇵EJCNA -5qP,9, 64+ D^|%þlEcU\rЛB9t ѦqIg[h% zOa;W}++Zvqnn u^ Ya;+f!S+F_Af5,We,jݬfyDŽbr+fjPl2 _*A0hHo}c02-؆}ƠF8 Ơ5r)T^$ihr#Pt ?>*yy¡7\-y-ЯKN/:E\`W,pw)B䠵^ ~y"`sNK/q?l(j!*I픊5h\7Ryޔ,m~uZ06/+FK^H I~T0*TwUw:ڷ?нtfݬ $ =84-ūr{n}SO\߰@9" YkzE/tˀVt݉ȖA;,\]dXbH+vkq9 Ai gR黁^|->,Xu!#S8y<@OiY#ȶ=%>9SQ'R{\,"!ŴyPD r:~T8ÿp8y6Y|uҕidh+DwlZ.kpp~[lʞzqkm$f?ZmpKIϮek\F{h)Uy.8$kԍ]@aD8 "#)L_ L ·᧥y4#p3N)q7s-;7zVCݓ ڤƱd=C,/w@cBfE^DR\:4"/iÞ0 .5:>m8]PVsÜ= Kh–!vKsSBpv/"l,-Z}{XFF[[gcZ9ݻ.`Mg|RGtï48"+=myvLJ#cJ@`90j(S{:0fr|' ve~|8\d#} a*Qq!v>D礋u![xw=->#B6Ĭ/8ry}",=[ ݩìI9n5yqE}-˽X܎..q1erocц]1S/ɺ fХQ-4>Q OtXKr˻Ym5RF?l}E]w6˭^HQQ$#eN)$/QR{Fk ǑT: c>HYUs&\F ҺP4k$聯*u`RMvƆ $nx|Z5 {01 ߻w[}xEn)abl/O9|^"㏪#ɬ)e6"(,}pz1r' fP_> sN3d4Rl=R9Q)cPQ)?}GSfdC`-v:pL@ hKӻOajɛuZ@OdU mUU#Dtu3QEIL#C nLg|ZXS&ǚDxHb൧knI } éVѣW7&h1] YlP3r^!%8c,U#7}6 RL`D)h DE?͈y2ek ,F}g2`.ҵ9{=w8$01SH<2Ԧ+?XŦR] EI۲$I>m.ƍ}552C]dc!Q1[j2;z|]S]F9]65c ί?N1Y"1Xkن(KWxstaI(XNz`jv, Up_='o㈜ Qa{u@OdaXI1]]7[#qwqڄŰ"/ #b#R$_-ZE:S÷jc vMBܼ,Ϝ= mx=SEf.b3\¸$&0&fq'm7bX^& )[Vo79`Z Z_gQMT1VT ۳l4l\pݗ@s䥸<3%b9(K@8d#qF-7||1\ڒS$JmT*R!7Z+x(5Ie~2h_392褔>1WC3Y^V 6I/=Mm%9igy~YJg%>s3)|̨(+{yi$T9rm_( `m / ׹o㗆f 8ZCjUZ _l18s+΅^ˤ|JzQe"9^5Md_4ڑSh7h fK/SPcXM_mt.($ky1@X|:3h{R'i?>!42]{h";t@c2.ӕB $wv. H[FE@< I9(-ކU;]s:>\ŤiaѮs:5%M }7i 瀰4]}`1K/XbwFj1k?%(A/y)Q&>R( N=pXXs3@.؆+ju/)LQjV:hR& J)!;5h7@le1ȋ$&3~ Qp_Wd]]k5|ź0ެsY $V͌C < y7$(I?G+ڠr/fV)S}JP·A>|鉾0a?F6<%+ɪv 1'{C!KDx{w0F%|5 o Te7j5'6P0(>،IBV. 
P%9Kc$Mٛ0_~maSc(`'juu R#$5gl,!Fw]StoU%[ <[h&q6e?@u %ӫhnljax5Ii?E w4åA4mSdfDJQ"pm\uA%l.]-LF̹ zF@zYSAFDՎ]Ma pu?ǂxXԷM2G|wcDl81wvׄ\sjTDۖ.I.+v3( *37"7ycˏUV7vD3/7sA(70Y,oIW5w6`gC0j7:\Ya f"E d6E:ŽbΓmLt^ՔN;fYcO>m-Vxas] _޺};ʄQqΜf@UcKTK"t}!H Le DJVaqnwumRTF~{ 5QHIs/ܰ/>bTpi7<ޘ8&,q uMr9dBV!nBLqrr#a 1Y:m}1i٧=Q}XhpxK|0-$=iI3{gu[YQI y/i}?VYxmpL/U^ Hjg:ؤ*5I^Q3;Q82ctgbeXdqڬFW'HAxfkE)hn{k6WpAEFF8YYsNiD>177] e+7埡u7 X>80۬A=n\@'5262_ni.gQFML=lAJ:{?bV TEK > 1c]"n<]%;:\ dLC EM?ԉ ⢃wJumk ɬ~)U$c}t䱇0I9a m28?FBz "&,0Ï3h;+afYBq*ڪbbw[> WDU%:0 ˑ@ w2_?> mvL*fl21# "X,9{]dG߽`H$l`\eϹ[YDrET.9@KYJ5 Q7/Yr^[j$Jߏ^(uţ[jzbˤGU\?Xqr M< M ^7IV8)*%g^e8u^AM柯eQP2;Dϊi&23tqeq{#buAgRRZ0<^rYS+2M&A|c~ST뗐Dm,rStOnxepz bs|h3L݈;y×s-T3Feܡ;nZ?ָSq a#pC\ӝV즯ecפ_nD,@1HVeLA8 FI|@OrlC'*~;*Q5b6 1k<;wr19"~Lu@硝vw| g&Q3|S/IM\klj^- t⇌I|«L9y[pvJgtMm։(Y1}’kP` F[bHٳzLe\ %':br({5O'Ƅn _9]Gt9Nh&UF8ssXX? YЊQ# "i3W] sD8 -*Κ(p[hF8X\BpnZRar'9ӫt:\NX |/L1= wTQaM̵龅{9AfL" @% ȵ`M5"ͺ)\`|9ɻO;o7i I ?t&iu?ѡq%/[P7P5U-LlC:\i݉ؕU4 -ʇഫ}vZX;h?MsXX]FrvJz,^"[q%[qyN߹lUd\mgVn 0fMuKaA,?<YN D)LJLO7[nGFh)}e%sVa3S?Dm2Duy*_iQ 2䢆qׂI[dQjr1PsNH< pb8?%i7X'i4wGUcO~4md15d3b}ȞtiJ-9Ѐt˫d$ǃHR {pC/y9A iZ{)2jC=@~H)m'`\IeN)G&&' `LԖ~<'=q0V}S^&" "Z9@2Tq>.>f?PO*5z'm6 Ά(lM}hzHNʜ)r4.L h(`Շ?owDAF #՟]LP~?ZYAbKi= 3;S,(KgsO0A)e0EŶφqsSܸ$FNs=m<,-22rl/suA]j#sǕ~S`Mf~o, #E ( XguQִ͹XS)QA:UV5vP !ԙ _L),Yp_*p+KI<+F T> \%rP)&: lʩ*E_T^NFWeqCP yAS9){HLl֞ ߝ]*^ܫ&eQ.,Y d,D ȃUa!A+TA;}۪˖Qlʐ0TUk{s[&xo"$Ĭ{}$bl()!k#i >KuzH]3fW[H~0Ľx^e.!F "vkF¨!W )d+Im!y3`=}X"D ._%{O#=m c0Rh 4Asl!+ǟOy>+.2o U܍0m @ %[n0mvP)f hȯX؉Xnz98ED8n R &w:Jz<2?/koDp$騤VFH 9Ѕ$?I.p=`!~ZJ G7ʵ"W9ʂS!ghDxnSo8m1|ډx SPIr~B-sjjUVIw /{d&=_f=.K<fe x2uQЊFMdVMnES9D\ae6~j[f=taMSːmɭ/*=*Yݼ~*n2aA77r _"dE gac6)яCwUe;ᖰBcOo?s:~#2[^J"J֔xdڼdi]I7#!sWxip_;7s;u^y~t:w]}K˟T+O# tTX Р?8h¤c#tid % :]ƎZ /CS ^$E#o8te=( |MՀ!G B]vgsH+UT&w:{RLgMrkI0`3$2; f*k).8n)?B߁ ›M/y @`1ԭ*k%t%oM+Jq WK.R!%+.q/N1ӺcĸIR4cXwkJF|7m)S^U2Sc J;FFu(V3#9ڔ(jO2 \o_ ѓmAJ"cz MީU&Nad3 ;PAoWq kCg9Fda !: a\)ub4z?(G/q> XkjuFRAT̮D$}9mڳoٓR&ٶ^ g(ރ%*9yE,L/@M[=գcUJZ}~!j۱Y`R:*WYg6ŲAU5uPH9yƎU kS'C,QbsQK O 4] 
@pChQb3Ozp7cR'96PE"kSFZυsR).޲pVح|ߒ=Nx+$>v BgTCTet ApPY7).جf$37)*Y?`}y?H`ḵ@AݣYq>'9*o@ " n4wX^M}9@(M"ܪr 4+5jŲ3`w؀'p80:^DIHA  u5'ճlbu{;,WY4-}⪵-K(x;cU_*rQAAW$=;psF&#ln w@T~dX=EZXUqDO)ITsyZ}[H@ZPVpAi̓|bi1|\h8LA<Jo@JmC;'4za2o6E6\;&{F\ljc!5/i;jJA]TzEa/~W-;oMIg(|zY C6Y5yi-v6#Za.C?%ʺ/YS ꎡw HzLNxtsP`W:1Tb㘫>ܹ$SAi<= ,$[T;'`TC|j_O촪:<M=Ȟg*2:seRІy!v3OkEǫd66 뮿Iel*sޚ2ܫ`$ /P/YxfD% [8pp)=_MZ>z itӊ8]}% e.6 UucHSOr(rIN LU/ٓh&8;!YTH`hoAvM[ 0M5`Mrm;a].grV &(vo7q}שK]wlZ$:܋Ƽu ʳqDEpIWĐ76q 鹀+$Y=[lU1 4K4g*_ ]Ww&TmE.*t(㭀0n3`Iڅ5\;'eWkjە! shۓJ̇.@[KxF=7lfܵ0q:%/8hX P=9 ((0!VG`B\n(}F6F<[ k:o@|hUU"e96$&iY:.+C2[Sкr^٥f%=.ni&@5 ]PzY)!v2~_+"gD eOmP_}N9˟MJL=":vh~PZueW FH@C@gXLEƄܧ\{#*5 E9GwAPfȂR; ?6foJG ;Sg;ۗ47̔Pk~lj+pGGf$@sGVdOeI)]9V=T$`916J׳KaofoJՍ1# zVeЫ)ȤŕD)}LFˈԁ q<샐i7|2 ,E\^DLx2.{G#l Dc[T^ۆ/~ưD 2) <:h' p#fɆj;13m3D@5oߏ`2z)R 1jv?YJe~(D:H4:[EsDl8ɏt.Ly)WR U 8p 8" 0RkYn`Ɛ8ᤙ;U( _D}/QJܱ~\~H5Pz9rjGe ΐ^6a&v,Ws2o9g0ⶁe]9hKGE #K^O4%W& s icaH)}tq&ɫnM _p<y+twyG=w*e Y'dHJwo?{KvGz[Emݠ!.R,q_r%m<̏ !jLjRƇCPÝp*W H$۱j{x*4k6.=HM_]? ,(}uTks5b %{*]5l7KR) '+RT*gZŔhDarɣ6#<Vtc9"2[0rȊ]IQ @2<QN0(-G^o&7^M3Ce%-tXpqNkn`M' mӳ/M7[q9y3IIwj3'Q$Tb,b M1|M" He!m4HФh 'fRGef sWRKB+BZk-v Zd%L`ړd;>qɹƆX hvm^sr1OUC @[K t 3hc\'Ͱ ӶݧCu9d[|jx,[Q^< ڎw>P؟7hB%R}}"1piL~C^vp6 $ s^^ϼ6clK%Ȧ5}KPCܿ$KcwƸ4D+ Jg;02Htim_Ik2YiޭO(qINI?J Cdr'6sH$*oX3yHk!WMf!ggʪ/ccMB٨CRs8nqd' Uֆu 뒌{+o틳D"  / 4}֡z'LK"5} eM²ΧB@C(bk#K4rvbƓrVU\QI[3`}vPQ]ruhHF@BN;S:}. 
s4Z^Ϥ{l4g|/9DO<%b7d8髬 ǓRK&e}-jpQFy揁-62cn*ckUQ"X;GdF9jDLp@ž1Lb%G/@AuBg7Qm!o0kS|xѻHho_X9Q( BÄe`2<أy9_1f؊ {l<'mL=ǥ(q 6Ⱥ8lX*)Cc8^|{ѕ \RNopm6M2*v;'fc0v7PCG+xɿ O2\*x= FmKʭh'ӹw`yƌq?Ұ6VyGrYr&x܁xx_A v0`ag7y)L= *XM^0SA|5m ׻+^;3(D9μ{sU١Rz{)EjptA ?R+j?C[ϑ"7U\R*.%v Z@Я̌u$QOek#S Ju`9i֬rZu5W=n_EfalJygG0,;W: 3 1>t- ɟk@a\ .H^:s|[?&If |@:!3pK ˗ i[IYA!-Gwcm~g#JEMo` ZTੁYU@?.Hzfd;h7e\}|{mUN?|yJSaD//hi`4RH|,gϳKB.577rg aM/:]`@,OJ}%@yFVn G"jXh )ho"1Ya}.\4şh[Ʉ4_TCX+`]nKePqC狥xb'S)0t~UrK: &xcJà Nʬ'IrտNG <> T RY?XD<֓{U_Ȍ)#y)+T5m#9C8BokpJQy=^5iZ-*{CA..1D #^%$`pRҡknJ~{i {S!K7T'leN #^{XdXZ4 2ғf GΫR YYS[q@ONl(mRRxN$Xe~p ݟt-v/C,=HGQ r%z_ҘBf,\1l^ƏRfL|5&(u~ #"wB5r;~,R*Mr<uRtTmH^zdbgoȞ3URjo8uO-2ߛEOd5I2M0] "SuUަ4Hh4<k1v Mܐ 4o%d_Ru'W%<j@9zAlHvN(F0` 2,f]2 z%YK.|ڦb 6.d?A861xy^ x-Gm&Įy }q79Kom Y f.Gq La.!ǁb^ JjZC3DʭZEmbG0xK7E3)|~>,Mb>*%2*.$s_\0/m"1{{WK!8 ɒ}5X+ ڭ_kYO~@z{ϗS =,7ʒyN^"-OYinq7cwY$@mQV7N%M|hd:!IOѽ7`MÔ7,(+hpaOGKNV:BC@~ۑoIMΦ=d83eOLƸ}gbai wXWԠm5Y6/DvP]a7 [טנrVF;QX:d̠',50=?7P~w͸"ơKjhLakFFh{-[zyi5iGxnk4U`#Fy1}=\*O)|A f/,9"In'5#nO޺WbXְkspF@+)2OAjH8+mqnuEv9t_iYV|}3E_3ڔ'TЛmӅٽ-dcQ9ZQ6>Y81kH' tx9pes| -I[,ERFćԱ:.w0DNٓ!TEΈp㠙'R\ i$z*}Qlc)+n) m<.F#f{nB9Xy}m#\h_hj$a̼éEuʰ,=lR^i"_X>rksN\ӛ[WʜT:2YjR~!؀Fz+?Y"5!cWˊ@md7W5m= D_ouXc}Ա-^Yy)&hm |buwWXu `5sv5{pTI$ Ϳ餗m.0l틾}w3NHGm[9Q$}վJ]1zh%d~Ueyt1|f!t'Γ`n-Sҏ+K~3܊kCc7q<*ECT!{dMY<vr*%*'X,,XJoMEv#.}" ǀ?=dR]LN(a:w+r nvB(xV*hj u} Aۋ^7OI|S=}$E|3z)73*h臿m} h̊U)1*|9mb5S?Ҩ QnI M煛@⼷\,#+Ҍ">67= zeIQ|5O}&P@!.p ɘ. 
VAԭ|_j# f8ґh(e)%Un3זjM浄K€O]+4iN1'p]* PY4,1[neD8-!S0 :a2Ҫ$e ܊QڷscW{G_.T1x3GVӸܹY5 уp ԩd&ER:$W}砡Z6!l20R4N.4@+xx#Hc8Օ=](RJ45`RigAG ~Ir_1~(hj4K@x:B A037"oӤ.&*KqNG5Q{&a=`;_Ƿ߶j;!PW˰ gi-&\)d,t j@]dߴo7x& IP7tz#Zm xy<;]iگŜA9M)eJs 'EQBV\ط&/QK%=hWH'F y͏閛J[K^ipš: FwI5ayx}=qR"YqvO}‹w0?6>^۔qzd/gMvbazUJX-nyܣq&o$ءV"jTm|wOSN9]rLubσ >PwSt>aPpd߅TA⣓FkC4L7~#MPJ(Vp=4V Cܝ4}txRB2lj'8A䈂)'P~hV+D v$c(K00ޭSZf*T(Vz|fLCg[3e ogtˆl@XTnGAׁ/` ++^/zOw:HF0zX47U.9 Sw8gb .clԌ4P-e/)r1eԕLB&՝6'hY;pDX{8Bg&JvR>w sC|q _H:w}d 7,JW$UDS[WDig*gG&df);Fwh {JF,?4&jd#NBDdKRaTge{`-N _r"2"t7#|y}Q  Yͪ|+!:ǗZd-]?sMo5~k \Z:8 µß</fswuSиL,&U~>%{ʎb{j^u6y"}BTi_ L>SYc ¬2Od ]vBdTQз\EڜL|$8Pf+5 B6QcSV]RKdHD -"V(cGត5mۃRS{E:q@}; 9'IVD* -e=a-,c)qNHj^̜NƿU==) uc&˒.tW1r- &3!Y7q7p0C\ݟ47n' l͝F׾S|hތU(T~RlXz1r̜'GNG},!-.;xN|k(2&r=SKP Ūm})9UӮ 1MLɿd~iRI xz!IbP<YoX LM~Y PfτHQa'M^q\txi}.z"C:% j@)]՞> b;.OEBI 딺VeŏF2eOzڕFglۋմK; x؊*\mCə^l g\ӲY7aꝹ,SìŬ JC1DFGVzDF>P#=SS>M7|OzcZPaZNs ABhZڟƯ4J}I7ZOϴgz7Ks}#&Ik\Z:=B]E]ØWϿww>ӝ @ì ۀT&4:2pCD(&Hf B>-D^'/!UQ1/A8wLA`ݫ\6!~0D'U vl8hH)L% '@ ݯvo'ie_4:91Va5?B Bg׮"'9'ͧu?$ `.CRe( ٸd\}bk:eȞl$_"QGy$]`foP܉MLλ1 * F`Ȭ1+/NͰЃ|ҡ]|+9 $Zxj2?G,D3&U/?aijnuVBh*}"u#+ͮ#df*31Ni0z:h!n湃s0 w8(#Un,ՙ[F0WLl8.)u[*9mAW_sJVѵP޶u 1N']bniS`$U9X܅1+PuO#eɬ^ǕmO'BAYj?Tک`vGv5a-W IjWȝwb ]2[MHS9h?ݽ%cLOlKˊ]u"u t?ŒzHrlHRf9='.A㟅ZQN["A ]^/0ya 0 vbeD,;nw;l'ozf)b8+mRT#|d-Q׫R fwަd :5 w솟Rpv\ղ4uE5CP)A ݲ[g1)IڛnDI"|F('~:+"Ky 2#3;7ƦO+1j)ɗߑj<#rt4O@D(pIEVE<#nBU%2\Zq'=wa Us3 Q_E()<w0EWc0ʜzujOp phMPOMzCGQl=5Ȳ.E@^|mnAzIP>5#+TՆB5MUK.}ڔE𗥥]H`#RiG=|'ä^; xBl*l֞h䇼 I5b 4J3ɎFu+>6|HH}q f &.-Z5YیW:.BБ1Q?2S7rdFIS 'g$O)W0dcJ=tH菴_sAГ㯆wxat1wWb+)vYÿ 9O=4+`zjL&i+- eVG}mlL<[ܜq ̎jh/dMfXb3A޾tJc"LU%Z`K^7z1d*o P|}c(uIh-a EET *oŠ8HAlHmu[lf-TBV}Ʊl' )p9d҄dHtI5O6OMXID6}s=\p=S[ÏʣH% lTi<2hEf%f) ܆aT+xB^fi̲u&`5$-݂?4Q[ T0kJ+.dZ+mdt)d`$Mt c=`ZP7XP|x+],A ; X{A|䡟2VU$! 1dG%+K}QCUyQ;j0Q"}T ݧ9mj7:qakky׊Gu[k4Ƨ,#&α)TJWy/@05J]=O N%JIyF_? 
2w#ҽ\+8:AԥTHfF\"waCWM[/>bcg[ֱYa8W^0]1=\~+l 7#+ϴ&NcMJ>qxqWodT1}vF~0f]=Ck b.\cEѣyh?_)[aҶ펡kPQ0y6#:O!U+04P=N-X_džJ!d<"#xb`+^Mfq=ΙBI̍1m3V- C`B'6qg%&rY#QޚeFˬFbQ#0MŎv#Z-b"5bЌ+6åC}ƍ4_7`q^ӻ=%]dy-g|ΰ8-z ߤ ӍO_۾\P0c}f",6sR YQ .ZbZX!ara0 ?f0b)t}dJ egs Gqk>TG_-akJۜ1^&O [QQy9cdQy<۲1\^!=4A~` '(n$2^}UO6 p`do ׺tC&6)+qƸbf'1m"ZK[d7tBA̛r q 2/-khr:1MDf~W& rb,7qiP$2j5p(%MrM*XJ&-6,k!3;ضAViA7;^hYD%óx :N~S;J0-CtB8TA1ٳ}WzԧBٔKe. LZe 2K*К`JAvC UO!]&euy0Xr}ge211*/ţ/}d5=wD}W `͐b}X M7D-w0-T}06<,V?ѿS,T'0ILj5&$hDπ&#zUۏD+Q>P /遱N7ꎵ%s5,$d+RJ!/F3H'8W-2)_T'hߨ薉"tqsrα?L0_4L*;FX#R K;?ab'c>[FO.\|;biDXZ[GtOKhma3 /$ȝi:-ӑ?*">g{v>~ßk-um"-N?Gѣ?{ȕ2خtFPEB.n_QKt -. ^a#LDgSn|IgT`@lu@;zmIH:qtX$q4m/y>gx;/+_N0'4SVB,oDqGV| ~j7g$2W Ej@tobYbW̗7$nJR,XԹ.+Hq5f@xE:~d lBf^=;;g7PSizalDŽPVQєwCb0ڀXc3$s䤄9. \^bG (Q<)o~DJ״Ly,ciTRŢw[gE$:0Aw.| _Y8^gc QbtnuWl0#`A:);e#m݀iR Dj1~RS4(ڢ]ɹH9-Ne r]31i2e꛴.)\@2QrB?Hd`굷h&h-qC K"تؼ`qP/}Pb-/^7L?Ԩ1ȣO3@LX;qp a|Ȥw:bx/Î_#{kL}ڱI@KfEج.$7ϑHO+ g+s>o0;*LJc;Ve-@ yRފ fXfg]ɘ9oWHv]4*qIЎ[DSM*+Ȏ}%`2Ap%p@폿3&$Dgf[ӟFEH ^~cwUჄi$ZJt2ag%(Q8f;\k XprAjbbٝgBE"3Pݑܬ!Kk"q%Q^>yl\Ukz[b__=q$$o:s%fTփ?ցR?_öDQw[EcF^ fHEfh,zFtcAf1M3$te;,U= 2n3'C `Đ"1^`i%?GϥEDt[Vv˓H|fo+\ؗ T(V'Ud]ᾼêN2R_Y:㖶W <~A}&MCpwr/tgjT >r wz|HQ02*Bx{7ehgc sr潲Q@BqlA::xcq5,Z?Dq&rbp ೬$bwЮw aFr]^(dF5;,_;_:Art%:"!|\54\(%aŠ!2s@̎_SJL;}Ϊ MD+]i3y"ڀ'?ș"A,݇pF"vfxpݘf_&V OtKtK~Njҳ:vˀaZeJj[b Q $b%ř-gKf ^-mnu=Pl.|>lXϤvI+7ԜN` Yq.KUiw(:pe1Bx ӥV[Fp~Htk01i]lwbz6Vx2!eM+Nr9B@m:3_\m=ul4gV)I`ǂiěnLh8ҟevvL֕ʔ1aEm8)zFJpsvh6X(!QB+lfzd<G'cB۹2 `xVf}R Z{I<;H>J? RDԁ )L$\q2 S[@aùx?5v^H |߾[*>VMHoַQ,gkіrF(!φۂamF:\ M5tPMkLvyX+[v/ BIyt, OPlYӵ}LHںE ȋ`mw;{8*_uraI ݼLKlR"${)Wf;kE}wna2~r6f`,4Hdl rZz+>2(FPx{Z\!zkޔ[u`%Tfٺ^'bP3}F{A@m3TQumz5bt'wؘ! ="`V~Y[9?,MV?zF}ˑ53ej>S6;g<]]` 7wSZgP%n`=(3z%?<L׫/mNIe>x*cdƻz&:\ݷd=`uUUT.yr7fyFAo WkIY^D `9'B{G JApSKՙZuU65j}+ @^cכ ok4E!&f | $6$ ; GC1{o *% /~.0?+ѹ? 
EďZT: *&eWTa3)0p^SantJ[\#޹ e;0 +h`ϣ;"5%t&FrgTXk֑\dja|&U!v~նɫd6̕[fv: V/5V"-T@RE_yvti7Ub!Ք5_NZ͕hrKZlV0H2AH,U{P8ľZƪS4vERp1f4˙ ORDC!lMj7Vom/ iO Qk0ki 3cH|cD_l 1P3^piz12ta, ̚v-Hs[| 3|1 ءYo9e5LL;'襏%R"3Xdfz4 sP2cR`qQ-p1pSi۹kCa0;vvlv:Y3`V-'F6>j `Dp'^>"p&8Jk'QGEmQXmvnjK.#'UER$l&4*J=J}fק5g"{(x9 bo֝:qDEY} {ӱr&R*eQZAs&#$-3Dmp uɲk0F!~j=bt]]۴ JlPAOhXa`ڀ8(PCBcIdo { V ;ԅJJ Bm|>NX ۏX|ww Q4_Oݜq? |-_<8?. 7?woGz L2.Ace.v'͜e K+xS Q_A۔ĭ?`"8WQNߖVy֘=!#a9e ~.9kZLK Dxc&dl&fw@6SѲF;pڟ%Z w&й(pYśk9œp.S JPYiFs{˓olHEK=IBYUK]2 \x| Jy o;Zk;9p{:̼mBQ.B<J,ߢFpZalǔW@^i1di4^u\=멁 ZlUqmh#Zk2"JCrùلo@Ӛ(䷗suWJj,)v&\"ڤ)w}m#wZ4'R"bgOt(qt 4=I'%Q۱oԤ!Avb \VH NU>]Y)j_;ʂr䰳U!Kgأ*QB즍C.@X3?d/9j=@.7=o'ml,$`38M?o# o27 $D3i_R'egfT'spcDL.mp4(ݍVO\W&ѡBɣ`i[]Ϊ7;pFґMsqfH]9᪔eC  * P\).0kk(w3ji#\Dy}YکJЦ22ߐz `7~#%x,u\WІdrFQ_Oϭ%1jTq5!\%Ph2 khgR'>gfb{r/B~reGit33 ]Bm-x߅u O|8Y+O̮2jޥ"ey[Ц,>]d$߉xL ͮ|blv67WuhWvrnOi!͡ `ܱхW< FVW&{9Bv!{tn-#]ѸRd3r .uZ-f<:,=@^L[Kf_Kd 'a,gmwvG%g@z[y!5eHEq~ cu|=/~ cl~Ieށ%Z 4<fdl"2 T)JT~xA8џg(1avlb̞c%6N~XRӓ7̣2^)=wC4:cA[̸I_ ȕ22!lړD~lV駡^>o;6~B]ah"ceMpˌ&FaEH DGM{9w*ʊ# #)'q a([XPW0"yxŀʐo(։$`_V{>Ε>N.kq=5:ql4H}N գYyױMZz-|Hd#=iV׊62U3gZY3Y  <%%jى\%kl`{5&L8<$@~<xG;jKGN{^ɘݜ r6 Qν?ڡ]N\ =`z[DUC hq>.F{{e8 ',STC@+(y'x\L {$ך8nnD«(P4IKFu䵘>3L]daytڗw2xG]p}J A:2,=r\wڞܪ-b<LߖSig00iB#?87C)/7,;R1}OP<J=zfHPo|,ڦ˕%bwUdaUۢ&J4fMJ/%߲q'~ay %[PZ\h{Gwۋw?ZDzKBq4kʛ)I/PM` LU ^ _G?ܨ^šizt5:aAFɇϪo*^D³x/hvBb&5(5!PR[e݆FSiW vyݍǖ2Ae+OIrMPYyNqR@ҪBV{sP,N~sm..w0{^9 w8r]1%Gd~8yX{'zFl8ZH $|WPk,5pVZ/dx>60(.+8h X,W33at]q)6|<K2/CRD[lGqn P]!xG1d0YEp3Ezݷޢ_Ú(p"ofU9$!P2k:ʖP,$'DV Qym5,?f; ~iɋ!nP}TuHPݤl1֕ cMAC e|\4'Su[䞣ImQjX^LrDgB<7)t/Mr/ّ&TV]Xne@L,{L RÔ3Ξ( *^ޮe*FcpF ݧijgCGZy0­ t^_KGB^Vr۽59M ,&a/8 ?(Yg+bcwgR3ޙ8CI\KYs[١[MXo$IOm@p5deMgL?ʐp 0e3(N=33aˤ] = IXki-@3Uݼ(|5OsIgh `S7VmozB|8/MƳbVt]XIXi Col&!R ~܏6^޴u%F*i\\Jϥ rʆD#$}'E= 6)MV'uI$*J1J$oG˸|81Ts6d%_X)U*+qyc&1?>5(^`֭ r%hL7%m>LХoY[KNc̩Njf c<gWȏ+N\ .@v;@:pXb˚TH;F'E8 (xNU8#8/YӖgaZsMYx.RF(qV%{QM-k]a.T?z\[\ KZoaW2B5EFu".wGa،&ܧ6FJo5$kWnTp:rEQCaoD:IcF#96b!+\ظ@q'hݽgarM>N˅2$ya RXY pIrk )Ai 燫=!Ml&'S.INjw?yVy*`RֹGQ\`5=U5~ 
D(%7Z}H`cf\CC*)#{6z6]4q:\ueS?5 WYBY0EqYf͞l.Ţ-@-]5@CC,} &c|S' D14U+OOkPܒa)P|0@ܩoZ+C6;3Uj6N)Q8Ŋ_ZZVOj^>\^z$s1 ͙ONAF!R{yoIXxY!芊ԛ |xoTNE=1 :܊:hlC;]LyWՎ3v>[6kAz{ʷ̇E^͍|?9I)oTuV]+KAF'j  /%'9WcCP3 TiB6˗+\ I~}D_E+Bα)"xȎF+'0ޤ%"JS(ISS}VJjţY},*z'M.:*";+R/3o;"(䉃DKT*D$rf9sMx!"VmP/-9 G˕.{ޑ.fr5'%S=TCk-Bgoύ 䵶溶9V OF)Qd;և39}=ft>σ!Y)F̏_RD3794G # i 80f]V}u⹫U_qm(Xaq,j~.Bx{W3J'1'.Bᆶ") f/,4՚Ą0 30k^/{QDͼŽhc?ig= 4ڮ#Me&xgjebCA9bLE 8cwa9JZKÎROqzE`7mHP? %P v|e;T@gVX)C% Nu3߀]-4()@( yNjg^V6 ~U[iah3dR o[zG6xzMadHEЮ $Ln+cw">QstOR-{ԣE[T5>w~(&G䉵P%S^8D@fĮTyMRa7b=NH c o;6{)$za`iKlAfZ)W{h7 YDyA5s㐯 E*k;˫8e((+Ǿwt~} LBԟ5dxmv̒>gA;{G⧴0>` 0co Z_Ղy@Dѿ<_Vb+P T 82z4Ѩc3tJVv$e.T9-Y ,_C!9J}a٨ǚc@ ,W$IկK/o(I>&zcJ"gksY̏J=RϙJZ52}:]w&Cħ OPh?>'h 1VA1ӍΧĸ-ʻ3RӘm}z/` 6Y>mqw0zݢ>v-|oԋZveZ_9- x0{HwcoA牰t37ӪCKpk^H (wuȥەfH|2H_f6hecy>eN5Ӳƪ]\ / )sجJ>f0 &oS| _/~+z}2sw3k&LxSq`<7*Pޒkwa]){y*Y$6L/'~_`ZI$ GXIąѝ_͢v$-^TeXx4潁;m `:V܎co<^"_*ҐW4i&F;V1iGj1)O3zb"yH`]GEeuR$zƤ I֖xx9_&zo[G8he ,\Ěh``XqID4G`BmWvx"oma8CF|q߃|B*lG!*u錩 ^SPu=4J JlU@:xCGNpALfvK̝DGMˈڙ?hwxk"OXq*6DKz ٌ/:ؑYT`S(rſ|PN连Sӯkpe fxQq F5,j ԌJ<# } *D,SʣdLw%oS6 HG,s萷@2>쬔-2jC+)|w>3 T1\ۢTJph{$!,ZA}ъ_;OIޑ"=Ijt- \QA62Fc( LgeL/1PO`#98L导mg5$AtStyM']gN4] <;u+㢳cNjHLuT9ES/mc$Ņʷ)7FEvy!֞\ZK\ekDTL&/QkQ M q J2Ev5:l9 |([69 9oK\88cv9~1~]O[:k4[yLΏ^7adߊH Bly Pa }Ei0Y/m\IrSR.9'[ͼCM #aJ*mP*\rĢ. 
YWٿx\7i7!:9EAN7 8fҙ,~&)Ѿ(*.c,GmS5MqsXoQ}͙[$vr4wXے\#<"fYJr5٭R.Wi}QL<˓~pu:h]'QȤ~S vޏiވ`prg~Sb+O2(񪫝%+˗)‰-2;*Hw(n\f-~H+KX~`2Ysۀ덍2:iTe_UY\݈1m.oU,|Os9#YbGnݸߪbC J^i <BBZq~G2յ/C`a D7A kemxxQsϽ.IDp7``%∯)Uؑd0w* crbNJ >X͍ew'ɻd_5'%E"AZU*H-^Q t~-4\Ћ|}:`$:zI5bA{6SJmg;YX _q hy m,˗,”ZljEvY*Ȧ/NG#ٲY܀/r'V_:΀$FJ1w+Z:dfhb'鎂vgUARPpնks%2.`ٱbcQRHA+p9IyۦskBEH L QR?r bprVϮ6%Ae)tQ]9~hd`t1A-g6|,ZO[DiEENm7ysOX~-Ls_[xpf"'& +?Kc˘@M4߆^Irٍ_R(۳<ڿcfnƌP)[Q.NRB۪cgk]y\Բ!qqd sYf3WvM'j <78ZLRPt^saMK LZ~%Jv+=)C4'Aloh+k1G窃*v.0XL$Ьb13pNx:MOY2wy1n05_´ҳ4$O̶h3Hbp3J #E51^Hr]ۼ BgN,& S2r&FGֵjPB&$;0vwH zRV_ؐTٸ<1*/7$=f~DUיLʱ 2bfjIڴ'5ĹxGe;9 5ݣg?''2mMR+3|%5@XGwo_7k9E--؏ư8|2QYQ.oɴ kq[a w-ۙ7JJ-"9 11"mRC,C":5`#Ʉż^KhBƷ6#0CѸ(tsPZmR"&]Pu`;O" al 9ϯν &vh)HeK01RVQ;=2 -*So፸}Av/\sdEMKt#m3ˡLIIO7 t*M\ /4~V]0'c$h4qǧRǟ!NAtbv "| 6 uܾ܂/GV|rW1x*Ԗ|j aJ|uX{sap|]11?"AȐlʚs//s@)#HR?),qM /ܖ/p]fD ;4._6yo QץO]ZC'wo $?} FX𶨏'Ի?vgl'K_GAm0G%_ @i%NmhJjFD:Zek5.Jb,]-XiEI Xhk`c,0Bh|~.jH) JQ }`bz}7!qhiL`{KݪIcF*zs m ީ=ꕭ 2x&e=>hBq6Uo~Ux+7.ir8ժ-1f/M8qKo*^_KrC`Fx;b_iX8py`wnԃV0g6NmB9 R@U)iY%;,jtDRsRL\-KakC8zTY%9YQ@Ү3&Yqu۱z'MߔXTVl)ߦ-\?ZPcXCP"z09NNV[&`t=$v}E_dz6VjRur薒hmV~zFLw"v4'hxu>ZRl"K 捁5gYxkD*z` P;E xx#w;G:[ш&W0#=oXVN0Bg`:zld~ZAFɺf~JL YLZxerHmJDТmbP>Ϊn6I'@?W3D+DŽQqͭuڡ)SW:d,`-Xn,0Ұ /\O*UYCoHI4‗z1p6{zehf֋ ׄnBjq1yg!/\Iz|Ǹ^-8 ,C/Ke\a4*Ƒ= R=ظ¥)ĥ"pdYrR*h-bzA8jGɒBeNV1Nn\[i>Ig $V MMcWH…0\ΆVIG's.^Hn/w͸yIIo6M,H{/ |]n!ŭΩ7@I!~B&^G[*Y [O]<?j2JXk}jTkhr"hqv}g|xkONܙLkn9_'mޢg١]/<~Kw}fW όy#,E&mqe5EA*Dn@ץeEIW%@5]s]ܦVxbB^fgˈ<6Av4Ь%)Z"+K[^ J(1F(J\N ίԨPlR1&D''+q3o%W&+5S$7.\b\yCD^oXs`w};'\X&q?UbR#}rUoc]bz0}AnY hlsBfH\놊tX ^Qr{#"8TpF[vBxUZ YB~ 2PڂG8SlFf/QK. 
i,h(*iyʇ&y"' )!_9Z[CD?Qr T4ѱf/p*QC[^r]4GpE㥩Vw0|׻g[᪽Д%ر9 EXC(`_>e}Px~F%Ljd #C~'?)XÇ~mkiGFjK| _!b3O>T9b2isHBB Wd_->9_Ak/&x>/qp +,Mx=ᴒXFWy@w4T#ۇ_z9nk3N pRu6t1ֹ܍@ϊ^zC?PV(f*oSR8\uKυA6HϓP h6O#b3kd\E&H0sI+%eѨVQ釅w˴.- 7jjaObbo3ts+GfvӱgG"D>^W Ճʝt0gH.)9pX8ݫ?kz2IUy L;*5\ \qkR0G3D6K FZܫr^#`_)i x-tt8ġ@ڥ2y/,vܥսZu1eYriXTY𪭮|=ʹI)R!E Y~Z~f9Wͦ Ȁ"]>('S+@2ɛxֺNt 9ō)Ro-FB篜 PGhN[?URch "Cn2A;m ;Bo[ =f:hpLMߝ ח3|cNk鮎1x,XBI \VNܹ2ہsK Q xVr竿d06HtXΨ<[cO0 19NLw 3S$k?[ &:Ih^ݼo:#6{ 8V&=QdOq{r(E)C97-:/SQAR| _O"F[f@6-@p|חYKV9[+!/HhR FW1x/`eK w?Oo7.ݦ BtE@{jO{Wts|:m 9ϿGʯ*Bg-sLF2l]EJw${]kl"6EOY$ cc^@vi᥾eLt $xR}SJ]Zec/*2rف`C-ߑx`N҄[>ǾP8FASfy@`רWE ~XiOuzu SZj0Hg$TȠxca,Y<@˗s5=2d@Q"Uyac LTZ&{0J'":fi؟3UT6Li<.gҌ8P,G&@5bV#mzv%Ժ+.?]t1W\"`i,@⩔N/YW+"0u.~XlIcfklnbDd_p.|\_4ybP`kWʼN./t83d,|=3EpQY ) i߫(%x?GIkľDp)"j B9dBVZLG>zXH*96q>=jݣra a(6u'l| T37`(f\*.:aZ`K^Bz*tYAj>Ͷ&^%*$ĈMM|(QoUH_FFeܪ +Z{ɼh/^8P-ܒjZ d_-sO/_uW=Y6﯅ #T۲ 7NҚՊo=@U&72LUXx PY3T $:Tz:DڙC8?!;bM_0KyFFw(Ѳp!x/O*}Z vm?9@le3XNL&f*Uo2BiG,F?A ƍřJL{݉H֠!׻Ve|RANFT (k^Hrׁ+tYUs|<rG wI G0뮹̆ Q=0 .|\*Og;,k%m/ ?G7 %d~2b`qxd0) H0El8O;Vd;d(3K1EdL,USwwJJ2e9B)@`qĕEPN.3q4qE{R->D?ىEȠ0Ӄ~1e[v{;bn{wb.ʚ1@tQU #%$?"v+[AI)ÀFeWL-h|8ʖ ǫ j3tMK*C}0gj[w Q0~"|< E_gs 0m#}xAk*Y촆ģCM `9#{Oܘ:\Q*^JO^YTȼ7[x%{o(TN8@D gG~cMTCI@Z4JМPS~P3πzQ:{PUӰإw8-6e@"uSwg(b޲LjY$q>.VݜQBSGJ+>g%6hLGx[EB̶iS/H]Ԡ(Nݩ-4}4;}&FRUCX͌98S=ľy՜g.a%b6Iz]R`ib];K GHC?$0Р kh.Ii4f??9y[ƖQgI7G4%}(L#H=Ո]c^Z痤ˢRľ~%269!U8YE5~d4g]^ Bby6 _6J\Am`i_WAR֕Q|V8%fp )#c,!39 6 Xa20AgK72@gD^2NJ].k͎ =-̂BB;4M@/zL`a:s_5t9_3-ΟW*75^:Pp_^BCEK{G{ḽ:C1-'yI#?F"ͬ2"h\@&޽~+;I}nI+PTށ6徨77\?xP[nnz %=yZ,ר8Ofm緤߄ (v3c)_P=d_ArˇʲuUAKkd|MF rV,Bs:;SEf3rbN/6o< .*/|}w 9诒jӖekkA3։r`aeꘞSD xS.qOj:{OlJB4܂SȼjbBiKo°tډ9a%jK3`=?%hB*@y$[L@MGf(km0A`+eShY<@ In򉀟Cxexqo{sLc&MVmAbxIXt!rOMVǎ*ڥ13Bz4v+OxU]۸.$l}/0:r}wCoU@fpi]Ckvq'pm]KbBH2}+DCU:H@û]p@{ޙ.CH_:u`x}٪~nyy*FG4fǝh \ wb2 HlEFy[f6OQ~RƦ\]:R_wL'{JyL< "(ˣ4 ? kr4:UN!5.!a}ݻ$~'zS-HyeYW,~vǟ4,jwHe:d/͑Z4[۠y2~u3Gs > MRxq7hFU܂hpUnv3r|C/(s;sBV"#=J{/a׈Tb/fS$$QCD.Թʿ.j֡. 
**yRVREv\ A9c@߅MLzo2@Vx޻NKJV:Y-"uJ R5!B؍o/yނB_?+gۦVC\ŻU7Rq/vIm[yۺ:h.C047\x{j|!c@⭻G T(cgcAxG3paٍe<я$LEV̨J0`>(87o)v% ]9S! )?CbBѡ5yhjP4_eiXu hO ᗎ2 k"xr4+`:8H1;[v%hhk)'3 BN3{WqO=$5$!Emtx. z )*UbC(ޗ8 0SFNgp+Ԅ-!m{-fol9)@WQ Ւ%Gblժ*lFt& eeC{`m/D-qt8N(bM[} gᱨU8̱zuJD4}loVg9%K,Ѻ2&e1a%[PnٓY㹌, O;>Hay@ïDgXfH,B_\ȬlAGn _0W0"1b%ȷm9VZS#zP MF(7`x&Խ/@,&,GF-r]n *b q;Koj X fR0c]*3Fx# '+jmd(inv2>`bw!Cja8y/ZҦU"2oPQ}7RܰǬ| dT $E̼,.-T B%k%cZ @ڪqa FFjbJ>a ꐛ!]bXQmK*w-wH9"cLtd{JrA4˸0g\H]l$1:Gɸ&wry,R$]6c\76Ri4aI y=fc%(YdUyk94徔uVKHb@ً',5e"hMML D$v/h#dU1t|?1$ s;b@Xs]S?؟;AEd8!~K!ዑRΥB|L٪[4 OoiQ[q$SȜ*墩T}kԹԺRp%~[b@t;Ϙ9U$9[UAɄAҷEw0h9 #$m 7m 'QԡYjwb3 *jxx%q;}HX: 'a\͏V͞yX }y\'<.T<7ثAt5#Ղb-Xu7m & 픅qI/`HZfx+1(C+\YaC`k"hra:B&eh&i̎"gE7 :#mlʲ$A6̒T&$OK=Q"lIztGISɄ#GNy`i!:]ɔaOb ' Fjf:R JAcX^!Ń^q4nRLHh 榠CNF;1V{j㓺ཧobmEeyci@J]qݘiXԣzr0;2L"yI~EB;f(6S B7N凨s=rv ba;m:)|HUz*x5~tRT&}P@- W*rW><``gߒh(GlPUЎJV|hYKN$ S(t{8.ehz1.% ΧzD:i"UD-XX3RT#MUB иaU#\JC`]k_6\ —'c+L"l/Ɠ `:{UvƨBY韑|n p}BI($y;W,f!xMrBTVIVE+9Z;*doCLt({f)M]*X ̻ލ TB_!>*|Uyx6͂7+⎏!&"q?<1}\;dTqoWUEF ,#=Htmm`&˞rIV}?j;7M?&ru MF8_eV!e v}211v;,( n}i!o{76;S$>L`&Z_{njh$/@0)^P }\^b\LIhJz ȿb]O!2of0SXIW7'*:~^,ȜhʃsJkIIjbߒ srǽ;}V&m9MDjuWw!^8\VuׁRF}벨;=={L XbJŝ=u0'LODl#`11i,`lTAnJ׊& iogWٞ=zjBVs1 ~UL&\.TLq2prQJn˰lςlnHBHCL!l0A4,C՜d"rA4`<\tKmn+Jw4mFQBQi`~̧NSSϏ}1 pP ,x-DSXP -À!()6aĞ\<"E 5w3̒!== -} ߍ͋{56L`rmNGpcBbk̝^Նh x"- [S(8cOjXlj|SfC;͆}eC>9`7ZsC$'IXǚXΒ G'ސwJdAVSE*oK.`jVCOAXb6I oҠ9ܾo5,,Rӷ ^uywE1&uv$zg{+!ZHy3$8畳qEzZM H#)5 ,z)i6RN ѡznb?uj5f-(Ou8iCcl2gm)6FxK^囮Qd qdxqʨ Y`ћ8!7d:*Kgَjw!]$x{9iW7?= +e[{-*m&&<>|XMiPTyjCQVA ~ I\^Yx͒V,{a!dگZzfsK~LGJ_L9?;ۼ~Qo 4__9 ~`v7nsQǤKNyxA4doLw9-"Wŗn .{\E״wA9޾:KRԝ[}f#޴R=#Е|b؏ ~tR(TObӽu4S2=6q^2>T(AF0$b(2XƖE^gGw EO=p]Ϥ3ݧծߌ=),DV9fCQ -7B*<Kl`4g9I3i0F@4\N͓unh51St"4XY<~~@+ZIyqLYYϠob{ rw!͈2" (`&I}Ƒ)Pc[`oE J`~xڤ${n-y8 *%^ebics S!`-gJ8R|QsZ=U|@`#Ib鹜XF0% -ՈWyp2 G1-ɤ U 4Kw7x/"Y<][],J:) SGopM2Z0yo#:O R>m.~6i̇; enFaB+G3>[䴸X&>P]U6=b,vPAͫi1,Jne~Y+ ;p`jNqnq>{Znh`/m/cm{G7fyϱҹʘthYB&%Q|MT|c('BLtp zc;L6c\2x0*.xk(}Z(ⷕ&CxosY3d^q"":P% oYe)źHSsai% #11Y0d.PAp3(\DV[VRFl1bގKYr: 
3(e"f /gNuwe%3(L|`+SMfιp1~ ,2+(KXs $%ZfsމBCY)]>!j*nU|V]8'ERY<+9ߤHF#n!MS 2YY0.TB>  YmecKxS7/1b fի#|kzD:&%kD"~[bQDdn*+u-<{3Cg<;ȭJa) OMn'kR}mA˼S[q. ٸ+}?nFICA>\3eM =(ۯ1/~L݀ԣ:`0q7rdڟm*g8zao>4͔*cr-`Ԃa<aMoS:/4uּZj>P.xTdxF297:$oeC*@*so[Fx,eʦAﲤAcq5CՀ0p ~y,Ƴh/vR >nrݻ Z_v0_\*F5 “T.0sZO,nxu^7[QDm,9HI "cw=L~؆'}2E^t2s]NxN$T]tdj͡lR]y^usD3%=qeڮ+o-{L39aua}KXv++K$aahE1FeYyfbҏlJ^WDƘ3rD!+|8NL Í)d!S`2X1HI#(|!@Xֱ(\;PBdP*I. h"A%j œ (Fw&']| M i&#V'?e,Ƭ|a8d9{ś ,F!CN 30eSf$͈\ȔgK Ƙ+}ДD-#VBE7~]y]i;g0='cUM.E:R`sXYMEOp9bܿlZF*ƶ vD#l9˘ |yRHPƣ @p D/ըrڎ>mi'e"Rjp;JreNK+"peFyzX뻋m4ŅӜ ڊn,͑X@G"e$0y}ŗƩrN j;M"DqPK뭤'r|j6yˣN fH2hoqJ$9aEXnvz{>~pO{w޹fG% A@\R#3%wtx3%.i[s·fipĬ#,}VR p8?$MpXcXxEU@jlA7<o; r_`#Bd`ȯؐ<kvKyXܝPدNdWܼ5nNćܐ\`˒*9+~ud0;8@ȟۭbU␾&!w^ԩКUޅ*sӓ 9;;bGʖHuh? m}F5:!;c֯>+V0*4xU]\bN-X el Od~/1 Z-ÍKiǨFigN)?RqTj$,` )pʂkƷAPԜbyxjO˲g[Z,9%!lfTfbkvFxNmNyݸ\O@%݄06[&zӎ^u\I⒠e,jhrn|_S[~֧%v^= ǒZ EčED,e1+\y;- {{ ނ#O9W)q2^v_`qb-%HW^9umN-Zi.b&H+*PMCWi -zo/+:,ZB}vBB&KO( FS2zhg7M/vAW(v=zw_ y 7TXġZbGo_"b|h9 Ҷ"̔;mۡǂHdBmH+jQjIP_ Ȇ.}¿ABՆ@ 2yN_D*-K]ow;?C~=OV:6 .mە6+⵩[n)PBU7gt۔54x@jMYR@5s˛AqJu5":gJk/O@vAC{??eA@c $cCEMz]|V gv)˱a{rs|3pn躋. uO++2-]l~KRS*Ƒe_>2y#]#rK7C˴lO`+ d\V{8ugZQ n8NL3GZ4P㬞blD~DHqTˇ`9<=r*syzcʌef>~l$1&P0g]z\e܆\uBkI濅۹nf #cqN]&4Rj@Gy'YHg8Hk+I7M !}__D:]ߟpd_kQ_mbaֽ̼zZc߶.@{:ź4#W0 Jm7=6&Esfzp۠ FjpRVSKP50S);Fʴ"xY".K1y~ %9bb4>K}96|.-)x$'V$u Z0Zo|kFPa$p8`9.2CA}nH y`&5G uj!9By&mtA5sԣz33(,cȨc?>L$q Aira"+DÙf NRG?]&][y{rO8U1rYls V zg N =: A[Iʹ+l[5$F5M o DbF,WJVp# "K29 Au!] 
|OSeP#>@CvR+]W-LJdZBCo;,ja0+z=@d\u2ޑ9tV?n7%OT KG߷OCT̐, :Ǫ@RVvn^T)MX"½J$sND?rp VjŃ2nvNeQX>UE4ۤQid*E GY@~d̔]SH+psAbI^y g{He/')WjS=,V%Q޹)-w#./?'+jIHq"X2򂟒GL9JAw&:wgO})" Zj=uEP;c}Jy7vήT(^uT±e'=,Q'gv/LG g`2 QXpa7 tN}'#ȫw4.7B,DИ(#rJC n6FnVعmɈw[5zn#gİ6m|[`DA8$CyL4.G"`#`*) hߍMx??~u~xs?M!r*Riy□[>EС4ۭ7'^CKs9>SΑ%rS aKb*ëx5>2c_,N7PoTJ|iizu̎߳3)ghʉy5clR 0̇\Gh!SHޣ>?TB*vt7A3DHhɲ*X`Q6A$ڒktdAVL`o%%{] OCcO_!R(Ԕ5&FZК씘"Cop49E&x5UkܪF4g|jsnp#Kpũ L̉OߒVz@i@rYS}qw ԴKl8t|$tkz-shGى| G hW  =EzAύPGȐK'aDPޚw=eBiLl RfF]:ymB#-hDN-mCzYC6r>Bl+ƀ_<ԫsx7yo8rzx XcM%_ x=y^P%3>J _ v&^E )Yg PsJOЁ)[NA+~+j '-18:\tfY?ݐ?І&#T{ܻ_d0W?8QkX5풋5uP1P5y8} XǑJ _O_`f,]) 8ZMAAXQM06s'ʳ0T08AjOS\6ֹUA!W֗ C; Wu wo%f;$Ί @{ hFJr!+~݄TQ$jչ0+qd횔_mƻˍkl9E٤3tŞ}.F2TE{D`W"":7P\?@,VļD#ȹ=JrH< H*Tk@} LWU{̣.D}rebXњoÔ)&əQVfCJf>.6:զLfҏO|6 ۪$=7#,q)]^s1„yP͊BwSxTnh7>Mu4e P$S^HV'+ݬzi07j^EA3踨o~E3Q8sChNrNR"#Y&&l> 㼤Ewؤ)w(xK]P'//?niVXm.=G_!s}K+nLIftx;!adŤ(+Bl\ U&+ih+ i䣫LD\2Γ~ѶDcbl,9*npHy"w9S|F-T`E5) !6&\sd(S[JAM[ae>z=||zģ),fM2K0UJu*)o#xEaFb`w+Vb@~_A#f \?&?X-:28 8w4D-n5d遦s|渤@p:^j;Ĝa3K*8{C]_/ww G֝J]u׷>ګ14F.a#By4K F( J.eU{!PҎ|sx1uVLS,ǭ*Crl~n!wڕFxBPhn6^KmpԹ}OՆIDo4솇~ĝٚ n3G3ć#jY]R O$18w0hylh\-⪖ȋG;X7R\۾=YJm⢮}Zi1ȓC94C[#H&QOaj+6ldW_5w\_j[u]Gر|$E z9:S7> BtDU;۳m9raFqʚEKDq^ѸPc[;QҔDbЁ1JgEir%y,/8l!AlhM - > &Oϝe,Ex2ce3IFlEa 9>[tYf>-͙_|,At<ǀd'd|7d9ujgN|bz'v!8Cu9iQ~e~y۲ kqM4qCӱQ5ؒ-_)~lp1,(_$\:ȋ1c @I#=+SL.WGi˱J:30ɠOVLbmEcut$(GzkOƈ6+q@I_mLw%^F "<84 ~Q=JWc^? 
z!@>S2XuiLL3H#W  Ssj Tj,K"[uvx,d1iO!%޸w HAL"AnXbqvd]&38MvC|d >ݳ,$ XefSʘ?uYK;P7иZ֍T)8N2d3EqN]DYHjIP,?DD$KboQk佅 {S죈NkC3Ⱦ7}!6$<)٪n7tN0jY2Giz`ǻJ]K &t\0_ORy?Xds!0կHst9j\c, iwg0 1ZBJLJ(2O҃\< )3=&h?>Hb1Yؾ62شlm*35B7g A^6%e1 J*::*W8=NaZFBdkSV4`#0b uԵSŜb n°'ՖIC2cLߩtTjoPD=2i~tm%"m#09tm{O:bC<+t~ir :kxec'[ެ!„I&$={E=wK *إ?7/PFG(+H¸½lȜ̂/֌o/6Ȑ,ׅI\bUiD]I@4oMj@ڟt?Y庞f:ӄRRV5fS"n5}kpj55vqEl${ ?G4%[mF)IM҇s vcq 4gr6xK֟@ v_5XD:=@#~bw%nA/ 낆沠k&hh=0doOTme,G&zk ;J%jޤLߜa)fGEN6E5)HrX(ůJŽJƐ\umE)_qȽ剟!-:7.;̈́V7r-%g2=i|e 0k)9D>ƀy;%ҹUb8a3t?X_'I2_(ko#a.Mrg C{`؎ ݶN57!2 o HC>*1ٯ`J߫?Dm7}RR7ܐz\VQ rHb Pep%/> 1ۈϫjC\99YJd]A/IŸj?qYVs"*50u]:N}X3C3My9*5|z&sh}VcxX*TIooaf0\V&@MŨ~!sŞ IK4 zئldѮbx}dkrEϚv o[BrIesŦlAR쨾1uuUq- *M1Ɯ5-&*xgLnS . '&j- S5a-(1ZAn|^( 45?6GRy}%+)+^dxXy`yPqwFU]q ڋٟg\W*֕C}^{8p!fce@g$[RΔ Wj27A)rԼWśl\'#f%CX@%n]M ʹy2_!M(}W05:aD_TFcD|3[OYR2\H iPT& TY?,TQ[q9u{ {p̺ÜCh{N#mdݧdH]FKp# 3EKZ4o#'ǘ80gK")LO ;zSaf$r]nhE㟕ahD+Ōc1< r'49b(j2Ø!9prc V嵳픳L6I+Lr> F2W@9|0̿4꿇c5+83G]TqjLj Jɮ#+#B"$*Xc weꄍIJž{"^>CAW&9E}rE-dmd]'.؇pohBCpKvC^rywIm4#i?dܱv)Q.PjGW5}+ p$͏O !JЈ<UB77vx!BW[ﰉ);Bq13b0 3 [m4W5PE!}R螑Q jK{h@¾]TUOa]?2hYx߁ǧaR+v]^8 wTfQdZ;׌tBQ;n2C/?Ég0Xo :=65^q.*po]0]=hl+Ȥ*ww,I!e{P*2"F) >nZRtXkJ"&}Ke25?[ilnGN '[Y)`xX0B9rs{C鏭oSf!ӉTpȔS;BUu_R抍r>\w<*drn&eyOUTi0K WRL7S9mDE T)FˇL¢ op1WoMy/P2#|P5cQk{m >2Ŵ5~} yQm/?QQC<.T i_.p\_y諡bQ!FuK׏i݀Lg- YB\z%{]˒r@׸}D k/o!փcwnpE8[s'oo\)P~j4Ey|"Rzٛ?"_[\0Gv와ا <{E|zMH5(exkr:(#5 @"hO8HɥkO96q<>/ &; UVڜ癳Ik!aR}gv[%Z¾]TCQ*=Lw#3(pIԔ|Ob%?`#1QQ0ɴYS8GT>"4 [Rvidj}T I/Od` 2yS /g5L*_"chĔ.kKvf"uP<㸒I9@R]Vo"br^֩<;^rҖ@52@HH7ӮN҃"eȄ>"7a3߼Mh:;A:TAN/;^8-LUg~"y#S0peDZs:kD~dB3>9pȢ+DyP@^}AVdƊ>^hyjjl &IpF_x E+e7 /O9ۣdRzCPeqȔ5o!(j(rY C1 7|)$x׳Qn`m/UZA ]mVeEQ6fq-8Tj'@ݟw QBHEtScV*hhEH>olr?B #_A;$Pd]YgaT_?|%cu;N,': _n81PǴӝS4_I)̦_Lqk0+* S_kb9 -\#q 俇ńi;~=o^{|}sQ;^NF(rs;}ZW*oklG$v㣊Ғ{^S="kѠҶK(C2)"0Vy>6Y3/cd:/ *sQ+]`EjXi_4Bng+M vZ.hPﴝMQYHt I2n+P˶/Yv6(uR֋^|D7DrY  7c%¾-P эhyqڅkDJE:T>g|ݎgpȜ>4$G[@k VG$+#׶ iH0 9'a8pNg";8rXC+`6.0+l:ˑ>&7h mF|n _Ŗ-BqчA0uHrg *ZDedQC 1n,(! 
*eDs AZ>Xw +6^,<ׁ彔OܪֳrLESڶ ]":jٙ ˦sC7'7< S1%?I!aјgJbd>pb70Qg2:紾o)@GD=nhSwQ­\'(6ܼ*Cdv|gz#)C!*DfCK`/rU*6*RS95 e8K %\Hߒj K*/F#QC,I{犬KRG[. -BQkjl&`"E5/4LܝQb0&Nes n8BSG2Ioi^ad]P@xп]x=#I)~shXƏDeGc 5]3gͳfk>&#/޾_5Tco- +zǒXOO}W s-|)x(3H2ӑˎW>@G5mi BDv- WHvq`1B["fo+0𵭯86 1cЪxpN\XI9 zWA{u7~67eRZ=V$zFA4/#j^ԓT$6KO{7,:<šQN{V9]kJT'm"{F1ʬYs#\K~Xl7R"P/yqj*Vc.] q,=V7"P!v>gEO7& 64]*35!1bYѕ5G:~A* Ktܿ?.|ĩL3&@\V-pf{;u` @F0'pe  i, <rMi=XU3+)Ռ lhi+Benyb.*"$9ePұ6fl"H6&1 j+$xoA,Y%ASAYpDiV̢z]*=IuM˨tB\8i swmȊt=Z{e1iZM( fC:u e`L9nL%U pEo;s /B|dM0IяE t\Wgj+6BPU zR57YR+˷Cch):nt*׮͒b*wP_+3ar8%%SSq18iDm=x^)\@_CIȵq%@?D2B7qK@ktܙ8\HƥĆW0+z6 A|o=L~TˮpI*Ev"=g뛰Ìx'-'ìܡ-t[4d'4W(z]fC3Y)-zvT%o݇ DH9wj\Ktx(bbӶҮ J#( ؀Z{[hTwa_'1f|U;BJT9' /V<Ĵ_̴0ځ A "!3Ы=La7ݦ$Ix0l9}D3 nɏ{^{59a jFT9L|\ɝM/<n4p3\%!o,b2f=Ee,EBiSgHq+acNaPTY,w5?mv=/\#0VNvODI=9^>D@MT.Š6L{z/:a?OEs J|ą=s)gmERavӵbE΄}e5lF:wo;<7: ]2zrSuPp]WYWY/]]?,@` 32FD6>n]IaA>{d8uo=㗹BDP!ḋ#rp%r/0\)Ql*wMP|]W^(wk* @}PXb#) E?ac[g&C4P#sTb0& З_1Bݼ{_k*<@/M =+a.CiHHcZeM}A(TG:dzt_Mnc{+ʦvFb&1ETmg"f XVMs.=^Dv5HWǹD`=ҵZq:H+OoH͋޶7_+r[ KewB壃1a ky5b§rk"fvߧq!.\*C|bu+b&(KC6lJxwh1޳W8{%eg'-nJ`Y˘Jו~.2bquBXbYm*c ){=<?k%/&0 Cv({;kM}ȶ^wPaV&s6EB-1&+6pA,^"AHcK@pT盈[l;f4ϪCs\x#dKZz/lBx;C}&U&}Oꆐ;A!NJlNSOA˶qvnfQ;a xXhƕjd61K3c=7ȦFYxep47Pژ :khW>}mȢjRtũWi ̿[8#276nqw,dVϓ U&o)d1kMa[Ͱ`+=/ɒ m3 vʒ&d96N0Oɳhtbb1!q?E|*>/@g4%g,0/~Ņ&*VSKO.0z | ތ1'v`Y #ЁljAOE ۢ ʢۥ؜~Yy N~ۡYӫYH=p|ڴof,h/g֏T/mv`,8l(459jֶpӷMEh0Y 8:U"%xJCwS[I`f؀^6I6YkWb-QS2~4 xI)>̀sOЩ9<[UeFFgP"1h 8Ppg 𚑺۪FX&Z3ᳱKxkiw5zjW/;7]|HN(9w&Fvs8 M_@fF;9b,WeWad(TeT˸AC݇9h/8rJ1e@QƍS7d̒S՘VQ m/!+x %Udk=+眤1'j!T#P~ Չ @laTR @lۺmwFv+h&aD*!i [6a"Go-[C92=+Cڔ(f:?+:)2wzau'5"sJY{I7gA5x$[y*IuuP@{1 Xgi.®;^O|kt,|?E[IXnN{0v|)"QDT'oQ;5^UmBOp0/ Z%N:mJ>*NsD`B(D@elvv& a7d7]q Lyߔ8)@z8'GULm1,}ˤ㫔+ `V^M& j rX^y\Rp;:5Q`C#MFTe*ڑ!rI՞%6`@Uw ].99^3DJrY"Z7cg >CNTUϛӨ|% 888yu+qc8VHYZSi͇t8at {6|\[kƷ6ܮFGU &q?evސ4n"h] [ *:UՖP5& ]C?"}rp#MkwoULY }X5e9uTekğ(%~1Ż{bB>x;]}F\AvJq&pޚ(u,M!B?wԍM")S 2]fx^w65`wGuTG[zi Q]MڌbC8:fkw/,S.Gyh1w)Ͷi j~{Y̺da=F~PM<[($z *"7ɕT]r*II5L9)=kz3OvΝMO0`;#Em:f_<-8 
[binary xz-compressed content of qgam/data/AUDem.RData omitted]
qgam/data/UKload.RData0000644000176200001440000012475013514360421014214 0ustar liggesusers
[binary xz-compressed content of qgam/data/UKload.RData omitted]
qgam/man/0000755000176200001440000000000013736636731011771 5ustar liggesusers
qgam/man/tuneLearn.Rd0000644000176200001440000001537113763713453014221 0ustar liggesusers
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/tuneLearn.R
\name{tuneLearn}
\alias{tuneLearn}
\title{Tuning the learning rate for Gibbs posterior}
\usage{
tuneLearn(
  form,
  data,
  lsig,
  qu,
  err = NULL,
  multicore = !is.null(cluster),
  cluster = NULL,
  ncores = detectCores() - 1,
  paropts = list(),
  control = list(),
  argGam = NULL
)
}
\arguments{
\item{form}{A GAM formula, or a list of formulae. See ?mgcv::gam for details.}

\item{data}{A data frame or list containing the model response variable and covariates required by the formula.
By default the variables are taken from environment(formula): typically the environment from which gam is called.}

\item{lsig}{A vector of values of the log learning rate (log(sigma)) over which the calibration loss function is evaluated.}

\item{qu}{The quantile of interest. Should be in (0, 1).}

\item{err}{An upper bound on the error of the estimated quantile curve. Should be in (0, 1).
Since qgam v1.3 it is selected automatically, using the methods of Fasiolo et al. (2017).
The old default was \code{err=0.05}.}

\item{multicore}{If TRUE the calibration will happen in parallel.}

\item{cluster}{An object of class \code{c("SOCKcluster", "cluster")}. This allows the user to pass her own cluster,
which will be used if \code{multicore == TRUE}. The user has to remember to stop the cluster.}

\item{ncores}{Number of cores used. Relevant if \code{multicore == TRUE}.}

\item{paropts}{a list of additional options passed into the foreach function when parallel computation is enabled.
This is important if (for example) your code relies on external data or packages: use the .export and .packages
arguments to supply them so that all cluster nodes have the correct environment set up for computing.}

\item{control}{A list of control parameters for \code{tuneLearn} with entries: \itemize{
\item{\code{loss} = loss function used to tune log(sigma). If \code{loss=="cal"} is chosen, then log(sigma) is chosen so that
credible intervals for the fitted curve are calibrated. See Fasiolo et al. (2017) for details.
If \code{loss=="pin"} then log(sigma) approximately minimizes the pinball loss on the out-of-sample data.}
\item{\code{sam} = sampling scheme used: \code{sam=="boot"} corresponds to bootstrapping and \code{sam=="kfold"}
to k-fold cross-validation. The second option can be used only if \code{ctrl$loss=="pin"}.}
\item{\code{K} = if \code{sam=="boot"} this is the number of bootstrap datasets, while if \code{sam=="kfold"}
this is the number of folds. By default \code{K=50}.}
\item{\code{b} = offset parameter used by the mgcv::gauslss. By default \code{b=0}.}
\item{\code{vtype} = type of variance estimator used to standardize the deviation from the main fit in the calibration.
If set to \code{"m"} the variance estimate obtained by the full data fit is used, if set to \code{"b"} then the variance
estimates produced by the bootstrap fits are used. By default \code{vtype="m"}.}
\item{\code{epsB} = positive tolerance used to assess convergence when fitting the regression coefficients on bootstrap data.
In particular, if \code{|dev-dev_old|/(|dev|+0.1)<epsB} then convergence is achieved. }
qgam/man/AUDem.Rd0000644000176200001440000000402713616054731013205 0ustar liggesusers
\name{AUDem}
\alias{AUDem}
%- Also NEED an `\alias' for EACH other topic documented here.
\title{Australian electricity demand data}
\description{
Data set on electricity demand from Sydney, Australia. The data has been downloaded from \url{https://www.ausgrid.com.au},
and it originally contained electricity demand from 300 customers, at 30min resolution. We discarded 53 customers
because their demand was too irregular, and we integrated the demand data with temperature data from the
National Climatic Data Center, covering the same period.
} %- end description
\usage{
data(AUDem)
}
%- maybe also `usage' for other objects documented here.
\format{\code{AUDem} is a list, where \code{AUDem$meanDem} is a \code{data.frame} containing the following variables:
\describe{
\item{doy}{the day of the year, from 1 to 365;}
\item{tod}{the time of day, ranging from 18 to 22, where 18 indicates the period from 17:00 to 17:30,
18.5 the period from 17:30 to 18:00 and so on;}
\item{dem}{the demand (in KW) during a 30min period, averaged over the 247 households;}
\item{dow}{factor variable indicating the day of the week;}
\item{temp}{the external temperature at Sydney airport, in degrees Celsius;}
\item{date}{local date and time;}
\item{dem48}{the lagged mean demand, that is the average demand (dem) during the same 30min period of the previous day;}
}
The second element is \code{AUDem$qDem48} which is a matrix with as many rows as \code{AUDem$meanDem}.
Each row contains 20 equally spaced empirical quantiles of the lagged individual electricity demand of the 247 customers.
}
\value{A list where \code{AUDem$meanDem} is a data.frame and \code{AUDem$qDem48} a matrix.
}
\examples{
library(qgam)
data(AUDem)

# Mean demand over the period
plot(AUDem$meanDem$dem, type = 'l')

# 20 quantiles of individual demand over 5 days
matplot(seq(0.01, 0.99, length.out = 20),
        t(AUDem$qDem48[c(1, 50, 75, 100, 250), ]),
        type = 'l',
        ylab = "Electricity demand (KW)",
        xlab = expression("Probability level " * "(p)"),
        lty = 1)
}
qgam/man/log1pexp.Rd0000644000176200001440000000146513301766721014014 0ustar liggesusers
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/log1pexp.R
\name{log1pexp}
\alias{log1pexp}
\title{Calculating log(1+exp(x)) accurately}
\usage{
log1pexp(x)
}
\arguments{
\item{x}{a numeric vector.}
}
\value{
A numeric vector where the i-th entry is equal to \code{log(1+exp(x[i]))}, but computed more stably.
}
\description{
Calculates \code{log(1+exp(x))} in a numerically stable fashion.
}
\details{
We follow the recipe of Machler (2012), that is formula (10) page 7.
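That recipe can be sketched as a piecewise evaluation. The following is an illustrative R sketch, not the package's exact implementation; the thresholds -37, 18 and 33.3 are the cutoffs suggested in Machler's note.

```r
# Piecewise computation of log(1 + exp(x)), after Machler (2012), formula (10).
# The thresholds below are assumptions taken from the note, not from qgam's source.
log1pexp_sketch <- function(x) {
  ifelse(x <= -37, exp(x),            # log(1 + exp(x)) ~= exp(x) to machine precision
  ifelse(x <=  18, log1p(exp(x)),     # the direct formula is safe in this range
  ifelse(x <= 33.3, x + exp(-x),      # avoids overflow of exp(x)
                    x)))              # exp(-x) underflows, so the answer is just x
}

log1pexp_sketch(c(-1000, 0, 1000))  # finite everywhere, while naive log(1 + exp(1000)) overflows to Inf
```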
}
\examples{
set.seed(141)
library(qgam)
x <- rnorm(100, 0, 100)
log1pexp(x) - log1p(exp(x))
}
\references{
Machler, M. (2012). Accurately computing log(1-exp(-|a|)).
URL: \url{https://cran.r-project.org/package=Rmpfr/vignettes/log1mexp-note.pdf}.
}
\author{
Matteo Fasiolo.
}
qgam/man/tuneLearnFast.Rd0000644000176200001440000002170013763713453015030 0ustar liggesusers
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/tuneLearnFast.R
\name{tuneLearnFast}
\alias{tuneLearnFast}
\title{Fast learning rate calibration for the Gibbs posterior}
\usage{
tuneLearnFast(
  form,
  data,
  qu,
  err = NULL,
  multicore = !is.null(cluster),
  cluster = NULL,
  ncores = detectCores() - 1,
  paropts = list(),
  control = list(),
  argGam = NULL
)
}
\arguments{
\item{form}{A GAM formula, or a list of formulae. See ?mgcv::gam for details.}

\item{data}{A data frame or list containing the model response variable and covariates required by the formula.
By default the variables are taken from environment(formula): typically the environment from which gam is called.}

\item{qu}{The quantile of interest. Should be in (0, 1).}

\item{err}{An upper bound on the error of the estimated quantile curve. Should be in (0, 1).
Since qgam v1.3 it is selected automatically, using the methods of Fasiolo et al. (2017).
The old default was \code{err=0.05}.}

\item{multicore}{If TRUE the calibration will happen in parallel.}

\item{cluster}{An object of class \code{c("SOCKcluster", "cluster")}. This allows the user to pass her own cluster,
which will be used if \code{multicore == TRUE}. The user has to remember to stop the cluster.}

\item{ncores}{Number of cores used. Relevant if \code{multicore == TRUE}.}

\item{paropts}{a list of additional options passed into the foreach function when parallel computation is enabled.
This is important if (for example) your code relies on external data or packages: use the .export and .packages
arguments to supply them so that all cluster nodes have the correct environment set up for computing.}

\item{control}{A list of control parameters for \code{tuneLearn} with entries: \itemize{
\item{\code{loss} = loss function used to tune log(sigma). If \code{loss=="cal"} is chosen, then log(sigma) is chosen so that
credible intervals for the fitted curve are calibrated. See Fasiolo et al. (2017) for details.
If \code{loss=="pin"} then log(sigma) approximately minimizes the pinball loss on the out-of-sample data.}
\item{\code{sam} = sampling scheme used: \code{sam=="boot"} corresponds to bootstrapping and \code{sam=="kfold"}
to k-fold cross-validation. The second option can be used only if \code{ctrl$loss=="pin"}.}
\item{\code{vtype} = type of variance estimator used to standardize the deviation from the main fit in the calibration.
If set to \code{"m"} the variance estimate obtained by the full data fit is used, if set to \code{"b"} then the variance
estimates produced by the bootstrap fits are used. By default \code{vtype="m"}.}
\item{\code{epsB} = positive tolerance used to assess convergence when fitting the regression coefficients on bootstrap data.
In particular, if \code{|dev-dev_old|/(|dev|+0.1)<epsB} then convergence is achieved.}
\item{\code{aTol} = tolerance parameter of the Brent search. Must be > 0 and values > 0.1 don't quite make sense.
By default \code{aTol=0.05}.}
\item{\code{redWd} = parameter which determines when the bracket will be reduced.
If \code{redWd==10} then the bracket is halved if the nearest solution falls within the central 10\% of the bracket's width.
By default \code{redWd = 10}.}
\item{\code{b} = offset parameter used by the mgcv::gauslss, which we estimate to initialize the quantile fit
(when a variance model is used). By default \code{b=0}.}
\item{\code{link} = Link function to be used. See \code{?elf} and \code{?elflss} for defaults.}
\item{\code{verbose} = if TRUE some more details are given.
By default \code{verbose=FALSE}.}
\item{\code{progress} = if TRUE progress in learning rate estimation is reported via printed text. \code{TRUE} by default.}
}}

\item{argGam}{A list of parameters to be passed to \code{mgcv::gam}. This list can potentially include all the arguments listed
in \code{?gam}, with the exception of \code{formula}, \code{family} and \code{data}.}
}
\value{
A list with entries: \itemize{
\item{\code{lsig} = a vector containing the values of log(sigma) that minimize the loss function, for each quantile.}
\item{\code{err} = the error bound used for each quantile. Generally each entry is identical to the argument \code{err},
but in some cases the function increases it to enhance stability.}
\item{\code{ranges} = the search ranges used by the Brent algorithm to find log-sigma, for each quantile.}
\item{\code{store} = a list, where the i-th entry is a matrix containing all the locations (1st row) at which the loss function
has been evaluated and its value (2nd row), for the i-th quantile.}
}
}
\description{
The learning rate (sigma) of the Gibbs posterior is tuned either by calibrating the credible intervals for the fitted
curve, or by minimizing the pinball loss on out-of-sample data. This is done by bootstrapping or by k-fold cross-validation.
Here the loss function is minimized, for each quantile, using a Brent search.
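The pinball loss mentioned above can be sketched as follows; \code{pinball} here is an illustrative helper, not a function exported by qgam.

```r
# Average pinball (check) loss at probability level qu: residuals above the
# fitted quantile are weighted by qu, residuals below it by (1 - qu).
pinball <- function(y, mu, qu) {
  r <- y - mu
  mean(ifelse(r >= 0, qu * r, (qu - 1) * r))
}

pinball(y = c(0, 1, 2), mu = 1, qu = 0.5)  # 1/3, i.e. half the mean absolute error
```

Minimizing this quantity over log(sigma) on out-of-sample data is what the \code{loss=="pin"} option does.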
} \examples{ library(qgam); library(MASS) ### # Single quantile fit ### # Calibrate the learning rate set.seed(5235) tun <- tuneLearnFast(form = accel~s(times,k=20,bs="ad"), data = mcycle, qu = 0.2) # Fit for quantile 0.2 using the best sigma fit <- qgam(accel~s(times, k=20, bs="ad"), data = mcycle, qu = 0.2, lsig = tun$lsig) pred <- predict(fit, se=TRUE) plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80)) lines(mcycle$times, pred$fit, lwd = 1) lines(mcycle$times, pred$fit + 2*pred$se.fit, lwd = 1, col = 2) lines(mcycle$times, pred$fit - 2*pred$se.fit, lwd = 1, col = 2) ### # Multiple quantile fits ### # Calibrate the learning rate quSeq <- c(0.25, 0.5, 0.75) set.seed(5235) tun <- tuneLearnFast(form = accel~s(times, k=20, bs="ad"), data = mcycle, qu = quSeq) # Fit using estimated sigmas fit <- mqgam(accel~s(times, k=20, bs="ad"), data = mcycle, qu = quSeq, lsig = tun$lsig) # Plot fitted quantiles plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80)) for(iq in quSeq){ pred <- qdo(fit, iq, predict) lines(mcycle$times, pred, col = 2) } \dontrun{ # You can get a better fit by letting the learning rate change with "accel" # For instance tun <- tuneLearnFast(form = list(accel ~ s(times, k=20, bs="ad"), ~ s(times)), data = mcycle, qu = quSeq) fit <- mqgam(list(accel ~ s(times, k=20, bs="ad"), ~ s(times)), data = mcycle, qu = quSeq, lsig = tun$lsig) # Plot fitted quantiles plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80)) for(iq in quSeq){ pred <- qdo(fit, iq, predict) lines(mcycle$times, pred, col = 2) } } } \references{ Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020. Fast calibrated additive quantile regression. Journal of the American Statistical Association (to appear). \url{https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521}. } \author{ Matteo Fasiolo <matteo.fasiolo@gmail.com>.
} qgam/man/mqgam.Rd0000644000176200001440000001160514146704252013353 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/mqgam.R \name{mqgam} \alias{mqgam} \title{Fit multiple smooth additive quantile regression models} \usage{ mqgam( form, data, qu, lsig = NULL, err = NULL, multicore = !is.null(cluster), cluster = NULL, ncores = detectCores() - 1, paropts = list(), control = list(), argGam = NULL ) } \arguments{ \item{form}{A GAM formula, or a list of formulae. See \code{?mgcv::gam} for details.} \item{data}{A data frame or list containing the model response variable and covariates required by the formula. By default the variables are taken from environment(formula): typically the environment from which gam is called.} \item{qu}{A vector of quantiles of interest. Each entry should be in (0, 1).} \item{lsig}{The value of the log learning rate used to create the Gibbs posterior. By default \code{lsig=NULL} and this parameter is estimated by the posterior calibration described in Fasiolo et al. (2017). Obviously, the function is much faster if the user provides a value.} \item{err}{An upper bound on the error of the estimated quantile curve. Should be in (0, 1). If it is a vector, it should be of the same length as \code{qu}. Since qgam v1.3 it is selected automatically, using the methods of Fasiolo et al. (2017). The old default was \code{err=0.05}.} \item{multicore}{If TRUE the calibration will happen in parallel.} \item{cluster}{An object of class \code{c("SOCKcluster", "cluster")}. This allows the user to pass her own cluster, which will be used if \code{multicore == TRUE}. The user has to remember to stop the cluster.} \item{ncores}{Number of cores used. Relevant if \code{multicore == TRUE}.} \item{paropts}{a list of additional options passed into the foreach function when parallel computation is enabled.
This is important if (for example) your code relies on external data or packages: use the .export and .packages arguments to supply them so that all cluster nodes have the correct environment set up for computing.} \item{control}{A list of control parameters. The only one relevant here is \code{link}, which is the link function used (see \code{?elf} and \code{?elflss} for defaults). All other control parameters are used by \code{tuneLearnFast}. See \code{?tuneLearnFast} for details.} \item{argGam}{A list of parameters to be passed to \code{mgcv::gam}. This list can potentially include all the arguments listed in \code{?gam}, with the exception of \code{formula}, \code{family} and \code{data}.} } \value{ A list with entries: \itemize{ \item{\code{fit} = a \code{gamObject}, one for each entry of \code{qu}. Notice that the slots \code{model} and \code{smooth} of each object have been removed to save memory. See \code{?gamObject}. } \item{\code{model} = the \code{model} slot of the \code{gamObject}s in the \code{fit} slot. This is the same for every fit, hence only one copy is stored.} \item{\code{smooth} = the \code{smooth} slot of the \code{gamObject}s in the \code{fit} slot. This is the same for every fit, hence only one copy is stored.} \item{\code{calibr} = a list which is the output of an internal call to \code{tuneLearnFast}, which is used for calibrating the learning rate. See \code{?tuneLearnFast} for details.} } } \description{ This function fits a smooth additive regression model to several quantiles.
} \examples{ ##### # Multivariate Gaussian example #### library(qgam) set.seed(2) dat <- gamSim(1, n=300, dist="normal", scale=2) fit <- mqgam(y~s(x0)+s(x1)+s(x2)+s(x3), data=dat, qu = c(0.2, 0.8)) invisible( qdo(fit, 0.2, plot, pages = 1) ) ##### # Univariate "car" example #### library(qgam); library(MASS) # Fit quantiles 0.2, 0.4, 0.6 and 0.8 quSeq <- c(0.2, 0.4, 0.6, 0.8) set.seed(6436) fit <- mqgam(accel~s(times, k=20, bs="ad"), data = mcycle, qu = quSeq) # Plot the fit xSeq <- data.frame(cbind("accel" = rep(0, 1e3), "times" = seq(2, 58, length.out = 1e3))) plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80)) for(iq in quSeq){ pred <- qdo(fit, iq, predict, newdata = xSeq) lines(xSeq$times, pred, col = 2) } } \references{ Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020. Fast calibrated additive quantile regression. Journal of the American Statistical Association (to appear). \url{https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521}. Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2021. qgam: Bayesian Nonparametric Quantile Regression Modeling in R. Journal of Statistical Software, 100(9), 1-31, \doi{10.18637/jss.v100.i09}. } \author{ Matteo Fasiolo <matteo.fasiolo@gmail.com>. } qgam/man/elf.Rd0000644000176200001440000000443313763713453013027 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/elf.R \name{elf} \alias{elf} \title{Extended log-F model with fixed scale} \usage{ elf(theta = NULL, link = "identity", qu, co) } \arguments{ \item{theta}{a scalar representing the log-scale log(sigma).} \item{link}{the link function between the linear predictor and the quantile location.} \item{qu}{parameter in (0, 1) representing the chosen quantile. For instance, to fit the median choose \code{qu=0.5}.} \item{co}{positive constant used to determine parameter lambda of the ELF density (lambda = co / sigma).
Can be vector valued.} } \value{ An object inheriting from mgcv's class \code{extended.family}. } \description{ The \code{elf} family implements the Extended log-F density of Fasiolo et al. (2017) and it is supposed to work in conjunction with the extended GAM methods of Wood et al. (2017), implemented by \code{mgcv}. It differs from the \code{elflss} family, because here the scale of the density (sigma, aka the learning rate) is a single scalar, while in \code{elflss} it can depend on the covariates. At the moment the family is mainly intended for internal use: use the \code{qgam} function to fit quantile GAMs based on ELF. } \details{ This function is meant for internal use only. } \examples{ library(qgam) set.seed(2) dat <- gamSim(1,n=400,dist="normal",scale=2) # Fit median using elf directly: FAST BUT NOT RECOMMENDED fit <- gam(y~s(x0)+s(x1)+s(x2)+s(x3), family = elf(co = 0.1, qu = 0.5), data = dat) plot(fit, scale = FALSE, pages = 1) # Using qgam: RECOMMENDED fit <- qgam(y~s(x0)+s(x1)+s(x2)+s(x3), data=dat, qu = 0.8) plot(fit, scale = FALSE, pages = 1) } \references{ Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020. Fast calibrated additive quantile regression. Journal of the American Statistical Association (to appear). \url{https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521}. Wood, Simon N., Pya, N. and Safken, B. (2017). Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association. } \author{ Matteo Fasiolo <matteo.fasiolo@gmail.com> and Simon N. Wood. } qgam/man/check.learn.Rd0000644000176200001440000000353113763713343014432 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/check_learn.R \name{check.learn} \alias{check.learn} \title{Visual checks for the output of tuneLearn()} \usage{ \method{check}{learn}(obj, sel = 1:2, ...)
} \arguments{ \item{obj}{the output of a call to \code{tuneLearn}.} \item{sel}{this function produces two plots; set this parameter to 1 to plot only the first, to 2 to plot only the second, or leave it to 1:2 to plot both.} \item{...}{currently not used, here only for compatibility reasons.} } \value{ It produces several plots. } \description{ Provides some visual plots showing how the calibration criterion and the effective degrees of freedom of each smooth component vary with the learning rate. } \details{ The first plot shows how the calibration loss, which we are trying to minimize, varies with the log learning rate. This curve should look quite smooth; if it doesn't, then try to increase \code{err} or \code{control$K} (the number of bootstrap samples) in the original call to \code{tuneLearn}. The second plot shows how the effective degrees of freedom of each smooth term vary with log(sigma). Generally as log(sigma) increases the complexity of the fit decreases, hence the slope is negative. } \examples{ library(qgam) set.seed(525) dat <- gamSim(1, n=200) b <- tuneLearn(lsig = seq(-0.5, 1, length.out = 10), y~s(x0)+s(x1)+s(x2)+s(x3), data=dat, qu = 0.5) check(b) } \references{ Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020. Fast calibrated additive quantile regression. Journal of the American Statistical Association (to appear). \url{https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521}. } \author{ Matteo Fasiolo <matteo.fasiolo@gmail.com>. } qgam/man/qdo.Rd0000644000176200001440000000313313462144020013020 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/qdo.R \name{qdo} \alias{qdo} \title{Manipulating the output of \code{mqgam}} \usage{ qdo(obj, qu = NULL, fun = I, ...) } \arguments{ \item{obj}{the output of a \code{mqgam} call.} \item{qu}{A vector whose elements must be in (0, 1). Each element indicates a quantile of interest, which should be an element of \code{names(obj$fit)}.
If left to \code{NULL} the function \code{fun} will be applied to each of the quantile fits in \code{obj}.} \item{fun}{The method or function that we want to use on the \code{gamObject} corresponding to quantile \code{qu}. For instance \code{predict}, \code{plot} or \code{summary}. By default this is the identity function (\code{I}), which means that the fitted model for quantile \code{qu} is returned.} \item{...}{Additional arguments to be passed to \code{fun}.} } \value{ A list where the i-th entry is the output of \code{fun} (whatever that is) corresponding to quantile \code{qu[i]}. } \description{ Contrary to \code{qgam}, \code{mqgam} does not output a standard \code{gamObject}, hence methods such as \code{predict.gam} or \code{plot.gam} cannot be used directly. \code{qdo} provides a simple wrapper for such methods. } \examples{ library(qgam); library(MASS) quSeq <- c(0.4, 0.6) set.seed(737) fit <- mqgam(accel~s(times, k=20, bs="ad"), data = mcycle, qu = quSeq) qdo(fit, 0.4, summary) invisible(qdo(fit, 0.4, plot, pages = 1)) # Return the object for qu = 0.6 and then plot it tmp <- qdo(fit, 0.6) plot(tmp) } \author{ Matteo Fasiolo . } qgam/man/check.learnFast.Rd0000644000176200001440000000432613763713343015253 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/check_learnFast.R \name{check.learnFast} \alias{check.learnFast} \title{Visual checks for the output of tuneLearnFast()} \usage{ \method{check}{learnFast}(obj, sel = NULL, ...) } \arguments{ \item{obj}{the output of a call to \code{tuneLearnFast}.} \item{sel}{integer vector determining which of the plots will be produced. For instance if \code{sel = c(1, 3)} only the 1st and 3rd plots are showed. No entry of \code{sel} can be bigger than one plus the number of quantiles considered in the original \code{tuneLearnFast()} call. 
That is, if we estimated the learning rate for \code{qu = c(0.1, 0.4)}, then \code{max(sel)} must be <= 3.} \item{...}{currently not used, here only for compatibility reasons.} } \value{ It produces several plots. } \description{ Provides some visual checks to verify whether the Brent optimizer used by \code{tuneLearnFast()} worked correctly. } \details{ The top plot on the first page shows the bracket used to estimate log(sigma) for each quantile. The brackets are delimited by the crosses and the red dots are the estimates. If a dot falls very close to one of the crosses, that might indicate problems. The bottom plot shows, for each quantile, the value of parameter \code{err} used. Sometimes the algorithm needs to increase \code{err} above its user-defined value to achieve convergence. Subsequent plots show, for each quantile, the value of the loss function corresponding to each value of log(sigma) explored by the Brent algorithm. } \examples{ library(qgam) set.seed(525) dat <- gamSim(1, n=200) b <- tuneLearnFast(y ~ s(x0)+s(x1)+s(x2)+s(x3), data = dat, qu = c(0.4, 0.5), control = list("tol" = 0.05)) # <- sloppy tolerance to speed up calibration check(b) check(b, 3) # Produces only third plot } \references{ Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020. Fast calibrated additive quantile regression. Journal of the American Statistical Association (to appear). \url{https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521}. } \author{ Matteo Fasiolo <matteo.fasiolo@gmail.com>. } qgam/man/check.Rd0000644000176200001440000000152313462142537013326 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/generic_functions.R \docType{methods} \name{check} \alias{check} \title{Generic checking function} \usage{ check(obj, ...) } \arguments{ \item{obj}{the object to be checked.} \item{...}{extra arguments, mainly used by graphic functions.} } \value{ Reports the results of convergence tests and/or produces diagnostic plots.
} \description{ Generic function for checking R objects which produces, for instance, convergence tests or diagnostic plots. For \code{qgam} objects \code{check.qgam()} will be used. } \examples{ ####### # Using check.qgam ####### library(qgam) set.seed(0) dat <- gamSim(1, n=200) b<-qgam(y~s(x0)+s(x1)+s(x2)+s(x3), data=dat, qu = 0.5) plot(b, pages=1) check(b, pch=19, cex=.3) } \author{ Matteo Fasiolo <matteo.fasiolo@gmail.com>. } qgam/man/check.qgam.Rd0000644000176200001440000000454513763713343014264 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/check_qgam.R \name{check.qgam} \alias{check.qgam} \title{Some diagnostics for a fitted qgam model} \usage{ \method{check}{qgam}(obj, nbin = 10, lev = 0.05, ...) } \arguments{ \item{obj}{the output of a \code{qgam()} call.} \item{nbin}{number of bins used in the internal call to \code{cqcheck()}.} \item{lev}{the significance levels used by \code{cqcheck()}, which determine the width of the confidence intervals.} \item{...}{extra arguments to be passed to \code{plot()}} } \value{ Simply produces some plots and prints out some diagnostics. } \description{ Takes a fitted gam object produced by \code{qgam()} and produces some diagnostic information about the fitting procedure and results. It is partially based on \code{mgcv::gam.check}. } \details{ This function provides two plots. The first shows how the number of responses falling below the fitted quantile (y-axis) changes with the fitted quantile (x-axis). To be clear: if the quantile is fixed to, say, 0.5 we expect 50\% of the responses to fall below the fit. See \code{?cqcheck()} for details. The second plot relates to \code{|F(hat(mu)) - F(mu0)|}, which is the absolute bias attributable to the fact that qgam is using a smoothed version of the pinball-loss. The absolute bias is evaluated at each observation, and a histogram is produced. See Fasiolo et al. (2017) for details.
The function also prints out the integrated absolute bias, and the proportion of observations lying below the regression line. It also provides some convergence diagnostics (regarding the optimization), which are the same as in \code{mgcv::gam.check}. It also reports the maximum (k') and the selected degrees of freedom of each smooth term. } \examples{ library(qgam) set.seed(0) dat <- gamSim(1, n=200) b<-qgam(y~s(x0)+s(x1)+s(x2)+s(x3), data=dat, qu = 0.5) plot(b, pages=1) check.qgam(b, pch=19, cex=.3) } \references{ Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020. Fast calibrated additive quantile regression. Journal of the American Statistical Association (to appear). \url{https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521}. } \author{ Matteo Fasiolo <matteo.fasiolo@gmail.com>, Simon N. Wood. } qgam/man/UKload.Rd0000644000176200001440000000270114041552722013422 0ustar liggesusers\name{UKload} \alias{UKload} %- Also NEED an `\alias' for EACH other topic documented here. \title{UK electricity load data} \description{ Dataset on UK electricity demand, taken from the National Grid (\url{https://www.nationalgrid.com/}). } %- end description \usage{ data(UKload) } %- maybe also `usage' for other objects documented here. \format{ \code{UKload} contains the following variables: \describe{ \item{NetDemand}{net electricity demand between 11:30am and 12am.} \item{wM}{instantaneous temperature, averaged over several English cities.} \item{wM_s95}{exponential smooth of \code{wM}, that is \code{wM_s95[i] = a*wM_s95[i-1] + (1-a)*wM[i]} with \code{a=0.95}.}
\item{Posan}{periodic index in \code{[0, 1]} indicating the position along the year.} \item{Dow}{factor variable indicating the day of the week.} \item{Trend}{progressive counter, useful for defining the long term trend.} \item{NetDemand.48}{lagged version of \code{NetDemand}, that is \code{NetDemand.48[i] = NetDemand[i-2]}.} \item{Holy}{binary variable indicating holidays.} \item{Year}{the year of the observation.} \item{Date}{the date of the observation.} } } \details{ See Fasiolo et al. (2017) for details.} \value{A data frame containing the variables described above.} \references{ Fasiolo, M., Goude, Y., Nedellec, R. and Wood, S. N. (2017). Fast calibrated additive quantile regression. Available at \url{https://arxiv.org/abs/1707.03307}. } \examples{ library(qgam) data(UKload) plot(UKload$NetDemand, type = 'l') } qgam/man/cqcheck.Rd0000644000176200001440000001216013616051551013645 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/cqcheck.R \name{cqcheck} \alias{cqcheck} \title{Visually checking a fitted quantile model} \usage{ cqcheck( obj, v, X = NULL, y = NULL, nbin = c(10, 10), bound = NULL, lev = 0.05, scatter = FALSE, ... ) } \arguments{ \item{obj}{the output of a \code{qgam} call.} \item{v}{if a 1D plot is required, \code{v} should be either a single character or a numeric vector. In the first case \code{v} should be the name of one of the variables in the dataframe \code{X}. In the second case, the length of \code{v} should be equal to the number of rows of \code{X}. If a 2D plot is required, \code{v} should be either a vector of two characters or a matrix with two columns.} \item{X}{a dataframe containing the data used to obtain the conditional quantiles. By default it is NULL, in which case predictions are made using the model matrix in \code{obj$model}.} \item{y}{vector of responses. Its i-th entry corresponds to the i-th row of X.
By default it is NULL, in which case it is internally set to \code{obj$y}.} \item{nbin}{a vector of integers of length one (1D case) or two (2D case) indicating the number of bins to be used in each direction. Used only if \code{bound==NULL}.} \item{bound}{in the 1D case it is a numeric vector whose increasing entries represent the bounds of each bin. In the 2D case a list of two vectors should be provided. \code{NULL} by default.} \item{lev}{the significance levels used in the plots; these determine the width of the confidence intervals. Default is 0.05.} \item{scatter}{if TRUE a scatterplot is added (using the \code{points} function). FALSE by default.} \item{...}{extra graphical parameters to be passed to \code{plot()}.} } \value{ Simply produces a plot. } \description{ Given an additive quantile model, fitted using \code{qgam}, \code{cqcheck} provides some plots that allow one to check what proportion of responses, \code{y}, falls below the fitted quantile. } \details{ Having fitted an additive model for, say, quantile \code{qu=0.4} one would expect that about 40\% of the responses fall below the fitted quantile. This function allows one to visually compare the empirical number of responses (\code{qu_hat}) falling below the fit with its theoretical value (\code{qu}). In particular, the responses are binned, with the bins being constructed along one or two variables (given by argument \code{v}). Let \code{qu_hat[i]} be the proportion of responses below the fitted quantile in the ith bin. This should be approximately equal to \code{qu}, for every i. In the 1D case, when \code{v} is a single character or a numeric vector, \code{cqcheck} provides a plot where: the horizontal line is \code{qu}, the dots correspond to \code{qu_hat[i]} and the grey lines are confidence intervals for \code{qu}. The confidence intervals are based on \code{qbinom(lev/2, siz, qu)}; if the dots fall outside them, then \code{qu_hat[i]} might be deviating too much from \code{qu}.
In the 2D case, when \code{v} is a vector of two characters or a matrix with two columns, we plot a grid of bins. The responses are divided between the bins as before, but now we don't plot the confidence intervals. Instead we report the empirical proportions \code{qu_hat[i]} for the non-empty bins, and we colour the bins in red if \code{qu_hat[i]. } qgam/man/sigmoid.Rd0000644000176200001440000000242113301766721013701 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/sigmoid.R \name{sigmoid} \alias{sigmoid} \title{Sigmoid function and its derivatives} \usage{ sigmoid(y, deriv = FALSE) } \arguments{ \item{y}{a numeric vector.} \item{deriv}{if \code{TRUE} also the first three derivatives of the sigmoid function will be computed.} } \value{ If \code{deriv==FALSE}, it returns a numeric vector equal to \code{1/(1+exp(-x))}. If \code{deriv==TRUE} it returns a list where the slot \code{$D0} contains \code{1/(1+exp(-x))}, while \code{$D1}, \code{$D2} and \code{$D3} contain its first three derivatives. } \description{ Calculates the sigmoid function and its derivatives. } \examples{ library(qgam) set.seed(90) h <- 1e-6 p <- rnorm(1e4, 0, 1e6) sigmoid(p[1:50]) - 1/(1+exp(-p[1:50])) ##### Testing sigmoid derivatives e1 <- abs((sigmoid(p+h) - sigmoid(p-h)) / (2*h) - sigmoid(p, TRUE)[["D1"]]) / (2*h) e2 <- abs((sigmoid(p+h, TRUE)$D1 - sigmoid(p-h, TRUE)$D1) / (2*h) - sigmoid(p, TRUE)[["D2"]]) / (2*h) e3 <- abs((sigmoid(p+h, TRUE)$D2 - sigmoid(p-h, TRUE)$D2) / (2*h) - sigmoid(p, TRUE)[["D3"]]) / (2*h) if( any(c(e1, e2, e3) > 1) ) stop("Sigmoid derivatives are not estimated accurately") } \author{ Matteo Fasiolo <matteo.fasiolo@gmail.com>.
} qgam/man/cqcheckI.Rd0000644000176200001440000000700613616051551013761 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/cqcheckI.R \name{cqcheckI} \alias{cqcheckI} \title{Interactive visual checks for additive quantile fits} \usage{ cqcheckI( obj, v, X = NULL, y = NULL, run = TRUE, width = "100\%", height = "680px" ) } \arguments{ \item{obj}{the output of a \code{qgam} call.} \item{v}{if a 1D plot is required, \code{v} should be either a single character or a numeric vector. In the first case \code{v} should be the name of one of the variables in the dataframe \code{X}. In the second case, the length of \code{v} should be equal to the number of rows of \code{X}. If a 2D plot is required, \code{v} should be either a vector of two characters or a matrix with two columns.} \item{X}{a dataframe containing the data used to obtain the conditional quantiles. By default it is NULL, in which case predictions are made using the model matrix in \code{obj$model}.} \item{y}{vector of responses. Its i-th entry corresponds to the i-th row of X. By default it is NULL, in which case it is internally set to \code{obj$y}.} \item{run}{if TRUE (default) the function produces an interactive plot, otherwise it returns the corresponding shiny app.} \item{width}{the width of the main plot. Default is "100\%".} \item{height}{the height of the main plot. Default is "680px".} } \value{ Simply produces an interactive plot. } \description{ Given an additive quantile model, fitted using \code{qgam}, \code{cqcheckI} provides some interactive plots that allow one to check what proportion of responses, \code{y}, falls below the fitted quantile. This is an interactive version of the \code{cqcheck} function. } \details{ This is an interactive version of \code{cqcheck}; see \code{?cqcheck} for details. The main interactive feature is that one can select an area by brushing, and then double-click to zoom in.
In the 1D case the vertical part of the selected area is not used: we zoom only along the x axis. Double-clicking without brushing zooms out. } \examples{ \dontrun{ ####### # Example 1: Bivariate additive model y~1+x+x^2+z+x*z/2+e, e~N(0, 1) ####### library(qgam) set.seed(15560) n <- 1000 x <- rnorm(n, 0, 1); z <- rnorm(n) X <- cbind(1, x, x^2, z, x*z) beta <- c(0, 1, 1, 1, 0.5) y <- drop(X \%*\% beta) + rnorm(n) dataf <- data.frame(cbind(y, x, z)) names(dataf) <- c("y", "x", "z") #### Fit a constant model for median qu <- 0.5 fit <- qgam(y~1, qu = qu, data = dataf) # Look at what happens along x: clearly there is a non-linear pattern here cqcheckI(obj = fit, v = c("x"), X = dataf, y = y) #### Add a smooth for x fit <- qgam(y~s(x), qu = qu, data = dataf) cqcheckI(obj = fit, v = c("x"), X = dataf, y = y) # Better! # Let's look across x and z. As we move along z (x2 in the plot) # the colour changes from green to red cqcheckI(obj = fit, v = c("x", "z"), X = dataf, y = y) # The effect looks pretty linear cqcheckI(obj = fit, v = c("z"), X = dataf, y = y) #### Let's add a linear effect for z fit <- qgam(y~s(x)+z, qu = qu, data = dataf) # Looks better! cqcheckI(obj = fit, v = c("z")) # Let's look across x and z again: green prevails on the top-left to bottom-right # diagonal, while the other diagonal is mainly red. cqcheckI(obj = fit, v = c("x", "z")) ### Maybe adding an interaction would help? fit <- qgam(y~s(x)+z+I(x*z), qu = qu, data = dataf) # It does! The real model is: y ~ 1 + x + x^2 + z + x*z/2 + e, e ~ N(0, 1) cqcheckI(obj = fit, v = c("x", "z")) } } \author{ Matteo Fasiolo <matteo.fasiolo@gmail.com>.
} qgam/man/qgam.Rd0000644000176200001440000001174214146704252013200 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/qgam.R \name{qgam} \alias{qgam} \title{Fit a smooth additive quantile regression model} \usage{ qgam( form, data, qu, lsig = NULL, err = NULL, multicore = !is.null(cluster), cluster = NULL, ncores = detectCores() - 1, paropts = list(), control = list(), argGam = NULL ) } \arguments{ \item{form}{A GAM formula, or a list of formulae. See \code{?mgcv::gam} for details.} \item{data}{A data frame or list containing the model response variable and covariates required by the formula. By default the variables are taken from environment(formula): typically the environment from which gam is called.} \item{qu}{The quantile of interest. Should be in (0, 1).} \item{lsig}{The value of the log learning rate used to create the Gibbs posterior. By default \code{lsig=NULL} and this parameter is estimated by the posterior calibration described in Fasiolo et al. (2017). Obviously, the function is much faster if the user provides a value.} \item{err}{An upper bound on the error of the estimated quantile curve. Should be in (0, 1). Since qgam v1.3 it is selected automatically, using the methods of Fasiolo et al. (2017). The old default was \code{err=0.05}.} \item{multicore}{If TRUE the calibration will happen in parallel.} \item{cluster}{An object of class \code{c("SOCKcluster", "cluster")}. This allows the user to pass her own cluster, which will be used if \code{multicore == TRUE}. The user has to remember to stop the cluster.} \item{ncores}{Number of cores used. Relevant if \code{multicore == TRUE}.} \item{paropts}{a list of additional options passed into the foreach function when parallel computation is enabled.
This is important if (for example) your code relies on external data or packages: use the .export and .packages arguments to supply them so that all cluster nodes have the correct environment set up for computing.} \item{control}{A list of control parameters. The only one relevant here is \code{link}, which is the link function used (see \code{?elf} and \code{?elflss} for defaults). All other control parameters are used by \code{tuneLearnFast}. See \code{?tuneLearnFast} for details.} \item{argGam}{A list of parameters to be passed to \code{mgcv::gam}. This list can potentially include all the arguments listed in \code{?gam}, with the exception of \code{formula}, \code{family} and \code{data}.} } \value{ A \code{gamObject}. See \code{?gamObject}. } \description{ This function fits a smooth additive regression model for a single quantile. } \examples{ ##### # Univariate "car" example #### library(qgam); library(MASS) # Fit for quantile 0.5 using the best sigma set.seed(6436) fit <- qgam(accel~s(times, k=20, bs="ad"), data = mcycle, qu = 0.5) # Plot the fit xSeq <- data.frame(cbind("accel" = rep(0, 1e3), "times" = seq(2, 58, length.out = 1e3))) pred <- predict(fit, newdata = xSeq, se=TRUE) plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80)) lines(xSeq$times, pred$fit, lwd = 1) lines(xSeq$times, pred$fit + 2*pred$se.fit, lwd = 1, col = 2) lines(xSeq$times, pred$fit - 2*pred$se.fit, lwd = 1, col = 2) \dontrun{ # You can get a better fit by letting the learning rate change with "accel" # For instance fit <- qgam(list(accel ~ s(times, k=20, bs="ad"), ~ s(times)), data = mcycle, qu = 0.8) pred <- predict(fit, newdata = xSeq, se=TRUE) plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80)) lines(xSeq$times, pred$fit, lwd = 1) lines(xSeq$times, pred$fit + 2*pred$se.fit, lwd = 1, col = 2) lines(xSeq$times, pred$fit - 2*pred$se.fit, lwd = 1, col = 2) } ##### # Multivariate Gaussian example #### 
library(qgam) set.seed(2) dat <- gamSim(1,n=400,dist="normal",scale=2) fit <- qgam(y~s(x0)+s(x1)+s(x2)+s(x3), data=dat, qu = 0.5) plot(fit, scale = FALSE, pages = 1) ###### # Heteroscedastic example ###### \dontrun{ set.seed(651) n <- 2000 x <- seq(-4, 3, length.out = n) X <- cbind(1, x, x^2) beta <- c(0, 1, 1) sigma = 1.2 + sin(2*x) f <- drop(X \%*\% beta) dat <- f + rnorm(n, 0, sigma) dataf <- data.frame(cbind(dat, x)) names(dataf) <- c("y", "x") fit <- qgam(list(y~s(x, k = 30, bs = "cr"), ~ s(x, k = 30, bs = "cr")), data = dataf, qu = 0.95) plot(x, dat, col = "grey", ylab = "y") tmp <- predict(fit, se = TRUE) lines(x, tmp$fit) lines(x, tmp$fit + 2 * tmp$se.fit, col = 2) lines(x, tmp$fit - 2 * tmp$se.fit, col = 2) } } \references{ Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020. Fast calibrated additive quantile regression. Journal of the American Statistical Association (to appear). \url{https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521}. Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2021. qgam: Bayesian Nonparametric Quantile Regression Modeling in R. Journal of Statistical Software, 100(9), 1-31, \doi{10.18637/jss.v100.i09}. } \author{ Matteo Fasiolo . } qgam/man/elflss.Rd0000644000176200001440000000556013763713343013551 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/elflss.R \name{elflss} \alias{elflss} \title{Extended log-F model with variable scale} \usage{ elflss(link = list("identity", "log"), qu, co, theta, remInter = TRUE) } \arguments{ \item{link}{vector of two characters indicating the link function for the quantile location and for the log-scale.} \item{qu}{parameter in (0, 1) representing the chosen quantile. 
For instance, to fit the median choose \code{qu=0.5}.}

\item{co}{positive vector of constants used to determine parameter lambda of the ELF density (lambda = co / sigma).}

\item{theta}{a scalar representing the intercept of the model for the log-scale log(sigma).}

\item{remInter}{if TRUE the intercept of the log-scale model is removed.}
}
\value{
An object inheriting from mgcv's class \code{general.family}.
}
\description{
The \code{elflss} family implements the Extended log-F (ELF) density of Fasiolo et al. (2017) and it is supposed to work in conjunction with the general GAM fitting methods of Wood et al. (2017), implemented by \code{mgcv}. It differs from the \code{elf} family because here the scale of the density (sigma, aka the learning rate) can depend on the covariates, while in \code{elf} it is a single scalar. NB this function was used within the \code{qgam} function, but since \code{qgam} version 1.3 quantile models with varying learning rate are fitted using a different method (a parametric location-scale model; see Fasiolo et al. (2017) for details).
}
\details{
This function is meant for internal use only.
}
\examples{
\dontrun{
set.seed(651)
n <- 1000
x <- seq(-4, 3, length.out = n)
X <- cbind(1, x, x^2)
beta <- c(0, 1, 1)
sigma = 1.2 + sin(2*x)
f <- drop(X \%*\% beta)
dat <- f + rnorm(n, 0, sigma)
dataf <- data.frame(cbind(dat, x))
names(dataf) <- c("y", "x")

# Fit median using elflss directly: NOT RECOMMENDED
fit <- gam(list(y~s(x, bs = "cr"), ~ s(x, bs = "cr")),
           family = elflss(theta = 0, co = rep(0.2, n), qu = 0.5), data = dataf)

plot(x, dat, col = "grey", ylab = "y")
tmp <- predict(fit, se = TRUE)
lines(x, tmp$fit[ , 1])
lines(x, tmp$fit[ , 1] + 3 * tmp$se.fit[ , 1], col = 2)
lines(x, tmp$fit[ , 1] - 3 * tmp$se.fit[ , 1], col = 2)
}
}
\references{
Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020. Fast calibrated additive quantile regression. Journal of the American Statistical Association (to appear).
\url{https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521}.

Wood, Simon N., Pya, N. and Safken, B. (2017). Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association.
}
\author{
Matteo Fasiolo and Simon N. Wood.
}
qgam/man/pinLoss.Rd0000644000176200001440000000130613467305150013674 0ustar liggesusers% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/pinLoss.R
\name{pinLoss}
\alias{pinLoss}
\title{Pinball loss function}
\usage{
pinLoss(y, mu, qu, add = TRUE)
}
\arguments{
\item{y}{points at which the loss is evaluated.}

\item{mu}{location parameter of the pinball loss.}

\item{qu}{quantile level of the loss.}

\item{add}{if TRUE the losses at each quantile level will be added up.}
}
\value{
A numeric vector or matrix of evaluated losses.
}
\description{
Evaluates the pinball loss.
}
\examples{
n <- 1000
x <- seq(0, 4, length.out = n)
plot(x, pinLoss(x, rep(2, n), qu = 0.9, add = FALSE), type = 'l', ylab = "loss")
}
\author{
Matteo Fasiolo .
}
qgam/DESCRIPTION0000644000176200001440000000323414146767672012723 0ustar liggesusersPackage: qgam
Type: Package
Title: Smooth Additive Quantile Regression Models
Version: 1.3.4
Date: 2021-11-21
Authors@R: c(person("Matteo", "Fasiolo", email = "matteo.fasiolo@gmail.com", role = c("aut", "cre")),
             person("Simon", "N. Wood", role = c("ctb")),
             person("Margaux", "Zaffran", role = c("ctb")),
             person("Yannig", "Goude", role = c("ctb")),
             person("Raphael", "Nedellec", role = c("ctb")))
Maintainer: Matteo Fasiolo
Description: Smooth additive quantile regression models, fitted using the methods of Fasiolo et al. (2020). See Fasiolo et al. (2021) for an introduction to the package. Differently from 'quantreg', the smoothing parameters are estimated automatically by marginal loss minimization, while the regression coefficients are estimated using either PIRLS or a Newton algorithm.
The learning rate is determined so that the Bayesian credible intervals of the estimated effects have approximately the correct coverage. The main function is qgam() which is similar to gam() in 'mgcv', but fits non-parametric quantile regression models.
License: GPL (>= 2)
Depends: R (>= 3.5.0), mgcv (>= 1.8-28)
Imports: shiny, plyr, doParallel, parallel, grDevices
Suggests: knitr, rmarkdown, MASS, RhpcBLASctl, testthat
VignetteBuilder: knitr
RoxygenNote: 7.1.1
NeedsCompilation: yes
Packaged: 2021-11-22 12:23:36 UTC; mf15002
Author: Matteo Fasiolo [aut, cre],
  Simon N. Wood [ctb],
  Margaux Zaffran [ctb],
  Yannig Goude [ctb],
  Raphael Nedellec [ctb]
Repository: CRAN
Date/Publication: 2021-11-22 19:30:02 UTC
qgam/build/0000755000176200001440000000000014146705710012303 5ustar liggesusers
qgam/build/vignette.rds0000644000176200001440000000031314146705710014641 0ustar liggesusers
qgam/build/partial.rdb0000644000176200001440000003104114146705543014433 0ustar liggesusers
qgam/tests/0000755000176200001440000000000013736625376012363 5ustar liggesusers
qgam/tests/testthat/0000755000176200001440000000000014146767672014225 5ustar liggesusers
qgam/tests/testthat/test-check_qgam.R0000644000176200001440000000171313615323244017370 0ustar liggesuserscontext("check_qgam")

# par(mfrow = c(1, 1))
# test_that("check_qgam_gamlss", {
#
#   #set.seed(857758)
#   n <- 1000
#   x <- seq(-4, 3, length.out = n)
#   X <- cbind(1, x, x^2)
#   beta <- c(0, 1, 1)
#   sigma = 1.2 + sin(2*x)
#   f <- drop(X %*% beta)
#   dat <- f + rnorm(n, 0, sigma)
#   dataf <- data.frame(cbind(dat, x))
#   names(dataf) <- c("y", "x")
#
#   expect_error({
#
#     fit <- qgam(list(y~s(x, k = 15, bs = "cr"), ~ s(x, k = 15, bs = "cr")), data=dataf, err = 0.1, qu = 0.8,
#                 control = list("progress" = FALSE))
#     invisible(capture.output( check(fit) ))
#
#   } , NA)
#
# })

# test_that("check_qgam_egam", {
# # set.seed(57576) # dat <- gamSim(1,n=1000,dist="normal",scale=2, verbose = FALSE) # # expect_error({ # # fit <- qgam(y~s(x0)+s(x1)+s(x2)+s(x3), data=dat, qu = 0.9, control = list("progress" = FALSE)) # invisible(capture.output( check(fit) )) # # } , NA) # # })qgam/tests/testthat/test-tuneLearn.R0000644000176200001440000000443513615323253017247 0ustar liggesuserscontext("tuneLearn") # test_that("tuneLearn_gamlss", { # # #set.seed(651) # n <- 1000 # x <- seq(-4, 3, length.out = n) # X <- cbind(1, x, x^2) # beta <- c(0, 1, 1) # sigma = 1.2 + sin(2*x) # f <- drop(X %*% beta) # dat <- f + rnorm(n, 0, sigma) # dataf <- data.frame(cbind(dat, x)) # names(dataf) <- c("y", "x") # form <- list(y~s(x, k = 30, bs = "cr"), ~ s(x, k = 30, bs = "cr")) # # QU <- 0.9 # lossType <- rep(c("calFast", "cal", "pin"), each = 2) # # par(mfrow = c(3, 2)) # par(mar = c(5.1, 4.1, 0.1, 2.1)) # for(ii in c(1, 3, 5)){ # Set to 1:6 if you want to test all calibration methods # # expect_error({ # Actually we expect NO error!! # tun <- tuneLearn(form, data = dataf, qu = QU, # lsig = seq(-4, 5, length.out = 15), # control = list("loss" = lossType[ii], "progress" = "none", "K" = 20), # multicore = ((ii %% 2) == 0), ncores = 2) # # fit <- qgam(form, qu = QU, lsig = tun$lsig, data = dataf) # # ylb <- if((ii %% 2) == 0) { paste(lossType[ii], "multicore") } else { lossType[ii] } # plot(x, dat, col = "grey", ylab = ylb) # tmp <- predict(fit, se = TRUE) # lines(x, tmp$fit) # lines(x, tmp$fit + 3 * tmp$se.fit, col = 2) # lines(x, tmp$fit - 3 * tmp$se.fit, col = 2) # # check(tun, sel = 1) # } # , NA) # # } # # }) # # # # test_that("tuneLearn_egam", { # # set.seed(211) # dataf <- gamSim(1,n=400,dist="normal",scale=2,verbose=FALSE) # form <- y~s(x0)+s(x1)+s(x2)+s(x3) # # QU <- 0.9 # lossType <- rep(c("calFast", "cal", "pin"), each = 2) # # #par(mfrow = c(3, 2)) # for(ii in 1:2){ # # expect_error({ # Actually we expect NO error!! 
# tun <- tuneLearn(form, data = dataf, qu = QU, # lsig = seq(-4, 2, length.out = 20), # control = list("loss" = lossType[ii], "progress" = "none", "K" = 20), # multicore = ((ii %% 2) == 0), ncores = 2) # # fit <- qgam(form, qu = QU, lsig = tun$lsig, data = dataf) # # ylb <- if((ii %% 2) == 0) { paste(lossType[ii], "multicore") } else { lossType[ii] } # plot(fit, select = 3, ylab = ylb) # check(tun, sel = 1) # } # , NA) # # } # # }) qgam/tests/testthat/test-calFastTuneLearnFast.R0000644000176200001440000000637113615323236021325 0ustar liggesusers context("calFastTuneLearnFast") test_that("calFastTuneLearnFast", { set.seed(41334) par(mfrow = c(2, 2)) par(mar = c(5.1, 4.1, 0.1, 0.1)) for(ii in 1:1){ #### !!!!!!!!!! set to 1:4 to test also elfss if(ii == 1){ ### 1) 4D Gaussian example dat <- gamSim(1, n=1000, dist="normal", scale=2, verbose=FALSE) form <- y ~ s(x0)+s(x1)+s(x2)+s(x3) qus <- c(0.01, 0.5, 0.99) } if(ii == 2){ ### 2) 1D Gamma esample n <- 1000 x <- seq(-4, 4, length.out = n) X <- cbind(1, x, x^2) beta <- c(0, 1, 1) sigma <- 1 # sigma = .1+(x+4)*.5 ## sigma grows with x f <- drop(X %*% beta) tauSim <- 0.9 y <- f + rgamma(n, 3, 1)# rlf(n, 0, tau = tauSim, sig = sigma, lam)# # # rnorm(n, 0, sigma) form <- y ~ s(x, k = 30) dat <- data.frame(cbind(y, x)) names(dat) <- c("y", "x") qus <- c(0.1, 0.95, 0.99) } if( ii == 3 ){ ### 3) 3D Gamma esample n <- 1000 x <- runif(n, -4, 4); z <- runif(n, -8, 8); w <- runif(n, -4, 4) X <- cbind(1, x, x^2, z, sin(z), w^3, cos(w)) beta <- c(0, 1, 1, -1, 2, 0.1, 3) sigma <- 0.5 f <- drop(X %*% beta) dat <- f + rgamma(n, 3, 1) dat <- data.frame(cbind(dat, x, z, w)) names(dat) <- c("y", "x", "z", "w") bs <- "cr" formF <- y~s(x, k = 30, bs = bs) + s(z, k = 30, bs = bs) + s(w, k = 30, bs = bs) qus <- c(0.01, 0.5, 0.95) } if(ii == 4){ ### 1) 4D Gaussian example BUT gamlss version dat <- gamSim(1, n=1000, dist="normal", scale=2, verbose=FALSE) form <- list(y ~ s(x0)+s(x1)+s(x2)+s(x3), ~ s(x0)) qus <- c(0.01, 0.5, 0.99) } # Checking 
that the loss evaluated by tuneLearn is close to that evaluated # by tuneLearnFast. They can't be exactly the same, because the order with which # the losses are evaluated are different (hence different initializations) expect_error({ calibr <- list("fast" = list(), "slow" = list()) calibr[["fast"]] <- tuneLearnFast(form, data = dat, qu = qus, control = list("progress" = FALSE)) calibr[["slow"]] <- lapply(1:length(qus), function(.kk){ tuneLearn(form, data = dat, qu = qus[.kk], lsig = calibr[["fast"]]$store[[.kk]][1, ], control = list("progress" = FALSE))}) }, NA) x <- lapply(calibr[["fast"]]$store, function(.inp) .inp[1, ]) y1 <- lapply(calibr[["fast"]]$store, function(.inp) log(.inp[2, ])) y2 <- sapply(calibr[["slow"]], function(.inp) log(.inp$loss)) plot(x[[1]], y1[[1]], col = 1, xlim = range(do.call("c", x)), ylim = range(c(do.call("c", y1), do.call("c", y2))), ylab = "log-loss", xlab = expression(log(sigma))) points(x[[2]], y1[[2]], col = 2) points(x[[3]], y1[[3]], col = 3) lines(sort(x[[1]]), y2[[1]], col = 1) lines(sort(x[[2]]), y2[[2]], col = 2) lines(sort(x[[3]]), y2[[3]], col = 3) } }) qgam/tests/testthat/test-tuneLearnFast.R0000644000176200001440000000476613615323264020076 0ustar liggesuserscontext("tuneLearnFast") # test_that("tuneLearnFast_gamlss", { # # #set.seed(651) # n <- 1000 # x <- seq(-4, 3, length.out = n) # X <- cbind(1, x, x^2) # beta <- c(0, 1, 1) # sigma = 1.2 + sin(2*x) # f <- drop(X %*% beta) # dat <- f + rnorm(n, 0, sigma) # dataf <- data.frame(cbind(dat, x)) # names(dataf) <- c("y", "x") # form <- list(y~s(x, k = 30, bs = "cr"), ~ s(x, k = 30, bs = "cr")) # # QU <- 0.9 # lossType <- rep(c("calFast", "cal", "pin"), each = 2) # # par(mfrow = c(3, 2)) # par(mar = c(5.1, 4.1, 0.1, 2.1)) # for(ii in c(1, 3, 5)){ # Set to 1:6 if you want to test all calibration methods # # expect_error({ # Actually we expect NO error!! 
# tun <- tuneLearnFast(form, data = dataf, qu = QU, # control = list("loss" = lossType[ii], "progress" = FALSE, "K" = 20), # multicore = ((ii %% 2) == 0), ncores = 2) # # fit <- qgam(form, qu = QU, lsig = tun$lsig, data = dataf) # # ylb <- if((ii %% 2) == 0) { paste(lossType[ii], "multicore") } else { lossType[ii] } # plot(x, dat, col = "grey", ylab = ylb) # tmp <- predict(fit, se = TRUE) # lines(x, tmp$fit) # lines(x, tmp$fit + 3 * tmp$se.fit, col = 2) # lines(x, tmp$fit - 3 * tmp$se.fit, col = 2) # # plot(sort(tun$store[[1]][1, ]), tun$store[[1]][2, ][order(tun$store[[1]][1, ])], type = "b", # ylab = "Calibration loss", xlab = expression(log(sigma))) # } # , NA) # # } # # }) # # # # # test_that("tuneLearnFast_egam", { # # set.seed(211) # dataf <- gamSim(1,n=400,dist="normal",scale=2,verbose=FALSE) # form <- y~s(x0)+s(x1)+s(x2)+s(x3) # # QU <- 0.9 # lossType <- rep(c("calFast", "cal", "pin"), each = 2) # # #par(mfrow = c(3, 2)) # for(ii in 1:2){ # # expect_error({ # Actually we expect NO error!! # tun <- tuneLearnFast(form, data = dataf, qu = QU, # control = list("loss" = lossType[ii], "K" = 20, "progress" = FALSE), # multicore = ((ii %% 2) == 0), ncores = 2) # # fit <- qgam(form, qu = QU, lsig = tun$lsig, data = dataf) # # ylb <- if((ii %% 2) == 0) { paste(lossType[ii], "multicore") } else { lossType[ii] } # plot(fit, select = 3, ylab = ylb) # # plot(sort(tun$store[[1]][1, ]), tun$store[[1]][2, ][order(tun$store[[1]][1, ])], type = "b", # ylab = "Calibration loss", xlab = expression(log(sigma))) # } # , NA) # # } # # }) qgam/tests/testthat/test-calFastTuneLearn.R0000644000176200001440000000537213615323232020503 0ustar liggesusers context("calFastTuneLearn") test_that("calFastTuneLearn", { set.seed(414) #par(mfrow = c(2, 2)) #par(mar = c(5.1, 4.1, 0.1, 0.1)) for(ii in 1:1){ #### !!!!!!!!!! 
set to 1:4 to test also elfss if(ii == 1){ ### 1) 4D Gaussian example dat <- gamSim(1, n=1000, dist="normal", scale=2, verbose=FALSE) form <- y ~ s(x0)+s(x1)+s(x2)+s(x3) lsig <- seq(-5.5, 4, length.out = 15) qus <- c(0.01, 0.5, 0.99) } if(ii == 2){ ### 2) 1D Gamma esample n <- 1000 x <- seq(-4, 4, length.out = n) X <- cbind(1, x, x^2) beta <- c(0, 1, 1) sigma <- 1 # sigma = .1+(x+4)*.5 ## sigma grows with x f <- drop(X %*% beta) tauSim <- 0.9 y <- f + rgamma(n, 3, 1)# rlf(n, 0, tau = tauSim, sig = sigma, lam)# # # rnorm(n, 0, sigma) form <- y ~ s(x, k = 30) dat <- data.frame(cbind(y, x)) names(dat) <- c("y", "x") lsig <- seq(-5, 3, length.out = 15) qus <- c(0.01, 0.5, 0.95) } if( ii == 3 ){ ### 3) 3D Gamma esample n <- 1000 x <- runif(n, -4, 4); z <- runif(n, -8, 8); w <- runif(n, -4, 4) X <- cbind(1, x, x^2, z, sin(z), w^3, cos(w)) beta <- c(0, 1, 1, -1, 2, 0.1, 3) sigma <- 0.5 f <- drop(X %*% beta) dat <- f + rgamma(n, 3, 1) dat <- data.frame(cbind(dat, x, z, w)) names(dat) <- c("y", "x", "z", "w") bs <- "cr" formF <- y~s(x, k = 30, bs = bs) + s(z, k = 30, bs = bs) + s(w, k = 30, bs = bs) lsig <- seq(-3, 4, length.out = 15) qus <- c(0.01, 0.5, 0.95) } if(ii == 4){ ### 1) 4D Gaussian example BUT gamlss version dat <- gamSim(1, n=1000, dist="normal", scale=2, verbose=FALSE) form <- list(y ~ s(x0)+s(x1)+s(x2)+s(x3), ~ s(x0)) lsig <- seq(-5.5, 4, length.out = 15) qus <- c(0.01, 0.5, 0.99) } expect_error({ calibr <- list("fast" = list(), "slow" = list()) for(met in c("calFast", "cal")){ calibr[[met]] <- lapply(qus, function(.q){ tuneLearn(form, data = dat, lsig = lsig, qu = .q, control = list("loss" = met, "progress" = "none"))}) } } , NA) tmp <- cbind(sapply(calibr[["calFast"]], "[[", "loss"), sapply(calibr[["cal"]], "[[", "loss")) matplot(lsig, log(tmp), type = 'l', lty = c(1:3, 1:3), col = c(1, 1, 1, 2, 2, 2), ylab = "log-loss", xlab = expression(log(sigma))) legend("topright", col = 1:2, legend = c("calFast", "cal"), lty = 1) legend("top", lty = 1:3, legend = qus) 
} }) qgam/tests/testthat.R0000644000176200001440000000006413330600076014322 0ustar liggesuserslibrary(testthat) library(qgam) test_check("qgam") qgam/src/0000755000176200001440000000000014146705710011773 5ustar liggesusersqgam/src/init.c0000644000176200001440000000310213151257162013074 0ustar liggesusers/* Symbol registration initialization: original provided by Brian Ripley. Anything called from R should be registered here. (See also NAMESPACE:1) */ #include #include SEXP qgam_pmmult2(SEXP b, SEXP c, SEXP bt, SEXP ct, SEXP nthreads); void qgam_pls_fit1(double *y, double *X, double *w,double *wy, double *E, double *Es, int *n, int *q, int *rE, double *eta, double *penalty, double *rank_tol, int *nt, int *use_wy); void qgam_gdi2(double *X,double *E,double *Es,double *rS,double *U1, double *sp,double *theta,double *z,double *w,double *wz,double *wf, double *Dth,double *Det,double *Det2,double *Dth2,double *Det_th, double *Det2_th,double *Det3,double *Det_th2, double *Det4, double *Det3_th, double *Det2_th2, double *beta,double *b1,double *w1, double *D1,double *D2,double *P0,double *P1,double *P2, double *ldet, double *ldet1,double *ldet2,double *rV, double *rank_tol,int *rank_est, int *n,int *q, int *M,int *n_theta, int *Mp,int *Enrow,int *rSncol,int *deriv, int *fixed_penalty,int *nt,int *type,double *dVkk); static const R_CallMethodDef CallMethods[] = { {"qgam_pmmult2", (DL_FUNC) &qgam_pmmult2, 5}, {NULL, NULL, 0} }; R_CMethodDef CEntries[] = { {"qgam_pls_fit1", (DL_FUNC) &qgam_pls_fit1, 14}, {"qgam_gdi2",(DL_FUNC) &qgam_gdi2,48}, {NULL, NULL, 0} }; void R_init_qgam(DllInfo *dll) { R_registerRoutines(dll, CEntries, CallMethods, NULL, NULL); R_useDynamicSymbols(dll, FALSE); } qgam/src/mgcv_wrap.c0000644000176200001440000000550213151257162014124 0ustar liggesusers#include #include #include typedef SEXP (*sexpPmmultPtr) (SEXP, SEXP, SEXP, SEXP, SEXP); typedef void (*voidPLS1Ptr) (double*, double*, double*, double*, double*, double*, int*, int*, int*, double*, 
double*, double*, int*, int*); typedef void (*voidGdi2Ptr) (double *,double *,double *,double *,double *, double *,double *,double *,double *, double *,double *, double *,double *,double *,double *,double *, double *,double *, double *, double *, double *, double *, double *,double *,double *, double *, double *,double *,double *,double *, double *, double *, double *,double *, double *,int *, int *, int *, int *,int *, int *,int *,int *,int *, int *, int *, int *, double *); SEXP qgam_pmmult2(SEXP b, SEXP c, SEXP bt, SEXP ct, SEXP nthreads) { static sexpPmmultPtr fun = NULL; if (fun==NULL) fun = (sexpPmmultPtr) R_GetCCallable("mgcv", "mgcv_pmmult2"); SEXP a = fun(b, c, bt, ct, nthreads); return(a); } /* qgam_pmmult2 */ void qgam_pls_fit1(double *y, double *X, double *w,double *wy, double *E, double *Es, int *n, int *q, int *rE, double *eta, double *penalty, double *rank_tol, int *nt, int *use_wy) { static voidPLS1Ptr fun = NULL; if (fun==NULL) fun = (voidPLS1Ptr) R_GetCCallable("mgcv", "pls_fit1"); fun(y, X, w, wy, E, Es, n, q, rE, eta, penalty, rank_tol, nt, use_wy); } /* qgam_pls_fit1 */ void qgam_gdi2(double *X,double *E,double *Es,double *rS,double *U1, double *sp,double *theta,double *z,double *w,double *wz,double *wf, double *Dth,double *Det,double *Det2,double *Dth2,double *Det_th, double *Det2_th,double *Det3,double *Det_th2, double *Det4, double *Det3_th, double *Det2_th2, double *beta,double *b1,double *w1, double *D1,double *D2,double *P0,double *P1,double *P2, double *ldet, double *ldet1,double *ldet2,double *rV, double *rank_tol,int *rank_est, int *n,int *q, int *M,int *n_theta, int *Mp,int *Enrow,int *rSncol,int *deriv, int *fixed_penalty,int *nt,int *type,double *dVkk) { static voidGdi2Ptr fun = NULL; if (fun==NULL) fun = (voidGdi2Ptr) R_GetCCallable("mgcv", "gdi2"); fun(X, E, Es, rS, U1, sp, theta, z, w, wz, wf, Dth, Det, Det2, Dth2, Det_th, Det2_th, Det3, Det_th2, Det4, Det3_th, Det2_th2, beta, b1, w1, D1, D2, P0, P1, P2, ldet, ldet1, ldet2, rV, 
rank_tol, rank_est, n, q, M, n_theta, Mp, Enrow, rSncol, deriv, fixed_penalty, nt, type, dVkk); } /* qgam_gdi2 */ qgam/vignettes/0000755000176200001440000000000014146705710013214 5ustar liggesusersqgam/vignettes/qgam.Rmd0000644000176200001440000005252014073267073014615 0ustar liggesusers--- title: "qgam: quantile non-parametric additive models" date: '`r format(Sys.Date(), "%B %d %Y")`' author: "Matteo Fasiolo, Simon N. Wood, Yannig Goude, and Raphael Nedellec" output: html_document: toc: true number_sections: true vignette: > %\VignetteEngine{knitr::rmarkdown} %\VignetteIndexEntry{quantile_mgcViz} %\VignetteEncoding{UTF-8} --- ```{r setup, include=FALSE} library(knitr) opts_chunk$set(out.extra='style="display:block; margin: auto"', fig.align="center", tidy=FALSE) ``` This R package offers methods for fitting additive quantile regression models based on splines, using the methods described in [Fasiolo et al., 2017](https://arxiv.org/abs/1707.03307). The main fitting functions are: - `qgam()` fits an additive quantile regression model to a single quantile. Very similar to `mgcv::gam()`. It returns an object of class `qgam`, which inherits from `mgcv::gamObject`. - `mqgam()` fits the same additive quantile regression model to several quantiles. It is more efficient that calling `qgam()` several times, especially in terms of memory usage. - `tuneLearn()` useful for tuning the learning rate of the Gibbs posterior. It evaluates a calibration loss function on a grid of values provided by the user. - `tuneLearnFast()` similar to `tuneLearn()`, but here the learning rate is selected by minimizing the calibration loss, using Brent method. A first example: smoothing the motorcycle dataset ======================= Let's start with a simple example. Here we are fitting a regression model with an adaptive spline basis to quantile 0.8 of the motorcycle dataset. 
```{r 1, message = F}
library(qgam); library(MASS)

if( suppressWarnings(require(RhpcBLASctl)) ){ blas_set_num_threads(1) } # Optional

fit <- qgam(accel~s(times, k=20, bs="ad"), data = mcycle, qu = 0.8)

# Plot the fit
xSeq <- data.frame(cbind("accel" = rep(0, 1e3), "times" = seq(2, 58, length.out = 1e3)))
pred <- predict(fit, newdata = xSeq, se=TRUE)
plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80))
lines(xSeq$times, pred$fit, lwd = 1)
lines(xSeq$times, pred$fit + 2*pred$se.fit, lwd = 1, col = 2)
lines(xSeq$times, pred$fit - 2*pred$se.fit, lwd = 1, col = 2)
```

`qgam` automatically calls `tuneLearnFast` to select the learning rate. The results of the calibration are stored in `fit$calibr`. We can check whether the optimization succeeded as follows:
```{r 2}
check(fit$calibr, 2)
```
The plot suggests that the calibration criterion has a single minimum, and that the optimizer has converged to its neighbourhood. Alternatively, we could have selected the learning rate by evaluating the loss function on a grid.
```{r 3, message = F}
set.seed(6436)
cal <- tuneLearn(accel~s(times, k=20, bs="ad"),
                 data = mcycle,
                 qu = 0.8,
                 lsig = seq(1, 3, length.out = 20),
                 control = list("progress" = "none")) #<- sequence of values for learning rate
check(cal)
```
Here the generic `check` function produces a different output. The first plot is the calibration criterion as a function of $log(\sigma)$, which should look fairly smooth. The second plot shows how the effective degrees of freedom (EDF) vary with $log(\sigma)$. Notice that here we are using an adaptive smoother, which includes five smoothing parameters.

We might want to fit several quantiles at once. This can be done with `mqgam`.
```{r 4}
quSeq <- c(0.2, 0.4, 0.6, 0.8)
set.seed(6436)
fit <- mqgam(accel~s(times, k=20, bs="ad"), data = mcycle, qu = quSeq)
```

To save memory, `mqgam` does not return one full `mgcv::gamObject` for each quantile; instead, it avoids storing redundant data (such as several copies of the design matrix). The output of `mqgam` can be manipulated using the `qdo` function.

```{r 5}
# Plot the data
xSeq <- data.frame(cbind("accel" = rep(0, 1e3), "times" = seq(2, 58, length.out = 1e3)))
plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80))

# Predict each quantile curve and plot
for(iq in quSeq){
  pred <- qdo(fit, iq, predict, newdata = xSeq)
  lines(xSeq$times, pred, col = 2)
}
```

Using `qdo` we can print out the summary for each quantile, for instance:
```{r 6}
# Summary for quantile 0.4
qdo(fit, qu = 0.4, summary)
```
Notice that here the generic function `summary` is calling `summary.gam`, because `summary.qgam` has not been implemented yet. Hence one cannot quite rely on the p-values provided by this function, because they are calculated using results that apply to parametric, not quantile, regression.

Dealing with heteroscedasticity
=======================

Let us simulate some data from a heteroscedastic model.
```{r h1}
set.seed(651)
n <- 2000
x <- seq(-4, 3, length.out = n)
X <- cbind(1, x, x^2)
beta <- c(0, 1, 1)
sigma = 1.2 + sin(2*x)
f <- drop(X %*% beta)
dat <- f + rnorm(n, 0, sigma)
dataf <- data.frame(cbind(dat, x))
names(dataf) <- c("y", "x")

qus <- seq(0.05, 0.95, length.out = 5)
plot(x, dat, col = "grey", ylab = "y")
for(iq in qus){ lines(x, qnorm(iq, f, sigma)) }
```

We now fit five quantiles between 0.05 and 0.95, using a quantile GAM with scalar learning rate.
```{r h2}
fit <- mqgam(y~s(x, k = 30, bs = "cr"), data = dataf, qu = qus)

qus <- seq(0.05, 0.95, length.out = 5)
plot(x, dat, col = "grey", ylab = "y")
for(iq in qus){
  lines(x, qnorm(iq, f, sigma), col = 2)
  lines(x, qdo(fit, iq, predict))
}
legend("top", c("truth", "fitted"), col = 2:1, lty = rep(1, 2))
```

With the exception of `qu = 0.95`, the fitted quantiles are close to the true ones, but their credible intervals don't vary much with x. Indeed, let's look at intervals for quantile 0.95.
```{r h3}
plot(x, dat, col = "grey", ylab = "y")
tmp <- qdo(fit, 0.95, predict, se = TRUE)
lines(x, tmp$fit)
lines(x, tmp$fit + 3 * tmp$se.fit, col = 2)
lines(x, tmp$fit - 3 * tmp$se.fit, col = 2)
```
We can get better credible intervals, and solve the "wiggliness" problem for the top quantile, by letting the learning rate vary with the covariate. In particular, we can use an additive model for the quantile location and one for the learning rate.
```{r h4}
fit <- qgam(list(y~s(x, k = 30, bs = "cr"), ~ s(x, k = 30, bs = "cr")), data = dataf, qu = 0.95)

plot(x, dat, col = "grey", ylab = "y")
tmp <- predict(fit, se = TRUE)
lines(x, tmp$fit)
lines(x, tmp$fit + 3 * tmp$se.fit, col = 2)
lines(x, tmp$fit - 3 * tmp$se.fit, col = 2)
```
Now the credible intervals correctly represent the underlying uncertainty, and the fit has the correct amount of smoothness. Neglecting to take the heteroscedasticity into account can lead to bias, in addition to inadequate coverage of the credible intervals.
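The calibration issue can also be quantified directly. Below is a minimal base-R sketch, where the helper `propBelow` is hypothetical (not part of `qgam`): it bins the covariate and computes, per bin, the fraction of responses falling below a candidate quantile curve. For a well-calibrated 0.95 quantile, every fraction should be close to 0.95. Replacing the true heteroscedastic scale with its average, which is loosely analogous to what a constant learning rate does, produces clear local mis-calibration.

```{r hetCheck}
# Hypothetical helper (for illustration only): fraction of responses
# below a candidate quantile curve, within bins of the covariate
propBelow <- function(x, y, qhat, nbin = 5){
  bins <- cut(x, quantile(x, seq(0, 1, length.out = nbin + 1)), include.lowest = TRUE)
  tapply(y < qhat, bins, mean)
}

set.seed(651)                  # same simulation as above
n <- 2000
x <- seq(-4, 3, length.out = n)
sigma <- 1.2 + sin(2*x)
f <- x + x^2
dat <- f + rnorm(n, 0, sigma)

# True conditional 0.95 quantile: close to 0.95 in every bin
round(propBelow(x, dat, f + qnorm(0.95) * sigma), 2)

# Same quantile computed with a constant (average) scale:
# clearly mis-calibrated in the bins where sigma is far from its mean
round(propBelow(x, dat, f + qnorm(0.95) * mean(sigma)), 2)
```

The second set of fractions drifts away from 0.95 exactly where the true scale differs most from its mean, which is the kind of pattern that the `cqcheck` function, illustrated in the next section, is designed to reveal.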
Let's go back to the motorcycle data set, and to the first model we fitted:
```{r mcy2rnd, message = F}
fit <- qgam(accel~s(times, k=20, bs="ad"), data = mcycle, qu = 0.8)

# Plot the fit
xSeq <- data.frame(cbind("accel" = rep(0, 1e3), "times" = seq(2, 58, length.out = 1e3)))
pred <- predict(fit, newdata = xSeq, se=TRUE)
plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80))
lines(xSeq$times, pred$fit, lwd = 1)
lines(xSeq$times, pred$fit + 2*pred$se.fit, lwd = 1, col = 2)
lines(xSeq$times, pred$fit - 2*pred$se.fit, lwd = 1, col = 2)
```
The slightly disturbing thing about this quantile fit is that for `Times < 10` the fit is clearly above all the responses. But we are fitting quantile 0.8, hence we should expect around 20$\%$ of the responses to be above the fit. The problem here is that the variance of the response (`accel`) varies wildly with `Times`, so that the bias induced by the smoothed pinball loss used by `qgam` is not constant (see Fasiolo et al. 2017 for details). This issue is solved by letting the learning rate change with `Times`:
```{r mcy2rnd2, message = F}
fit <- qgam(list(accel ~ s(times, k=20, bs="ad"), ~ s(times)),
            data = mcycle, qu = 0.8)

pred <- predict(fit, newdata = xSeq, se=TRUE)
plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80))
lines(xSeq$times, pred$fit, lwd = 1)
lines(xSeq$times, pred$fit + 2*pred$se.fit, lwd = 1, col = 2)
lines(xSeq$times, pred$fit - 2*pred$se.fit, lwd = 1, col = 2)
```

Model checking
=======================

The `qgam` package provides some functions that can be useful for model checking, but a more complete set of visualisation and checking tools can be found in the `mgcViz` R package (Fasiolo et al., 2018). In `qgam` we have:

- `cqcheck`: if we are fitting, say, quantile 0.2 we expect roughly $20\%$ of the observations to fall below the fitted quantile. This function produces some plots to verify this.
- `cqcheckI`: interactive version of `cqcheck`. Implemented using the `shiny` package. Not demonstrated here, but see `?cqcheckI`.
- `check.qgam`: provides some diagnostics regarding the optimization. Mainly based on `gam.check`.
- `check.learn`: diagnostic checks to verify that the learning rate selection went well. It can be used on the output of `tuneLearn`.
- `check.tuneLearn`: similar to `check.learn`, but it can be used on the output of `tuneLearn` or on the `$calibr` slot of a `qgam` object.

We start by illustrating the `cqcheck` function. In particular, let us consider the additive model:
$$
y \sim x+x^2+z+xz/2+e,\;\;\; e \sim N(0, 1)
$$
We start by simulating some data from it.

```{r c1}
library(qgam)
set.seed(15560)
n <- 1000
x <- rnorm(n, 0, 1); z <- rnorm(n)
X <- cbind(1, x, x^2, z, x*z)
beta <- c(0, 1, 1, 1, 0.5)
y <- drop(X %*% beta) + rnorm(n)
dataf <- data.frame(cbind(y, x, z))
names(dataf) <- c("y", "x", "z")
```

We fit a linear model to the median and we use `cqcheck` to produce a diagnostic plot.

```{r c2}
qu <- 0.5
fit <- qgam(y~x, qu = qu, data = dataf)
cqcheck(obj = fit, v = c("x"), X = dataf, y = y)
```

The `cqcheck` function takes a `qgam` object as input and it predicts the conditional quantile using the data in `X`. Then it bins the responses `y` using the corresponding values of `v` and it calculates, for every bin, what fraction of responses falls below the fitted quantile. Given that we are fitting the median, we would expect that around $50\%$ of the points fall below the fit. But, as the plot shows, this fraction varies widely along `x`. There is clearly a non-linear relation between the quantile location and `x`, hence we add a smooth for `x`.
```{r c4, message = F}
cqcheck(obj = fit, v = c("x", "z"), X = dataf, y = y, nbin = c(5, 5))
```

This plot uses binning as before: if a bin is red (green) this means that the fraction of responses falling below the fit is smaller (larger) than 0.5. Bright colours mean that the deviation is statistically significant. As we move along `z` (`x2` in the plot) the colour changes from green to red, so it makes sense to draw a marginal plot for `z`:

```{r c5, message = F}
cqcheck(obj = fit, v = c("z"), X = dataf, y = y, nbin = c(10))
```

We are clearly missing an effect here. Given that the effect looks pretty linear, we simply add a parametric term to the fit, which seems to solve the problem:

```{r c6, message = F}
fit <- qgam(y~s(x)+z, qu = qu, data = dataf)
cqcheck(obj = fit, v = c("z"))
```

But if we look again across both `x` and `z` we see that green prevails on the top-left to bottom-right diagonal, while the other diagonal is mainly red.

```{r c7, message = F}
cqcheck(obj = fit, v = c("x", "z"), nbin = c(5, 5))
```

This suggests that adding an interaction between `x` and `z` might be a good idea. Indeed, now `cqcheck` does not signal any problem:

```{r c8, message = F}
fit <- qgam(y~s(x)+z+I(x*z), qu = qu, data = dataf)
cqcheck(obj = fit, v = c("x", "z"), nbin = c(5, 5))
```

Now that we are fairly satisfied with the model structure, we can, for instance, fit several quantiles by doing:

```{r c9, message = F}
fit <- mqgam(y~s(x)+z+I(x*z), qu = c(0.2, 0.4, 0.6, 0.8), data = dataf)
```

We can then check whether the learning rate was selected correctly. Recall that the `qgam` function calls internally `tuneLearnFast`, hence we can look at how the calibration went by doing:

```{r c10, message = F}
check.learnFast(fit$calibr, 2:5)
```

For each quantile, the calibration loss seems to have a unique minimum, which is what one would hope. Objects of class `qgam` can also be checked using the generic function `check`, which defaults to `check.qgam`.
To use this function on the output of `mqgam`, we must use the `qdo` function:

```{r c11, message = F}
qdo(fit, 0.2, check)
```

The printed output gives some information about the optimizer used to estimate the smoothing parameters, for fixed learning rate. See `?check.qgam` for more information. The plot has been obtained using `cqcheck`, where each data point has been binned using the fitted values. On the right side of the plot there seem to be some large deviations, but the rug shows that there are very few data points there.

Setting the loss-smoothing parameter and checking convergence
=======================

Let's simulate some data:

```{r check1, message = F}
set.seed(5235)
n <- 1000
x <- seq(-3, 3, length.out = n)
X <- cbind(1, x, x^2)
beta <- c(0, 1, 1)
f <- drop(X %*% beta)
dat <- f + rgamma(n, 4, 1)
dataf <- data.frame(cbind(dat, x))
names(dataf) <- c("y", "x")
```

Assume that we want to estimate quantiles 0.05, 0.5 and 0.95:

```{r check2, message = F}
qus <- c(0.05, 0.5, 0.95)
fit <- mqgam(y ~ s(x), data = dataf, qu = qus)

plot(x, dat, col = "grey", ylab = "y")
lines(x, f + qgamma(0.95, 4, 1), lty = 2)
lines(x, f + qgamma(0.5, 4, 1), lty = 2)
lines(x, f + qgamma(0.05, 4, 1), lty = 2)
lines(x, qdo(fit, qus[1], predict), col = 2)
lines(x, qdo(fit, qus[2], predict), col = 2)
lines(x, qdo(fit, qus[3], predict), col = 2)
```

Since `qgam` version 1.3 the parameter `err`, which determines the smoothness of the loss function used by `qgam`, is determined automatically.
But there might be scenarios where you might want to choose it manually, so let's try to use several values of `err`:

```{r check2b, message = F}
lfit <- lapply(c(0.01, 0.05, 0.1, 0.2, 0.3, 0.5),
               function(.inp){
                 mqgam(y ~ s(x), data = dataf, qu = qus, err = .inp,
                       control = list("progress" = F))
               })

plot(x, dat, col = "grey", ylab = "y", ylim = c(-2, 20))
colss <- rainbow(length(lfit))
for(ii in 1:length(lfit)){
  lines(x, qdo(lfit[[ii]], qus[1], predict), col = colss[ii])
  lines(x, qdo(lfit[[ii]], qus[2], predict), col = colss[ii])
  lines(x, qdo(lfit[[ii]], qus[3], predict), col = colss[ii])
}
lines(x, f + qgamma(0.95, 4, 1), lty = 2)
lines(x, f + qgamma(0.5, 4, 1), lty = 2)
lines(x, f + qgamma(0.05, 4, 1), lty = 2)
```

The bias increases with `err`, and it is upward (downward) for high (low) quantiles. The median fit is not much affected by `err`. The bias really starts appearing for `err > 0.1`. Decreasing `err` tends to slow down computation:

```{r check3, message = F}
system.time( fit1 <- qgam(y ~ s(x), data = dataf, qu = 0.95, err = 0.05,
                          control = list("progress" = F)) )[[3]]
system.time( fit2 <- qgam(y ~ s(x), data = dataf, qu = 0.95, err = 0.001,
                          control = list("progress" = F)) )[[3]]
```

Even worse, it can lead to numeric problems. Here we check that we have found the minimum of the calibration loss:

```{r check4, message = F}
check(fit1$calibr, sel = 2)
check(fit2$calibr, sel = 2)
```

In the first case the loss looks smooth and with a single minimum, in the second case we have some instabilities. If the calibration loss looks like this, you generally have to increase `err`.

We can use `check` to have an estimate of the bias and to have information regarding the convergence of the smoothing parameter estimation routine:

```{r check5, message = F}
check(fit1)
```

The second plot suggests that the actual bias is much lower than the bound `err = 0.05`.
This is also supported by the first two lines of text, which say that 95.1\% of the residuals are negative, which is very close to the theoretical 95\%. The text says that full convergence in smoothing parameter estimation has been achieved; it is important to check this.

In summary, practical experience suggests that:

- the automatic procedure for selecting `err` offers a good compromise between bias and stability;
- the old default (`qgam` version < 1.3) was `err = 0.05`, which generally does not imply too much bias;
- if the calibration loss plotted by `check(fit$learn)` is irregular, try to increase `err`;
- the same holds if the text printed by `check(fit)` does not say that `full convergence` was achieved;
- you can estimate the bias using `check(fit)`;
- if you have to increase `err` to 0.2 or higher, there might be something wrong with your model;
- you might get messages saying that `outer Newton did not converge fully` during estimation. This might not be problematic as long as the calibration loss is smooth and `full convergence` was achieved;
- in preliminary studies do not decrease `err` too much, as it slows down computation;
- setting `err` too low is not a good idea: it is much better to have some bias than numerical problems.

Application to probabilistic electricity load forecasting
=======================

Here we consider a UK electricity demand dataset, taken from the National Grid [website](https://www.nationalgrid.com/). The dataset covers the period January 2011 to June 2016 and it contains the following variables:

- `NetDemand` net electricity demand between 11:30am and 12am.
- `wM` instantaneous temperature, averaged over several English cities.
- `wM_s95` exponential smooth of `wM`, that is `wM_s95[i] = a*wM[i] + (1-a)*wM_s95[i-1]` with `a=0.95`.
- `Posan` periodic index in `[0, 1]` indicating the position along the year.
- `Dow` factor variable indicating the day of the week.
- `Trend` progressive counter, useful for defining the long term trend.
- `NetDemand.48` lagged version of `NetDemand`, that is `NetDemand.48[i] = NetDemand[i-2]`.
- `Holy` binary variable indicating holidays.
- `Year` and `Date` should be obvious, and are partially redundant.

See [Fasiolo et al., 2017](https://arxiv.org/abs/1707.03307) for more details. This is what the demand over the period looks like:

```{r edf1}
data("UKload")
tmpx <- seq(UKload$Year[1], tail(UKload$Year, 1), length.out = nrow(UKload))
plot(tmpx, UKload$NetDemand, type = 'l', xlab = 'Year', ylab = 'Load')
```

To estimate the median demand, we consider the following model:

```{r edf2}
qu <- 0.5
form <- NetDemand~s(wM,k=20,bs='cr') + s(wM_s95,k=20,bs='cr') +
        s(Posan,bs='ad',k=30,xt=list("bs"="cc")) +
        Dow + s(Trend,k=4) + NetDemand.48 + Holy
```

Notice that we use very few knots for the long term trend, because we don't want to end up interpolating the data. We use an adaptive cyclic smooth for `Posan`; we'll explain later why adaptivity is needed here.

Now we tune the learning rate on a grid, on two cores. As the first plot shows, the calibration loss is minimized at $\log (\sigma)\approx 6$, while the second plot shows how the effective degrees of freedom of each smooth term changes with $\log (\sigma)$.

```{r edf3, message=FALSE}
set.seed(41241)
sigSeq <- seq(4, 8, length.out = 16)
closs <- tuneLearn(form = form, data = UKload,
                   lsig = sigSeq, qu = qu, control = list("K" = 20),
                   multicore = TRUE, ncores = 2)

check(closs)
```

Now let's fit the model with the learning rate corresponding to the lowest loss and let's look at the resulting smooth effects.

```{r edf4}
lsig <- closs$lsig
fit <- qgam(form = form, data = UKload, lsig = lsig, qu = qu)
plot(fit, scale = F, page = 1)
```

The effect of temperature (`wM`) is minimized around 18 degrees, which is reasonable. The cyclic effect of `Posan` has a very sharp drop corresponding to the winter holidays; we used an adaptive smooth in order to have more flexibility during this period.
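A quick overall sanity check, which complements the bin-by-bin plots produced by `cqcheck`, is to compute the raw fraction of responses falling below the fitted quantile, which should be close to `qu`. Here is a minimal sketch, assuming the median fit `fit` and the `UKload` data frame created in the chunks above are available:

```r
# Overall proportion of observations falling below the fitted median.
# For a well calibrated fit this should be close to qu = 0.5.
# Assumes `fit` and `UKload` from the previous chunks are in the workspace.
mean(UKload$NetDemand < predict(fit))
```

`cqcheck` performs essentially this computation within each bin, together with pointwise confidence intervals, which is why it can detect local deviations that this global average would miss.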
Now we can have a look at some diagnostic plots:

```{r edf5}
par(mfrow = c(2, 2))
cqcheck(fit, v = c("wM"), main = "wM")
cqcheck(fit, v = c("wM_s95"), main = "wM_s95")
cqcheck(fit, v = c("Posan"), main = "Posan")
cqcheck(fit, v = c("Trend"), main = "Trend", xaxt='n')
axis(1, at = UKload$Trend[c(1, 500, 1000, 1500, 2000)],
     UKload$Year[c(1, 500, 1000, 1500, 2000)] )
```

The plots for `wM_s95` and `Posan` don't show any important deviation from 0.5, the target quantile. Along `wM` we see a large deviation, but we have essentially no data for very high temperatures. If we look at deviations along the `Trend` variable, which is just a time counter, we see several important deviations. It would be interesting to verify why these occur (we have no answer currently). Finally, recall that we can produce 2D versions of these diagnostic plots, for instance:

```{r edf6}
par(mfrow = c(1, 1))
cqcheck(fit, v = c("wM", "Posan"), scatter = T)
```

References
=======================

* Fasiolo, M., Goude, Y., Nedellec, R. and Wood, S. N. (2017). Fast calibrated additive quantile regression. Available at https://arxiv.org/abs/1707.03307

* Fasiolo, M., Nedellec, R., Goude, Y. and Wood, S.N. (2018). Scalable visualisation methods for modern Generalized Additive Models.
  Available at https://arxiv.org/abs/1809.10632
qgam/R/0000755000176200001440000000000013736640224011407 5ustar liggesusersqgam/R/I_adTest.R0000644000176200001440000000061213033231203013204 0ustar liggesusers#### Anderson-Darling test for STANDARD normality
.adTest <- function(.x){

  n <- length(.x)
  .x <- sort(.x)

  # Cramer-von Mises statistic
  # out <- 1/(12*n) + sum( ((2*1:n - 1)/(2*n) - pnorm(.x))^2 )

  logp1 <- pnorm(.x, log.p = TRUE)
  logp2 <- pnorm(.x, lower.tail = F, log.p = TRUE)
  h <- (2 * seq(1:n) - 1) * (logp1 + rev(logp2))
  out <- -n - mean(h)

  return( out )
}qgam/R/qdo.R0000644000176200001440000000411513462143642012314 0ustar liggesusers##########################
#' Manipulating the output of \code{mqgam}
#'
#' @description Contrary to \code{qgam}, \code{mqgam} does not output a standard \code{gamObject}, hence
#'              methods such as \code{predict.gam} or \code{plot.gam} cannot be used directly. \code{qdo}
#'              provides a simple wrapper for such methods.
#'
#' @param obj the output of a \code{mqgam} call.
#' @param qu A vector whose elements must be in (0, 1). Each element indicates a quantile of interest,
#'           which should be an element of \code{names(obj$fit)}. If left to \code{NULL} the function
#'           \code{fun} will be applied to each of the quantile fits in \code{obj}.
#' @param fun The method or function that we want to use on the \code{gamObject} corresponding to quantile \code{qu}. For instance
#'            \code{predict}, \code{plot} or \code{summary}. By default this is the identity function (\code{I}), which
#'            means that the fitted model for quantile \code{qu} is returned.
#' @param ... Additional arguments to be passed to \code{fun}.
#' @return A list where the i-th entry is the output of \code{fun} (whatever that is) corresponding to quantile \code{qu[i]}.
#' @author Matteo Fasiolo .
#' @examples #' library(qgam); library(MASS) #' #' quSeq <- c(0.4, 0.6) #' set.seed(737) #' fit <- mqgam(accel~s(times, k=20, bs="ad"), data = mcycle, qu = quSeq) #' #' qdo(fit, 0.4, summary) #' invisible(qdo(fit, 0.4, plot, pages = 1)) #' #' # Return the object for qu = 0.6 and then plot it #' tmp <- qdo(fit, 0.6) #' plot(tmp) #' qdo <- function(obj, qu=NULL, fun=I, ...){ if( is.null(qu) ) { qu <- names(obj$fit) } if( length(qu)>1 ){ out <- lapply(qu, function(.q) qdo(obj, .q, fun, ...)) } else { if( !(qu %in% names(obj[["fit"]])) ) stop("qu is not in obj[[\"qu\"]].") tmpObj <- obj[["fit"]][[ which(names(obj[["fit"]]) == qu) ]] tmpObj[["model"]] <- obj[["model"]] tmpObj[["smooth"]] <- obj[["smooth"]] tmpObj[["call"]][["data"]] <- obj[["data"]] out <- fun(tmpObj, ...) } return( out ) } qgam/R/I_getVp.R0000644000176200001440000000067713151257162013075 0ustar liggesusers.getVp <- function(.fit, .obj, .lsp, .lpi) { .Vp <- if(inherits(.obj$family, "general.family")){ .fit$gcv.ubre <- as.numeric(.fit$REML) .fit$outer.info <- NULL .fit$sp <- exp(.lsp) .fit$scale.estimated <- FALSE .fit$scale <- 1 .fit$method <- "REML" .Vp <- gam.fit5.post.proc(.fit,.obj$Sl,.obj$L,.obj$lsp0,.obj$S,.obj$off)$Vb .Vp <- .Vp[.lpi[[1]], .lpi[[1]]] } else { .Vp <- .fit$Vb } return( .Vp ) } qgam/R/log1pexp.R0000644000176200001440000000215613630431605013267 0ustar liggesusers########################## #' Calculating log(1+exp(x)) accurately #' #' @description Calculates \code{log(1+exp(x))} in a numerically stable fashion. #' #' @param x a numeric vector. #' @return A numeric vector where the i-th entry is equal to \code{log(1+exp(x[i]))}, but computed more stably. #' @details We follow the recipe of Machler (2012), that is formula (10) page 7. #' @author Matteo Fasiolo . #' @references Machler, M. (2012). Accurately computing log(1-exp(-|a|)). #' URL: \url{https://cran.r-project.org/package=Rmpfr/vignettes/log1mexp-note.pdf}. 
#' @examples
#' set.seed(141)
#' library(qgam);
#' x <- rnorm(100, 0, 100)
#' log1pexp(x) - log1p(exp(x))
log1pexp <- function(x)
{
  indx <- .bincode(x, c(-Inf, -37, 18, 33.3, Inf), right = TRUE, include.lowest = TRUE)

  kk <- which(indx==1)
  if( length(kk) ){  x[kk] <- exp(x[kk])  }

  kk <- which(indx==2)
  if( length(kk) ){  x[kk] <- log1p( exp(x[kk]) ) }

  kk <- which(indx==3)
  if( length(kk) ){  x[kk] <- x[kk] + exp(-x[kk]) }

  return(x)
}
qgam/R/tuneLearn.R0000644000176200001440000002504714015713510013464 0ustar liggesusers##########################
#' Tuning the learning rate for Gibbs posterior
#'
#' @description The learning rate (sigma) of the Gibbs posterior is tuned either by calibrating the credible intervals for the fitted
#'              curve, or by minimizing the pinball loss on out-of-sample data. This is done by bootstrapping or by k-fold cross-validation.
#'              Here the calibration loss function is evaluated on a grid of values provided by the user.
#'
#' @param form A GAM formula, or a list of formulae. See ?mgcv::gam details.
#' @param data A data frame or list containing the model response variable and covariates required by the formula.
#'             By default the variables are taken from environment(formula): typically the environment from which gam is called.
#' @param lsig A vector of values of the log learning rate (log(sigma)) over which the calibration loss function is evaluated.
#' @param qu The quantile of interest. Should be in (0, 1).
#' @param err An upper bound on the error of the estimated quantile curve. Should be in (0, 1).
#'            Since qgam v1.3 it is selected automatically, using the methods of Fasiolo et al. (2017).
#'            The old default was \code{err=0.05}.
#' @param multicore If TRUE the calibration will happen in parallel.
#' @param ncores Number of cores used. Relevant if \code{multicore == TRUE}.
#' @param cluster An object of class \code{c("SOCKcluster", "cluster")}. This allows the user to pass her own cluster,
#'                which will be used if \code{multicore == TRUE}.
#'                The user has to remember to stop the cluster.
#' @param paropts a list of additional options passed into the foreach function when parallel computation is enabled.
#'                This is important if (for example) your code relies on external data or packages:
#'                use the .export and .packages arguments to supply them so that all cluster nodes
#'                have the correct environment set up for computing.
#' @param control A list of control parameters for \code{tuneLearn} with entries: \itemize{
#'                \item{\code{loss} = loss function used to tune log(sigma). If \code{loss=="cal"} is chosen, then log(sigma) is chosen so that
#'                                    credible intervals for the fitted curve are calibrated. See Fasiolo et al. (2017) for details.
#'                                    If \code{loss=="pin"} then log(sigma) approximately minimizes the pinball loss on the out-of-sample
#'                                    data.}
#'                \item{\code{sam} = sampling scheme used: \code{sam=="boot"} corresponds to bootstrapping and \code{sam=="kfold"} to k-fold
#'                                   cross-validation. The second option can be used only if \code{ctrl$loss=="pin"}.}
#'                \item{\code{K} = if \code{sam=="boot"} this is the number of bootstrap datasets, while if \code{sam=="kfold"} this is the
#'                                 number of folds. By default \code{K=50}.}
#'                \item{\code{b} = offset parameter used by the mgcv::gauslss. By default \code{b=0}.}
#'                \item{\code{vtype} = type of variance estimator used to standardize the deviation from the main fit in the calibration.
#'                                     If set to \code{"m"} the variance estimate obtained by the full data fit is used, if set to \code{"b"}
#'                                     then the variance estimates produced by the bootstrap fits are used. By default \code{vtype="m"}.}
#'                \item{\code{epsB} = positive tolerance used to assess convergence when fitting the regression coefficients on bootstrap data.
#'                                    In particular, if \code{|dev-dev_old|/(|dev|+0.1).
#' @references Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020.
#'             Fast calibrated additive quantile regression.
#'             Journal of the American Statistical Association (to appear).
#' \url{https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521}. #' @examples #' library(qgam); library(MASS) #' #' # Calibrate learning rate on a grid #' set.seed(41444) #' sigSeq <- seq(1.5, 5, length.out = 10) #' closs <- tuneLearn(form = accel~s(times,k=20,bs="ad"), #' data = mcycle, #' lsig = sigSeq, #' qu = 0.5) #' #' plot(sigSeq, closs$loss, type = "b", ylab = "Calibration Loss", xlab = "log(sigma)") #' #' # Pick best log-sigma #' best <- sigSeq[ which.min(closs$loss) ] #' abline(v = best, lty = 2) #' #' # Fit using the best sigma #' fit <- qgam(accel~s(times,k=20,bs="ad"), data = mcycle, qu = 0.5, lsig = best) #' summary(fit) #' #' pred <- predict(fit, se=TRUE) #' plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", #' ylim = c(-150, 80)) #' lines(mcycle$times, pred$fit, lwd = 1) #' lines(mcycle$times, pred$fit + 2*pred$se.fit, lwd = 1, col = 2) #' lines(mcycle$times, pred$fit - 2*pred$se.fit, lwd = 1, col = 2) #' tuneLearn <- function(form, data, lsig, qu, err = NULL, multicore = !is.null(cluster), cluster = NULL, ncores = detectCores() - 1, paropts = list(), control = list(), argGam = NULL) { if( length(qu) > 1 ) stop("length(qu) > 1, but this method works only for scalar qu") # Removing all NAs, unused variables and factor levels from data data <- .cleanData(.dat = data, .form = form, .drop = argGam$drop.unused.levels) lsig <- sort( lsig ) # Setting up control parameter ctrl <- list( "loss" = "calFast", "sam" = "boot", "K" = 50, "b" = 0, "vtype" = "m", "epsB" = 1e-5, "verbose" = FALSE, "link" = "identity", "progress" = ifelse(multicore, "none", "text") ) # Checking if the control list contains unknown names. 
# Entries in "control" substitute those in "ctrl"
  ctrl <- .ctrlSetup(innerCtrl = ctrl, outerCtrl = control)
  if( ctrl$progress == FALSE ) { ctrl$progress <- "none" }
  if( !(ctrl$vtype%in%c("m", "b")) ) stop("control$vtype should be either \"m\" or \"b\" ")
  if( !(ctrl$loss%in%c("calFast", "cal", "pin")) ) stop("control$loss should be either \"cal\", \"pin\" or \"calFast\" ")
  if( !(ctrl$sam%in%c("boot", "kfold")) ) stop("control$sam should be either \"boot\" or \"kfold\" ")
  if( (ctrl$loss=="cal") && (ctrl$sam=="kfold") ) stop("You can't use control$sam == \"kfold\" when ctrl$loss==\"cal\" ")
  if( length(argGam$sp) && ctrl$loss != c("calFast") ){ stop("Cannot fix smoothing parameters unless control$loss == \"calFast\".") }

  n <- nrow(data)
  nt <- length(lsig)

  # Gaussian fit, used for initializations
  # NB Initializing smoothing parameters using gausFit is a very BAD idea
  if( is.formula(form) ) {
    gausFit <- do.call("gam", c(list("formula" = form, "data" = quote(data), "family" = gaussian(link=ctrl[["link"]])), argGam))
    varHat <- gausFit$sig2
    initM <- list("start" = coef(gausFit) + c(quantile(gausFit$residuals, qu), rep(0, length(coef(gausFit))-1)),
                  "in.out" = NULL) # let gam() initialize sp via initial.spg()
    formL <- form
  } else {
    gausFit <- do.call("gam", c(list("formula" = form, "data" = quote(data), "family" = gaulss(link=list(ctrl[["link"]], "logb"), b=ctrl[["b"]])), argGam))
    varHat <- 1/gausFit$fit[ , 2]^2
    initM <- list("start" = NULL, "in.out" = NULL) # Have no clue
    formL <- form[[1]]
  }

  # Get loss smoothness
  if( is.null(err) ){
    err <- .getErrParam(qu = qu, gFit = gausFit)
  }

  # For each value of 'lsig' fit on full data
  main <- .tuneLearnFullFits(lsig = lsig, form = formL, fam = "elf", qu = qu, err = err, ctrl = ctrl, data = data,
                             argGam = argGam, gausFit = gausFit, varHat = varHat, initM = initM)

  # Get score for each value of 'lsig'
  outLoss <- if( ctrl$loss == "calFast" ){ # Fast calibration (loss already calculated) OR ...
sapply(main[["store"]], "[[", "loss") } else { # ... bootstrapping or cross-validation .tuneLearnBootstrapping(lsig = lsig, form = formL, fam = "elf", qu = qu, ctrl = ctrl, data = data, store = main[["store"]], pMat = main[["pMat"]], argGam = argGam, multicore = multicore, cluster = cluster, ncores = ncores, paropts = paropts) } names( outLoss ) <- lsig # convProb indicates whether there have been convergence problems during smoothing parameter estimation convProb <- sapply(main[["store"]], "[[", "convProb") names(convProb) <- lsig out <- list("lsig" = lsig[which.min(outLoss)], "loss" = outLoss, "edf" = main[["edfStore"]], "convProb" = convProb) attr(out, "class") <- "learn" return( out ) } qgam/R/I_biasedCov.R0000644000176200001440000000320113332314437013671 0ustar liggesusers########## # Internal function estimates the covariance matrix of the score function, under the assumption that # Dllk/Deta is independent of the covariate vector x. # INPUT # - fit: a gamObject fitted using the elf or elflss family # - EXXT: E(xx^T) # - EXEXT: E(x)E(x)^T # - X: the full "lpmatrix" corresponding to both linear predictors # - lpi: a list of indexes indicating the position of the coefficients corresponding to each linear predictor # # OUTPUT # - the estimate of cov(DlDeta * X) and the mixture parameter alpha in (0, 1) to be used. # .biasedCov <- function(fit, X, EXXT, EXEXT, lpi) { npar <- length( coef(fit) ) DllDeta <- .llkGrads(gObj = fit, X = X, jj = lpi, type = "DllkDeta") ESS <- sum(abs(DllDeta$l1[ , 1]))^2 / sum(DllDeta$l1[ , 1]^2) # If DlDeta and X are independent: cov(DlDeta * X) = E(X*X^T) * E(DlDeta^2) - E(DlDeta)^2 * E(X)*E(X)^T if( is.null(lpi) ){ # ELF OR... V <- EXXT * mean(DllDeta$l1^2) - mean(DllDeta$l1)^2 * EXEXT } else { # ... 
# ELFLSS
    sig <- drop( DllDeta$sig )
    Deta <- DllDeta$l1
    Deta[ , 1] <- Deta[ , 1] * sig
    j1 <- lpi[[1]]; j2 <- lpi[[2]]
    X[ , j1] <- X[ , j1] / sig
    EXXT <- crossprod(X, X) / nrow(X)
    EXEXT <- tcrossprod(colMeans(X), colMeans(X))
    Vmm <- mean(Deta[ , 1]^2) * EXXT[j1, j1] - mean(Deta[ , 1])^2 * EXEXT[j1, j1]
    Vss <- mean(Deta[ , 2]^2) * EXXT[j2, j2] - mean(Deta[ , 2])^2 * EXEXT[j2, j2]
    Vms <- mean(Deta[ , 1]*Deta[ , 2]) * EXXT[j1, j2] - mean(Deta[ , 1]) * mean(Deta[ , 2]) * EXEXT[j1, j2]
    V <- cbind(rbind(Vmm, t(Vms)), rbind(Vms, Vss))
  }

  alpha <- min(ESS / npar^2, 1)

  return( list("V" = V, "alpha" = alpha ) )
}qgam/R/I_sandwichLoss.R0000644000176200001440000000603513561027577014453 0ustar liggesusers##########
# Internal function which calculates the calibration loss function based on the sandwich estimator
# wrt the regression coefficients.
# INPUT
# - mFit: a gamObject fitted using the elflss family
# - X: the submatrix of XFull corresponding to the first linear predictor (X = XFull[ , lpi[[2]]]).
#      Notice that X = XFull in the extended GAM case.
# - XFull: the full "lpmatrix" corresponding to both linear predictors
# - sdev: the posterior standard deviation of the first linear predictor
# OUTPUT
# - a scalar indicating the loss
#
.sandwichLoss <- function(mFit, X, XFull, sdev, repar, alpha = NULL, VSim = NULL){

  lpi <- attr(X, "lpi")

  # Posterior variance of fitted quantile (mu)
  varOFI <- sdev ^ 2

  if( !is.null(lpi) ){ # GAMLSS version OR ...
    if( is.null(mFit$rp) ) { stop("mFit$rp is NULL, but a re-parametrization list is needed") }
    mFit$Sl <- repar # Add reparametrization list

    # Extract observed Fisher information and invert the transformations
    OFI <- - mFit$lbb
    OFI <- Sl.repara(mFit$rp, OFI, inverse = TRUE)
    OFI <- Sl.initial.repara(mFit$Sl, OFI, inverse = TRUE, cov = FALSE)

    # Extract penalty matrix and invert transformations
    P <- mFit$St
    P <- Sl.repara(mFit$rp, P, inverse = TRUE)
    P <- Sl.initial.repara(mFit$Sl, P, inverse = TRUE, cov = FALSE)

    # Calculate variance of the score
    grad <- .llkGrads(gObj = mFit, X = XFull, jj = lpi)

  } else { # Extended GAM version

    # Observed Fisher information and penalty matrix
    woW <- mFit$working.weights # NB: these can be negative
    OFI <- crossprod(sign(woW)*sqrt(abs(woW))*X, sqrt(abs(woW))*X)
    P <- .getPenMatrix(q = ncol(X), UrS = repar$UrS, sp = log(mFit$sp), Mp = repar$Mp, U1 = repar$U1)

  }

  # Calculate variance of the score
  grad <- .llkGrads(gObj = mFit, X = XFull, jj = lpi)

  # Covariance matrix of the score
  V <- cov( grad )
  if( !is.null(alpha) ){
    V <- alpha * V + (1-alpha) * VSim
  }
  V <- V * nrow(X)

  # Compute eigen-decomposition of V, get its rank and produce pseudo-inverse
  eV <- eigen( V )
  rv <- sum( eV$values > eV$values[1] * .Machine$double.eps )
  Q <- t( t(eV$vectors[ , 1:rv]) / sqrt(eV$values[1:rv]) )

  # Inverse 'Sandwich' posterior covariance
  iS <- (OFI %*% Q) %*% crossprod(Q, OFI) + P

  # Compute the Cholesky factor relevant to mu
  if( !is.null(lpi) ){
    c22 <- chol( iS[lpi[[2]], lpi[[2]]] )
    A <- forwardsolve(t(c22), iS[lpi[[2]], lpi[[1]]])
    C <- chol( iS[lpi[[1]], lpi[[1]]] - crossprod(A) )
  } else {
    C <- chol( iS )
  }

  # Posterior variance using sandwich: var(mu) = diag( X %*% iS^-1 %*% t(X) )
  varSand <- rowSums(t(backsolve(C, forwardsolve(t(C), t(X)))) * X)

  bias <- numeric( nrow(X) ) * 0

  # Exclude observations where the variance is 0 (probably because all corresponding covariate values are 0)
  not0 <- which( !(varOFI == 0 & varSand == 0) )

  # Average distance KL(sand_i, post_i)
  # on non-zeros
  outLoss <- mean( sqrt(varSand/varOFI + bias^2/varOFI + log(varOFI/varSand))[not0] )

  return( outLoss )
}qgam/R/I_shashCDF.R0000644000176200001440000000033613447126422013425 0ustar liggesusers####
# CDF of shash density
.shashCDF <- function(q, param) {
  mu <- param[1]
  sig <- exp(param[2])
  eps <- param[3]
  del <- exp(param[4])

  return( pnorm(sinh((asinh((q - mu)/(del * sig)) - eps/del) * del)) )
}qgam/R/I_prepBootObj.R0000644000176200001440000000274613263361332014233 0ustar liggesusers###############################################
# Function that prepares the bootstrapped gamObject for use
# with gam.fit4 or gam.fit5. It mainly works out the re-parametrization
# to be used, which does not seem to depend on the smoothing parameters
# (hence it only needs to be calculated once).
#
.prepBootObj <- function(obj, eps, control)
{
  if( is.null(control) ){ control <- list() }
  ctrl <- do.call("gam.control", control)
  # Overwriting default tolerance, useful for using sloppy convergence test on
  # bootstrapped fits
  if( !is.null(eps) ) { ctrl$epsilon <- eps }
  obj$control <- ctrl

  if (inherits(obj$family,"general.family")) {
    obj$Sl <- Sl.setup(obj) ## prepare penalty sequence
    obj$X <- Sl.initial.repara(obj$Sl,obj$X,both.sides=FALSE) ## re-parameterize accordingly
  }

  obj$rS <- mini.roots(obj$S, obj$off, ncol(obj$X), obj$rank)

  Ssp <- totalPenaltySpace(obj$S,obj$H,obj$off,ncol(obj$X))
  obj$Eb <- Ssp$E ## balanced penalty square root for rank determination purposes
  obj$U1 <- cbind(Ssp$Y,Ssp$Z) ## eigen space basis
  obj$Mp <- ncol(Ssp$Z) ## null space dimension
  obj$UrS <- list() ## need penalty matrices in overall penalty range space...
if (length(obj$S)>0) for (i in 1:length(obj$S)) obj$UrS[[i]] <- t(Ssp$Y)%*%obj$rS[[i]] else i <- 0 if (!is.null(obj$H)) { ## then the sqrt fixed penalty matrix H is needed for (RE)ML obj$UrS[[i+1]] <- t(Ssp$Y)%*%mroot(obj$H) } obj$family <- fix.family.link(obj$family) return(obj) }qgam/R/I_fitShash.R0000644000176200001440000000453213457635232013562 0ustar liggesusers####### # Fitting the SHASH density using optim. ####### # .fitShash <- function(x){ shashF0 <- function(param, x){ - .llkShash(x = x, mu = param[1], tau = param[2], eps = param[3], phi = param[4], deriv = 0)$l0 } shashF1 <- function(param, x){ - .llkShash(x = x, mu = param[1], tau = param[2], eps = param[3], phi = param[4], deriv = 1)$l1 } shashF2 <- function(param, x){ - .llkShash(x = x, mu = param[1], tau = param[2], eps = param[3], phi = param[4], deriv = 2)$l2 } # Seems to work OK, we could use Newton method in optimx but the Hessian was giving problems fitSH <- optim(par = c(mean(x), log(sd(x)), 0, 0), fn = shashF0, gr = shashF1, method = "BFGS", x = x) return( fitSH ) } # # # Testing by finite differences # library(numDeriv) # y <- rchisq(1000, df = 3) # parSH <- qgam:::.fitShash( y )$par # # # Checking gradient # jacobian(func = function(x){ # - qgam:::.llkShash(x = y, mu = x[1], tau = x[2], eps = x[3], phi = x[4], deriv = 0)$l0 # }, x = parSH) # # - qgam:::.llkShash(x = y, mu = parSH[1], tau = parSH[2], eps = parSH[3], phi = parSH[4], deriv = 1)$l1 # # par(mfrow = c(2, 2)) # # [1] Test of t-distributed # x <- rt(1000, 5) # parSH <- qgam:::.fitShash( x )$par # # xseq <- seq(-4, 4, length.out = 1000) # den <- unlist(sapply(xseq, qgam:::.llkShash, mu = parSH[1], tau = parSH[2], eps = parSH[3], phi = parSH[4])) # # plot(xseq, exp(den), col = 1, ylim = range(exp(den), dt(xseq, 5))) # lines(xseq, dt(xseq, 5), col = 2) # abline(v = qgam:::.shashMode(parSH), lty = 2) # # # [2] Test of chi-squ # x <- rchisq(1000, df = 3) # parSH <- qgam:::.fitShash( x )$par # # xseq <- seq(0, 6, length.out = 1000) # den 
<- unlist(sapply(xseq, qgam:::.llkShash, mu = parSH[1], tau = parSH[2], eps = parSH[3], phi = parSH[4])) # # plot(xseq, exp(den), col = 1, ylim = range(exp(den), dt(xseq, 5))) # lines(xseq, dchisq(xseq, df = 4), col = 2) # abline(v = qgam:::.shashMode(parSH), lty = 2) # # # [3] Test of gamma(3, 1) # x <- rgamma(1000, 3, 1) # parSH <- qgam:::.fitShash( x )$par # # xseq <- seq(0, 6, length.out = 1000) # den <- unlist(sapply(xseq, qgam:::.llkShash, mu = parSH[1], tau = parSH[2], eps = parSH[3], phi = parSH[4])) # # plot(xseq, exp(den), col = 1, ylim = range(exp(den), dt(xseq, 5))) # lines(xseq, dgamma(xseq, 3, 1), col = 2) # abline(v = qgam:::.shashMode(parSH), lty = 2) qgam/R/I_gamlssFit.R0000644000176200001440000003202713151257162013733 0ustar liggesusers.gamlssFit <- function(x,y,lsp,Sl,weights,offset,family,control,Mp,start,needVb){ ## NOTE: offset handling - needs to be passed to ll code ## fit models by general penalized likelihood method, ## given doubly extended family in family. lsp is log smoothing parameters ## Stabilization strategy: ## 1. Sl.repara ## 2. Hessian diagonally pre-conditioned if +ve diagonal elements ## (otherwise indefinite anyway) ## 3. Newton fit with perturbation of any indefinite hessian ## 4. At convergence test fundamental rank on balanced version of ## penalized Hessian. Drop unidentifiable parameters and ## continue iteration to adjust others. ## 5. All remaining computations in reduced space. ## ## Idea is that rank detection takes care of structural co-linearity, ## while preconditioning and step 1 take care of extreme smoothing parameters ## related problems. deriv <- 0 penalized <- if (length(Sl)>0) TRUE else FALSE nSp <- length(lsp) q <- ncol(x) nobs <- length(y) if (penalized) { Eb <- attr(Sl,"E") ## balanced penalty sqrt ## the stability reparameterization + log|S|_+ and derivs... 
rp <- ldetS(Sl,rho=lsp,fixed=rep(FALSE,length(lsp)),np=q,root=TRUE) x <- Sl.repara(rp$rp,x) ## apply re-parameterization to x Eb <- Sl.repara(rp$rp,Eb) ## root balanced penalty St <- crossprod(rp$E) ## total penalty matrix E <- rp$E ## root total penalty attr(E,"use.unscaled") <- TRUE ## signal initialization code that E not to be further scaled for(jj in 1:length(start)){ start[[jj]] <- as.numeric(Sl.repara(rp$rp, start[[jj]])) } ## re-para start ## NOTE: it can be that other attributes need re-parameterization here ## this should be done in 'family$initialize' - see mvn for an example. } else { ## unpenalized so no derivatives required deriv <- 0 rp <- list(ldetS=0,rp=list()) St <- matrix(0,q,q) E <- matrix(0,0,q) ## can be needed by initialization code } if (is.null(weights)) weights <- rep.int(1, nobs) if (is.null(offset)) offset <- rep.int(0, nobs) # Choose best initialization llf <- family$ll tmp <- sapply(start, function(.str){ .llp <- llf(y,x,.str,weights,family,offset=offset,deriv=0)$l - (t(.str)%*%St%*%.str)/2 return( .llp ) }) coef <- start[[ which.max(tmp) ]] ## get log likelihood, grad and Hessian (w.r.t. coefs - not s.p.s) ... ll <- llf(y,x,coef,weights,family,offset=offset,deriv=1) ll0 <- ll$l - (t(coef)%*%St%*%coef)/2 rank.checked <- FALSE ## not yet checked the intrinsic rank of problem rank <- q;drop <- NULL eigen.fix <- FALSE converged <- FALSE check.deriv <- FALSE; eps <- 1e-5 drop <- NULL;bdrop <- rep(FALSE,q) ## by default nothing dropped perturbed <- 0 ## counter for number of times perturbation tried on possible saddle for (iter in 1:(2*control$maxit)) { ## main iteration ## get Newton step... 
if (check.deriv) { fdg <- ll$lb*0; fdh <- ll$lbb*0 for (k in 1:length(coef)) { coef1 <- coef;coef1[k] <- coef[k] + eps ll.fd <- llf(y,x,coef1,weights,family,offset=offset,deriv=1) fdg[k] <- (ll.fd$l-ll$l)/eps fdh[,k] <- (ll.fd$lb-ll$lb)/eps } } grad <- ll$lb - St%*%coef Hp <- -ll$lbb+St D <- diag(Hp) indefinite <- FALSE if (sum(D <= 0)) { ## Hessian indefinite, for sure D <- rep(1,ncol(Hp)) if (eigen.fix) { eh <- eigen(Hp,symmetric=TRUE); ev <- abs(eh$values) Hp <- eh$vectors%*%(ev*t(eh$vectors)) } else { Ib <- diag(rank)*abs(min(D)) Ip <- diag(rank)*abs(max(D)*.Machine$double.eps^.5) Hp <- Hp + Ip + Ib } indefinite <- TRUE } else { ## Hessian could be +ve def in which case Choleski is cheap! D <- D^-.5 ## diagonal pre-conditioner Hp <- D*t(D*Hp) ## pre-condition Hp Ip <- diag(rank)*.Machine$double.eps^.5 } L <- suppressWarnings(chol(Hp,pivot=TRUE)) mult <- 1 while (attr(L,"rank") < rank) { ## rank deficient - add ridge penalty if (eigen.fix) { eh <- eigen(Hp,symmetric=TRUE);ev <- eh$values thresh <- max(min(ev[ev>0]),max(ev)*1e-6)*mult mult <- mult*10 ev[ev<thresh] <- thresh Hp <- eh$vectors%*%(ev*t(eh$vectors)) L <- suppressWarnings(chol(Hp,pivot=TRUE)) } else { Hp <- Hp + Ip*mult; mult <- mult*10 L <- suppressWarnings(chol(Hp,pivot=TRUE)) } indefinite <- TRUE } ## now L is the pivoted Choleski factor of a +ve def Hp piv <- attr(L,"pivot") ipiv <- piv; ipiv[piv] <- 1:ncol(L) step <- D*(backsolve(L,forwardsolve(t(L),(D*grad)[piv]))[ipiv]) c.norm <- sum(coef^2) if (c.norm>0) { ## limit step length to .1 of coef length s.norm <- sqrt(sum(step^2)) c.norm <- sqrt(c.norm) if (s.norm > .1*c.norm) step <- step*0.1*c.norm/s.norm } ## try the Newton step... coef1 <- coef + step ll <- llf(y,x,coef1,weights,family,offset=offset,deriv=1) ll1 <- ll$l - (t(coef1)%*%St%*%coef1)/2 khalf <- 0;fac <- 2 while ((!is.finite(ll1)||ll1 < ll0) && khalf < 25) { ## step halve until it succeeds... step <- step/fac;coef1 <- coef + step ll <- llf(y,x,coef1,weights,family,offset=offset,deriv=0) ll1 <- ll$l - (t(coef1)%*%St%*%coef1)/2 if (ll1>=ll0) { ll <- llf(y,x,coef1,weights,family,offset=offset,deriv=1) } else { ## abort if step has made no difference if (max(abs(coef1-coef))==0) khalf <- 100 } khalf <- khalf + 1 if (khalf>5) fac <- 5 } ## end step halve if (!is.finite(ll1) || ll1 < ll0) { ## switch to steepest descent... 
step <- -.5*drop(grad)*mean(abs(coef))/mean(abs(grad)) khalf <- 0 } while ((!is.finite(ll1)||ll1 < ll0) && khalf < 25) { ## step cut until it succeeds... step <- step/10;coef1 <- coef + step ll <- llf(y,x,coef1,weights,family,offset=offset,deriv=0) ll1 <- ll$l - (t(coef1)%*%St%*%coef1)/2 if (ll1>=ll0) { ll <- llf(y,x,coef1,weights,family,offset=offset,deriv=1) } else { ## abort if step has made no difference if (max(abs(coef1-coef))==0) khalf <- 100 } khalf <- khalf + 1 } if ((is.finite(ll1)&&ll1 >= ll0)||iter==control$maxit) { ## step ok. Accept and test coef <- coef + step ## convergence test... ok <- (iter==control$maxit||(abs(ll1-ll0) < control$epsilon*abs(ll0) && max(abs(grad)) < .Machine$double.eps^.5*abs(ll0))) if (ok) { ## appears to have converged if (indefinite) { ## not a well defined maximum if (perturbed==5) stop("indefinite penalized likelihood in gam.fit5 ") if (iter<4||rank.checked) { perturbed <- perturbed + 1 coef <- coef*(1+(runif(length(coef))*.02-.01)*perturbed) + (runif(length(coef)) - 0.5 ) * mean(abs(coef))*1e-5*perturbed ll <- llf(y,x,coef,weights,family,offset=offset,deriv=1) ll0 <- ll$l - (t(coef)%*%St%*%coef)/2 } else { rank.checked <- TRUE if (penalized) { Sb <- crossprod(Eb) ## balanced penalty Hb <- -ll$lbb/norm(ll$lbb,"F")+Sb/norm(Sb,"F") ## balanced penalized hessian } else Hb <- -ll$lbb/norm(ll$lbb,"F") ## apply pre-conditioning, otherwise badly scaled problems can result in ## wrong coefs being dropped... D <- abs(diag(Hb)) D[D<1e-50] <- 1;D <- D^-.5 Hb <- t(D*Hb)*D qrh <- qr(Hb,LAPACK=TRUE) rank <- Rrank(qr.R(qrh)) if (rank < q) { ## rank deficient. need to drop and continue to adjust other params drop <- sort(qrh$pivot[(rank+1):q]) ## set these params to zero bdrop <- 1:q %in% drop ## TRUE FALSE version ## now drop the parameters and recompute ll0... 
lpi <- attr(x,"lpi") xat <- attributes(x) xat$dim <- xat$dimnames <- NULL coef <- coef[-drop] St <- St[-drop,-drop] x <- x[,-drop] ## dropping columns from model matrix if (!is.null(lpi)) { ## need to adjust column indexes as well ii <- (1:q)[!bdrop];ij <- rep(NA,q) ij[ii] <- 1:length(ii) ## col i of old model matrix is col ij[i] of new for (i in 1:length(lpi)) { lpi[[i]] <- ij[lpi[[i]][!(lpi[[i]]%in%drop)]] # drop and shuffle up } } ## lpi adjustment done for (i in 1:length(xat)) attr(x,names(xat)[i]) <- xat[[i]] attr(x,"lpi") <- lpi attr(x,"drop") <- drop ## useful if family has precomputed something from x ll <- llf(y,x,coef,weights,family,offset=offset,deriv=1) ll0 <- ll$l - (t(coef)%*%St%*%coef)/2 } } } else { ## not indefinite really converged converged <- TRUE break } } else ll0 <- ll1 ## step ok but not converged yet } else { ## step failed. converged <- FALSE if (is.null(drop)) bdrop <- rep(FALSE,q) warning(paste("step failed: max abs grad =",max(abs(grad)))) break } } ## end of main fitting iteration ## at this stage the Hessian (of pen lik. w.r.t. coefs) should be +ve definite, ## so that the pivoted Choleski factor should exist... if (iter == 2*control$maxit&&converged==FALSE) warning(gettextf("iteration limit reached: max abs grad = %g",max(abs(grad)))) ldetHp <- 2*sum(log(diag(L))) - 2 * sum(log(D)) ## log |Hp| if (!is.null(drop)) { ## create full version of coef with zeros for unidentifiable fcoef <- rep(0,length(bdrop));fcoef[!bdrop] <- coef } else fcoef <- coef if( !needVb ){ return( list("coefficients" = Sl.repara(rp$rp,fcoef,inverse=TRUE)) ) ## undo re-parameterization of coef } else { dVkk <- d1l <- d2l <- d1bSb <- d2bSb <- d1b <- d2b <- d1ldetH <- d2ldetH <- d1b <- d2b <- NULL ## get grad and Hessian of REML score... 
REML <- -as.numeric(ll$l - t(coef)%*%St%*%coef/2 + rp$ldetS/2 - ldetHp/2 + Mp*log(2*pi)/2) REML1 <- if (deriv<1) NULL else -as.numeric( # d1l # cancels - d1bSb/2 + rp$ldet1/2 - d1ldetH/2 ) if (control$trace) { cat("\niter =",iter," ll =",ll$l," REML =",REML," bSb =",t(coef)%*%St%*%coef/2,"\n") cat("log|S| =",rp$ldetS," log|H+S| =",ldetHp," n.drop =",length(drop),"\n") if (!is.null(REML1)) cat("REML1 =",REML1,"\n") } REML2 <- if (deriv<2) NULL else -( d2l - d2bSb/2 + rp$ldet2/2 - d2ldetH/2 ) ## bSb <- t(coef)%*%St%*%coef lpi <- attr(x,"lpi") if (is.null(lpi)) { linear.predictors <- if (is.null(offset)) as.numeric(x%*%coef) else as.numeric(x%*%coef+offset) fitted.values <- family$linkinv(linear.predictors) } else { fitted.values <- linear.predictors <- matrix(0,nrow(x),length(lpi)) if (!is.null(offset)) offset[[length(lpi)+1]] <- 0 for (j in 1:length(lpi)) { linear.predictors[,j] <- as.numeric(x[,lpi[[j]],drop=FALSE] %*% coef[lpi[[j]]]) if (!is.null(offset[[j]])) linear.predictors[,j] <- linear.predictors[,j] + offset[[j]] fitted.values[,j] <- family$linfo[[j]]$linkinv( linear.predictors[,j]) } } coef <- Sl.repara(rp$rp,fcoef,inverse=TRUE) ## undo re-parameterization of coef if (!is.null(drop)&&!is.null(d1b)) { ## create full version of d1b with zeros for unidentifiable db.drho <- matrix(0,length(bdrop),ncol(d1b));db.drho[!bdrop,] <- d1b } else db.drho <- d1b ## and undo re-para... if (!is.null(d1b)) db.drho <- t(Sl.repara(rp$rp,t(db.drho),inverse=TRUE,both.sides=FALSE)) ret <- list(coefficients=coef,family=family,y=y,prior.weights=weights, fitted.values=fitted.values, linear.predictors=linear.predictors, scale.est=1, ### NOTE: needed by newton, but what is sensible here? 
REML= REML,REML1= REML1,REML2=REML2, rank=rank,aic = -2*ll$l, ## 2*edf needs to be added ##deviance = -2*ll$l, l= ll$l,## l1 =d1l,l2 =d2l, lbb = ll$lbb, ## Hessian of log likelihood L=L, ## chol factor of pre-conditioned penalized hessian bdrop=bdrop, ## logical index of dropped parameters D=D, ## diagonal preconditioning matrix St=St, ## total penalty matrix rp = rp$rp, db.drho = db.drho, ## derivative of penalty coefs w.r.t. log sps. #bSb = bSb, bSb1 = d1bSb,bSb2 = d2bSb, S1=rp$ldet1, #S=rp$ldetS,S1=rp$ldet1,S2=rp$ldet2, #Hp=ldetHp,Hp1=d1ldetH,Hp2=d2ldetH, #b2 = d2b) niter=iter,H = ll$lbb,dH = ll$d1H,dVkk=dVkk)#,d2H=llr$d2H) ## debugging code to allow components of 2nd deriv of hessian w.r.t. sp.s ## to be passed to deriv.check.... #if (!is.null(ll$ghost1)&&!is.null(ll$ghost2)) { # ret$ghost1 <- ll$ghost1; ret$ghost2 <- ret$ghost2 #} return( ret ) } } ## end of gam.fit5qgam/R/I_llkShash.R0000644000176200001440000000440213446732774013565 0ustar liggesusers####### # Log-likelihood of shash density and its derivatives ####### .llkShash <- function(x, mu, tau, eps, phi, deriv = 0){ sech <- function(.x){ 1 / cosh(.x) } sig <- exp( tau ) del <- exp( phi ) # 1) Calculate derivative of likelihood to appropriate order z <- (x - mu) / (sig*del) dTasMe <- del*asinh(z) - eps g <- -dTasMe CC <- cosh( dTasMe ) SS <- sinh( dTasMe ) l <- sum( -tau - 0.5*log(2*pi) + log(CC) - 0.5*log1pexp(2*log(abs(z))) - 0.5*SS^2 ) out <- list("l0" = l) # Compute sqrt(x^2 + m) when |x| >> 0 and m is reasonably small (e.g. + 1 or - 1) sqrtX2pm <- function(x, m){ x <- abs(x) kk <- which( x < 1e8 ) if( length(kk) ){ x[kk] <- sqrt(x[kk]^2 + m) } return(x) } # Compute (x^2 + m1) / (x^2 + m2)^2 when |x| >> 0 and m1, m2 are reasonably small (e.g. 
+ 1 or - 1) x2m1DivX2m2SQ <- function(x, m1, m2){ x <- abs(x) kk <- (x^2 + m1) < 0 o <- x * 0 if( any(kk) ){ o[kk] <- (x[kk]^2 + m1) / (x[kk]^2 + m2)^2 } if( sum(kk) < length(x) ){ o[!kk] <- ((sqrtX2pm(x[!kk], m1) / sqrtX2pm(x[!kk], m2)) / sqrtX2pm(x[!kk], m2))^2 } return(o) } if( deriv > 0 ) { zsd <- z*sig*del sSp1 <- sqrtX2pm(z, 1) # sqrt(z^2+1) asinhZ <- asinh(z) ## First derivatives De <- tanh(g) - 0.5*sinh(2*g) Dm <- 1/(del*sig*sSp1)*(del*(De)+z/sSp1) Dt <- zsd*Dm - 1 Dp <- Dt + 1 - del*asinhZ*De out$l1 <- c(sum(Dm), sum(Dt), sum(De), sum(Dp)) if( deriv > 1 ){ Dme <- (sech(g)^2 - cosh(2*g)) / (sig*sSp1) Dte <- zsd*Dme Dmm <- Dme/(sig*sSp1) + z*De/(sig^2*del*sSp1^3) + x2m1DivX2m2SQ(z, -1, 1)/(del*sig*del*sig) Dmt <- zsd*Dmm - Dm Dee <- -2*cosh(g)^2 + sech(g)^2 + 1 Dtt <- zsd*Dmt Dep <- Dte - del*asinhZ*Dee Dmp <- Dmt + De/(sig*sSp1) - del*asinhZ*Dme Dtp <- zsd*Dmp Dpp <- Dtp - del*asinhZ*Dep + del*(z/sSp1-asinhZ)*De out$l2 <- matrix(c(sum(Dmm), sum(Dmt), sum(Dme), sum(Dmp), sum(Dmt), sum(Dtt), sum(Dte), sum(Dtp), sum(Dme), sum(Dte), sum(Dee), sum(Dep), sum(Dmp), sum(Dtp), sum(Dep), sum(Dpp)), 4, 4, byrow = TRUE) } } return( out ) } qgam/R/sigmoid.R0000644000176200001440000000271113151257162013162 0ustar liggesusers########################## #' Sigmoid function and its derivatives #' #' @description Calculates the sigmoid function and its derivatives. #' @param y a numeric vector. #' @param deriv if \code{TRUE} also the first three derivatives of the sigmoid function will be computed. #' @return If \code{deriv==FALSE}, it returns a numeric vector equal to \code{1/(1+exp(-x))}. If #' \code{deriv==TRUE} it returns a list where the slot \code{$D0} contains \code{1/(1+exp(-x))}, #' while \code{$D1}, \code{$D2} and \code{$D3} contain its first three derivatives. #' @author Matteo Fasiolo. 
#' @examples #' library(qgam) #' set.seed(90) #' h <- 1e-6 #' p <- rnorm(1e4, 0, 1e6) #' sigmoid(p[1:50]) - 1/(1+exp(-p[1:50])) #' #' ##### Testing sigmoid derivatives #' e1 <- abs((sigmoid(p+h) - sigmoid(p-h)) / (2*h) - sigmoid(p, TRUE)[["D1"]]) / (2*h) #' e2 <- abs((sigmoid(p+h, TRUE)$D1 - sigmoid(p-h, TRUE)$D1) / #' (2*h) - sigmoid(p, TRUE)[["D2"]]) / (2*h) #' e3 <- abs((sigmoid(p+h, TRUE)$D2 - sigmoid(p-h, TRUE)$D2) / #' (2*h) - sigmoid(p, TRUE)[["D3"]]) / (2*h) #' #' if( any(c(e1, e2, e3) > 1) ) stop("Sigmoid derivatives are not estimated accurately") #' #' sigmoid <- function(y, deriv = FALSE) { l0 <- plogis(y, 0, 1) if( deriv ){ l1 <- l0 * (1-l0) l2 <- l1 - 2*l1*l0 l3 <- l2 - 2*l2*l0 - 2*l1*l1 out <- list("D0" = l0, "D1" = l1, "D2" = l2, "D3" = l3) return( out ) } else { return( l0 ) } }qgam/R/I_clusterExport.R0000644000176200001440000000056113033231203014646 0ustar liggesusers# Export objects with names in "toExport" from "envir" enviroment to "cluster". # If ALL == TRUE it exports all the objects in "envir" .clusterExport <- function(cluster, envir = parent.frame(), toExport = c(), ALL = FALSE) { allNames <- toExport if(ALL) allNames <- c(allNames, ls(envir = envir)) clusterExport(cluster, varlist = allNames, envir = envir) }qgam/R/I_checkLoss.R0000644000176200001440000000074713033231203013707 0ustar liggesusers####### Check loss function .checkloss <- function(y, mu, qu, add = TRUE){ tau <- 1 - qu d <- y - mu l <- d * 0 l[d < 0] <- - tau*d[d<0] l[d > 0] <- - (tau-1)*d[d>0] if( add ) l <- sum(l) return( l ) } #### Vettorize check Loss function .checklossVett <- function(y, mu, p){ n <- length( p ) out <- sapply(1:n, function(ii){ return( .checkloss(y, mu[ , ii], p[ii]) ) }) return( out ) }qgam/R/I_tuneLearnBootstrapping.R0000644000176200001440000001647613457642436016540 0ustar liggesusers############### #### Internal function that does the bootstrapping or cross-validation ############### .tuneLearnBootstrapping <- function(lsig, form, fam, qu, ctrl, data, 
store, pMat, argGam, multicore, cluster, ncores, paropts){ n <- nrow(data) nt <- length(lsig) if( ctrl$sam == "boot" ){ # Create weights for K bootstrap datasets OR... wb <- lapply(1:ctrl[["K"]], function(nouse) tabulate(sample(1:n, n, replace = TRUE), n)) } else { # ... OR for K training sets for CV tmp <- sample(rep(1:ctrl[["K"]], length.out = n), n, replace = FALSE) wb <- lapply(1:ctrl[["K"]], function(ii) tabulate(which(tmp != ii), n)) } # Create gam object for bootstrap fits bObj <- do.call("gam", c(list("formula" = form, "family" = quote(elf(qu = qu, co = NA, theta = NA, link = ctrl$link)), "data" = quote(data), "sp" = if(length(store[[1]]$sp)){store[[1]]$sp}else{NULL}, fit = FALSE), argGam)) # Preparing bootstrap object for gam.fit3 bObj <- .prepBootObj(obj = bObj, eps = ctrl$epsB, control = argGam$control) # Internal function that fits the bootstrapped datasets and returns standardized deviations from full data fit. To be run in parallel. # GLOBALS: lsig, ctrl, store, pMat, bObj, argGam .getBootDev <- function(.wb) { # # # # # # # # # .getBootDev START # # # # # # # # # y <- bObj$y ns <- length(lsig); n <- length(y) # Number of test observations nt <- ifelse(ctrl$loss == "cal", n, sum(!.wb)) # Creating boot weights from boot indexes bObj$w <- .wb lpi <- attr(pMat, "lpi") glss <- inherits(bObj$family, "general.family") init <- NULL .z <- vector("list", ns) for( ii in ns:1 ) # START lsigma loop, from largest to smallest (because when lsig is small the estimation is harder) { # In the gamlss case 'co' is a vector, and we have to take only those values that are in the bootstrapped dataset. 
co <- store[[ii]]$co bObj$lsp0 <- log( store[[ii]]$sp ) bObj$family$putCo( co ) bObj$family$putTheta( lsig[ii] ) init <- if(is.null(init)){ list(store[[ii]]$init) } else { list(init, store[[ii]]$init) } .offset <- bObj$offset if( glss ){ init <- lapply(init, function(inp) Sl.initial.repara(bObj$Sl, inp, inverse=FALSE, both.sides=FALSE)) fit <- .gamlssFit(x=bObj$X, y=bObj$y, lsp=as.matrix(bObj$lsp0), Sl=bObj$Sl, weights=bObj$w, offset=bObj$offset, family=bObj$family, control=bObj$control, Mp=bObj$Mp, start=init, needVb=(ctrl$loss=="cal" && ctrl$vtype=="b")) # In gamlss, we want to calibrate only the location and we need to reparametrize the coefficients init <- betas <- Sl.initial.repara(bObj$Sl, fit$coef, inverse=TRUE, both.sides=FALSE) betas <- betas[lpi[[1]]] if( !is.null(.offset) && !is.null(.offset[[1]]) ){ .offset <- .offset[[1]] } else { .offset <- numeric( nrow(pMat) ) } mu <- bObj$family$linfo[[1]]$linkinv( pMat %*% betas + .offset ) } else { bObj$null.coef <- bObj$family$get.null.coef(bObj)$null.coef fit <- .egamFit(x=bObj$X, y=bObj$y, sp=as.matrix(bObj$lsp0), Eb=bObj$Eb, UrS=bObj$UrS, offset=bObj$offset, U1=bObj$U1, Mp=bObj$Mp, family = bObj$family, weights=bObj$w, control=bObj$control, null.coef=bObj$null.coef, start=init, needVb=(ctrl$loss == "cal" && ctrl$vtype == "b")) init <- betas <- fit$coef if( is.null(.offset) ){ .offset <- numeric( nrow(pMat) ) } mu <- bObj$family$linkinv( pMat %*% betas + .offset ) } if( ctrl$loss == "cal" ){ # (1) Return standardized deviations from full data fit OR ... if( ctrl$vtype == "b" ){ # (2) Use variance of bootstrap fit OR ... Vp <- .getVp(fit, bObj, bObj$lsp0, lpi) sdev <- sqrt(rowSums((pMat %*% Vp) * pMat)) # same as sqrt(diag(pMat%*%Vp%*%t(pMat))) but (WAY) faster } else { # (2) ... variance of the main fit sdev <- store[[ii]]$sdev } .z[[ii]] <- drop((mu - as.matrix(store[[ii]]$fit)[ , 1]) / sdev) } else { # (1) ... 
out of sample observations minus their fitted values .z[[ii]] <- drop(y - mu)[ !.wb ] } } return( .z ) } # # # # # # # # # .getBootDev END # # # # # # # # # if( multicore ){ # Making sure "qgam" is loaded on cluster paropts[[".packages"]] <- unique( c("qgam", paropts[[".packages"]]) ) tmp <- .clusterSetUp(cluster = cluster, ncores = ncores) #, exportALL = TRUE) cluster <- tmp$cluster ncores <- tmp$ncores clusterCreated <- tmp$clusterCreated registerDoParallel(cluster) # Exporting stuff. To avoid the whole environment being exported all the time, use .GlobalEnv clusterExport(cluster, c("pMat", "bObj", "lsig", "ctrl", "store", "argGam", ".getVp", ".egamFit", ".gamlssFit"), envir = environment()) environment(.getBootDev) <- .GlobalEnv } # Loop over bootstrap datasets to get standardized deviations from full data fit withCallingHandlers({ z <- llply( .data = wb, .fun = .getBootDev, .parallel = multicore, .progress = ctrl[["progress"]], .inform = ctrl[["verbose"]], .paropts = paropts) }, warning = function(w) { # There is a bug in plyr concerning a useless warning about "..." if (length(grep("... may be used in an incorrect context", conditionMessage(w)))) invokeRestart("muffleWarning") }) # Get standardized deviations and ... .bindFun <- if( ctrl$loss == "cal" ) { "rbind" } else { "c" } z <- lapply(1:nt, function(.ii) do.call(.bindFun, lapply(z, function(.x) .x[[.ii]]))) if( ctrl$loss == "cal"){ # ... calculate KL distance OR ... 
# KL distance for explanations see [*] below outLoss <- sapply(z, function(.x){ .v <- .colVars(.x) return( mean( sqrt(.v + colMeans(.x)^2 - log(.v)) ) ) }) # E(z^2) = var(z) + E(z)^2 (var + bias) #outLoss <- sapply(z, function(.x) mean( (.colVars(.x) - 1)^2 + colMeans(.x)^2 ) ) #outLoss <- sapply(z, function(.x) mean( colMeans(.x)^2 ) ) #outLoss <- sapply(z, function(.x) mean( (colMeans(.x^2) - 1)^2 ) ) #outLoss <- sapply(z, function(.x) mean( apply(.x, 2, .adTest) ) ) # .adTest(as.vector(.x))) #outLoss <- sapply(z, function(.x) .adTest(as.vector(.x)) ) # .adTest(as.vector(.x))) } else { # ... pinball loss outLoss <- sapply(z, function(.x) .checkloss(.x, 0, qu = qu)) } # Close the cluster if it was opened inside this function if(multicore && clusterCreated) stopCluster(cluster) return( outLoss ) } ##### [*] Code showing that KL distance is invariant to standardization # mu1 <- rnorm(1) # mu2 <- rnorm(1) # v1 <- runif(1, 1, 2) # v2 <- runif(1, 1, 2) # # x <- rnorm(10000, mu1, sqrt(v1)) # # # KL distance between x ~ N(mu1, V1) and z ~ N(mu2, V2) # v1/v2 + (mu1 - mu2)^2 / v2 + log(v2/v1) # # # Empirical estimate of KL distance # var(x)/v2 + (mean(x) - mu2)^2 / v2 + log(v2/var(x)) # # # Normalizing x using mu2 and V2, assume y is now N(0, 1) # # and recalculate KL distance: the result must be the same # y <- (x - mu2) / sqrt(v2) # var(y) + (mean(y))^2 + log(1/var(y)) qgam/R/I_getPenMatrix.R0000644000176200001440000000127313263361332014407 0ustar liggesusers####################### # Function that extracts the penalty matrix from an extended gam object. # It returns St expressed at the "user-level" parametrization (i.e. not the # parametrization used internally for fitting) # .getPenMatrix <- function(q, UrS, sp, Mp, U1) { if (length(UrS)) { # Stable re-parameterization if needed.... 
rp <- gam.reparam(UrS, sp, 0) T <- diag( q ) T[1:ncol(rp$Qs), 1:ncol(rp$Qs)] <- rp$Qs T <- U1%*%T ## new params b'=T'b old params St <- rbind(cbind(rp$S,matrix(0,nrow(rp$S),Mp)),matrix(0,Mp,q)) # Invert re-parametrization St <- T %*% St %*% t(T) } else { T <- diag(q); St <- matrix(0,q,q) } return( St ) } qgam/R/I_colVars.R0000644000176200001440000000023613263361332013407 0ustar liggesusers # Variance of columns of a matrix .colVars <- function(.x){ .m <- colMeans(.x) .vr <- rowSums( (t(.x) - .m)^2 ) / (nrow(.x) - 1) return(.vr) } qgam/R/cqcheck.R0000644000176200001440000002712413732075760013144 0ustar liggesusers########################## #' Visually checking a fitted quantile model #' #' @description Given an additive quantile model, fitted using \code{qgam}, \code{cqcheck} provides some plots #' that allow to check what proportion of responses, \code{y}, falls below the fitted quantile. #' #' @param obj the output of a \code{qgam} call. #' @param v if a 1D plot is required, \code{v} should be either a single character or a numeric vector. In the first case #' \code{v} should be the names of one of the variables in the dataframe \code{X}. In the second case, the length #' of \code{v} should be equal to the number of rows of \code{X}. If a 2D plot is required, \code{v} should be #' either a vector of two characters or a matrix with two columns. #' @param X a dataframe containing the data used to obtain the conditional quantiles. By default it is NULL, in which #' case predictions are made using the model matrix in \code{obj$model}. #' @param y vector of responses. Its i-th entry corresponds to the i-th row of X. By default it is NULL, in which #' case it is internally set to \code{obj$y}. #' @param nbin a vector of integers of length one (1D case) or two (2D case) indicating the number of bins to be used #' in each direction. Used only if \code{bound==NULL}. 
#' @param bound in the 1D case it is a numeric vector whose increasing entries represent the bounds of each bin. #' In the 2D case a list of two vectors should be provided. \code{NULL} by default. #' @param lev the significance levels used in the plots, this determines the width of the confidence #' intervals. Default is 0.05. #' @param scatter if TRUE a scatterplot is added (using the \code{points} function). FALSE by default. #' @param ... extra graphical parameters to be passed to \code{plot()}. #' @return Simply produces a plot. #' @details Having fitted an additive model for, say, quantile \code{qu=0.4} one would expect that about 40% of the #' responses fall below the fitted quantile. This function allows one to visually compare the empirical proportion #' of responses (\code{qu_hat}) falling below the fit with its theoretical value (\code{qu}). In particular, #' the responses are binned, with the bins being constructed along one or two variables (given by argument #' \code{v}). Let \code{qu_hat[i]} be the proportion of responses below the fitted quantile in the ith bin. #' This should be approximately equal to \code{qu}, for every i. In the 1D case, when \code{v} is a single #' character or a numeric vector, \code{cqcheck} provides a plot where: the horizontal line is \code{qu}, #' the dots correspond to \code{qu_hat[i]} and the grey lines are confidence intervals for \code{qu}. The #' confidence intervals are based on \code{qbinom(lev/2, siz, qu)}, if the dots fall outside them, then #' \code{qu_hat[i]} might be deviating too much from \code{qu}. In the 2D case, when \code{v} is a vector of two #' characters or a matrix with two columns, we plot a grid of bins. The responses are divided between the bins #' as before, but now we don't plot the confidence intervals. Instead we report the empirical proportions \code{qu_hat[i]} #' for the non-empty bins, and we colour each bin in red if \code{qu_hat[i]<qu} and in green otherwise. #' @author Matteo Fasiolo. 
#' @examples #' ####### #' # Bivariate additive model y~1+x+x^2+z+x*z/2+e, e~N(0, 1) #' ####### #' \dontrun{ #' library(qgam) #' set.seed(15560) #' n <- 500 #' x <- rnorm(n, 0, 1); z <- rnorm(n) #' X <- cbind(1, x, x^2, z, x*z) #' beta <- c(0, 1, 1, 1, 0.5) #' y <- drop(X %*% beta) + rnorm(n) #' dataf <- data.frame(cbind(y, x, z)) #' names(dataf) <- c("y", "x", "z") #' #' #### Fit a constant model for median #' qu <- 0.5 #' fit <- qgam(y~1, qu = qu, data = dataf) #' #' # Look at what happens along x: clearly there is non linear pattern here #' cqcheck(obj = fit, v = c("x"), X = dataf, y = y) #' #' #### Add a smooth for x #' fit <- qgam(y~s(x), qu = qu, data = dataf) #' cqcheck(obj = fit, v = c("x"), X = dataf, y = y) # Better! #' #' # Lets look across x and z. As we move along z (x2 in the plot) #' # the colour changes from green to red #' cqcheck(obj = fit, v = c("x", "z"), X = dataf, y = y, nbin = c(5, 5)) #' #' # The effect look pretty linear #' cqcheck(obj = fit, v = c("z"), X = dataf, y = y, nbin = c(10)) #' #' #### Lets add a linear effect for z #' fit <- qgam(y~s(x)+z, qu = qu, data = dataf) #' #' # Looks better! #' cqcheck(obj = fit, v = c("z")) #' #' # Lets look across x and y again: green prevails on the top-left to bottom-right #' # diagonal, while the other diagonal is mainly red. #' cqcheck(obj = fit, v = c("x", "z"), nbin = c(5, 5)) #' #' ### Maybe adding an interaction would help? #' fit <- qgam(y~s(x)+z+I(x*z), qu = qu, data = dataf) #' #' # It does! The real model is: y ~ 1 + x + x^2 + z + x*z/2 + e, e ~ N(0, 1) #' cqcheck(obj = fit, v = c("x", "z"), nbin = c(5, 5)) #' } #' cqcheck <- function(obj, v, X = NULL, y = NULL, nbin = c(10, 10), bound = NULL, lev = 0.05, scatter = FALSE, ...) 
{ #### Set up if( is.null(X) ){ X <- obj$model if( is.null(y) ){ y <- obj$y } } else { if( is.null(y) ){ stop("If you provide X you must provide also the corresponding vector of responses y") } } if( length(y)!=nrow(X) ){ stop("length(y)!=nrow(X)") } ####### Setting up 1D and 2D cases if( is.character(v) ){ # Name(s) of variable(s) in X provided OR ... if(length(v) == 1){ ## 1D CASE ## if( !(v %in% names(X)) ) stop("(v %in% names(X)) == FALSE") x1 <- X[[v]] x2 <- NULL } else { if(length(v) == 2){ ## 2D CASE ## if( !(v[1] %in% names(X)) ) stop("(v[1] %in% names(X)) == FALSE") if( !(v[2] %in% names(X)) ) stop("(v[2] %in% names(X)) == FALSE") x1 <- X[[v[1]]] x2 <- X[[v[2]]] } else { stop("If is.character(v)==TRUE, then length(v) should be either 1 or 2.") } } } else { # ... actual numeric value of the variable(s) provided if(is.vector(v)){ ## 1D CASE ## x1 <- v x2 <- NULL if(length(v) != nrow(X)){ stop("length(v) != ncol(X)") } } else { if(is.matrix(v)){ ## 2D CASE ## if(ncol(v)!=2){ stop("In the 2D case, v should be a matrix with 2 columns or a vector of 2 characters") } x1 <- v[ , 1] x2 <- v[ , 2] } else { stop("In the 2D case, v should be a matrix with 2 columns or a vector of 2 characters") } } } # Discard NAs from X, y, x1 and x2. We don't do this on v, hence if v is numeric it is dangerous to use it from here onwards good <- complete.cases(X, y, x1, x2) y <- y[ good ] X <- X[good, , drop = FALSE] x1 <- x1[ good ] x2 <- x2[ good ] # Calculating proportion of observation falling below estimate quantile curve n <- nrow(X) mu <- as.matrix(predict(obj, newdata = X, type = "response"))[ , 1] res <- (mu - y) > 0 qu <- obj$family$getQu() # Now branching for main computation if( is.null(x2) ) ################ ONE VARIABLE { if(length(x1) != n) stop("length(x1) != ncol(X)") if( is.null(bound )){ # Create bounds OR ... nbin1 <- nbin[1] bound <- seq(min(x1), max(x1), length.out = nbin1 + 1) } else { # ... 
use those already given nbin1 <- length(bound) - 1 } # For each bin: count number of responses that are smaller than the fitted quantile indx <- as.factor( .bincode(x1, bound, TRUE, TRUE) ) # Attribute data to bins levels(indx) <- 1:nbin1 bsize <- as.vector( table(indx) ) # Count number of data in each bin indx <- as.integer(indx) bins <- numeric(nbin1) for(ii in 1:nbin1){ bins[ii] <- sum(res[indx==ii]) } # Count number of 1s in each bin # Remove empty bins while( any(bsize == 0) ) { bad <- which(bsize == 0)[1] bsize <- bsize[-bad] bins <- bins[-bad] bound <- bound[ -min(bad+1, nbin1) ] if(bad<nbin1){ indx[indx>bad] <- indx[indx>bad]-1 } nbin1 <- nbin1 - 1 } ub <- qbinom(lev/2, bsize, qu, lower.tail = FALSE) / bsize lb <- qbinom(lev/2, bsize, qu) / bsize #svpar <- par(no.readonly = TRUE) par(mar = c(5.1, 4.6, 4.1, 2.1)) x <- sort(x1) tmp <- rep(bins/bsize, bsize) plot(x, tmp, ylim = range(ub, lb, tmp), type = 'l', col = "white", ylab = expression(hat(P)(y<hat(mu))), ...) abline(h = qu, col = 2) for(ii in 1:nbin1){ segments(bound[ii], bins[ii]/bsize[ii], bound[ii+1], bins[ii]/bsize[ii], col = 1) segments(bound[ii], ub[ii], bound[ii+1], ub[ii], col = "grey") segments(bound[ii], lb[ii], bound[ii+1], lb[ii], col = "grey") points((bound[ii]+bound[ii+1])/2, bins[ii]/bsize[ii], col = ifelse(bins[ii]/bsize[ii] < ub[ii] && bins[ii]/bsize[ii] > lb[ii], 1, 2)) rug(x1[indx==ii], col = ifelse(ii%%2, 3, 4)) } #par(svpar) # Warning: this causes problems with shiny } else {
already provided if( length(bound) != 2 ){ stop("In the 2D case, bound should be a list of two numeric vectors") } bound1 <- bound[[1]] bound2 <- bound[[2]] nbin <- c(length(bound1)-1, length(bound2)-1) } # For each bin: count number of responses that are smaller than the fitted quantile bins <- bsize <- matrix(0, nbin[2], nbin[1]) for(ir in 1:nbin[2]){ # From bottom upward inRow <- which( x2 >= bound2[ir] & x2 <= bound2[ir+1] ) if(length(inRow)){ x1In <- x1[inRow] resIn <- res[inRow] for(ic in 1:nbin[1]){ # From left to right tmp <- which( x1In >= bound1[ic] & x1In <= bound1[ic+1] ) bsize[ir, ic] <- length(tmp) bins[ir, ic] <- sum( resIn[tmp] ) }}} # Plot! plot(x1, x2, pch = ".", col = "white", ylim = range(bound2), xlim = range(bound1), main = expression(hat(P)(y<hat(mu))), xlab = "x1", ylab = "x2") for(ir in 1:nbin[2]){ for(ic in 1:nbin[1]){ if( bsize[ir, ic] ){ pr <- bins[ir, ic] / bsize[ir, ic] b <- qbinom(lev/2, bsize[ir, ic], qu, lower.tail = (pr < qu)) / bsize[ir, ic] sig <- if(pr>qu){ pr>b } else{ pr<b } polygon(c(bound1[ic], bound1[ic+1], bound1[ic+1], bound1[ic]), c(bound2[ir], bound2[ir], bound2[ir+1], bound2[ir+1]), col = rgb(pr<qu, pr>qu, 0, alpha = ifelse(sig, 0.8, 0.3))) text(x = (bound1[ic]+bound1[ic+1])/2, y = (bound2[ir]+bound2[ir+1])/2, paste(round(pr, 3), ifelse(sig, "*", ''), sep = "")) } }} rug(x1, side = 1) rug(x2, side = 2) if(scatter){ points(x1, x2, pch = ".") } } return( invisible(NULL) ) }qgam/R/cqcheckI.R0000644000176200001440000002374113462140772013252 0ustar liggesusers########################## #' Interactive visual checks for additive quantile fits #' #' @description Given an additive quantile model, fitted using \code{qgam}, \code{cqcheckI} provides some interactive #' 2D plots that allow one to check what proportion of responses, \code{y}, falls below the fitted quantile. #' This is an interactive version of the \code{cqcheck} function. #' #' @param obj the output of a \code{qgam} call. #' @param v if a 1D plot is required, \code{v} should be either a single character or a numeric vector. In the first case #' \code{v} should be the name of one of the variables in the dataframe \code{X}. In the second case, the length #' of \code{v} should be equal to the number of rows of \code{X}. If a 2D plot is required, \code{v} should be #' either a vector of two characters or a matrix with two columns. 
#' @param X a dataframe containing the data used to obtain the conditional quantiles. By default it is NULL, in which #' case predictions are made using the model matrix in \code{obj$model}. #' @param y vector of responses. Its i-th entry corresponds to the i-th row of X. By default it is NULL, in which #' case it is internally set to \code{obj$y}. #' @param run if TRUE (default) the function produces an interactive plot, otherwise it returns the corresponding shiny app. #' @param width the width of the main plot. Default is "100\%". #' @param height the height of the main plot. Default is "680px". #' @return Simply produces an interactive plot. #' @details This is an interactive version of \code{cqcheck}, see \code{?cqcheck} for details. The main interactive #' feature is that one can select an area by brushing, and then double-click to zoom in. In the 1D case the vertical #' part of the selected area is not used: we zoom only along the x axis. Double-clicking without brushing zooms out. #' @author Matteo Fasiolo . #' @examples #' \dontrun{ #' ####### #' # Example 1: Bivariate additive model y~1+x+x^2+z+x*z/2+e, e~N(0, 1) #' ####### #' library(qgam) #' set.seed(15560) #' n <- 1000 #' x <- rnorm(n, 0, 1); z <- rnorm(n) #' X <- cbind(1, x, x^2, z, x*z) #' beta <- c(0, 1, 1, 1, 0.5) #' y <- drop(X %*% beta) + rnorm(n) #' dataf <- data.frame(cbind(y, x, z)) #' names(dataf) <- c("y", "x", "z") #' #' #### Fit a constant model for median #' qu <- 0.5 #' fit <- qgam(y~1, qu = qu, data = dataf) #' #' # Look at what happens along x: clearly there is a non-linear pattern here #' cqcheckI(obj = fit, v = c("x"), X = dataf, y = y) #' #' #### Add a smooth for x #' fit <- qgam(y~s(x), qu = qu, data = dataf) #' cqcheckI(obj = fit, v = c("x"), X = dataf, y = y) # Better! #' #' # Let's look across x and z. 
As we move along z (x2 in the plot) #' # the colour changes from green to red #' cqcheckI(obj = fit, v = c("x", "z"), X = dataf, y = y) #' #' # The effect looks pretty linear #' cqcheckI(obj = fit, v = c("z"), X = dataf, y = y) #' #' #### Let's add a linear effect for z #' fit <- qgam(y~s(x)+z, qu = qu, data = dataf) #' #' # Looks better! #' cqcheckI(obj = fit, v = c("z")) #' #' # Let's look across x and z again: green prevails on the top-left to bottom-right #' # diagonal, while the other diagonal is mainly red. #' cqcheckI(obj = fit, v = c("x", "z")) #' #' ### Maybe adding an interaction would help? #' fit <- qgam(y~s(x)+z+I(x*z), qu = qu, data = dataf) #' #' # It does! The real model is: y ~ 1 + x + x^2 + z + x*z/2 + e, e ~ N(0, 1) #' cqcheckI(obj = fit, v = c("x", "z")) #' } #' cqcheckI <- function(obj, v, X = NULL, y = NULL, run = TRUE, width = "100%", height = "680px") { #### Set up if( is.null(X) ){ X <- obj$model if( is.null(y) ){ y <- obj$y } } else { if( is.null(y) ){ stop("If you provide X you must also provide the corresponding vector of responses y") } } if( length(y)!=nrow(X) ){ stop("length(y)!=nrow(X)") } ####### Setting up 1D and 2D cases if( is.character(v) ){ # Name(s) of variable(s) in X provided OR ... if(length(v) == 1){ ## 1D CASE ## if( !(v %in% names(X)) ) stop("(v %in% names(X)) == FALSE") x1 <- X[[v]] x2 <- NULL } else { if(length(v) == 2){ ## 2D CASE ## if( !(v[1] %in% names(X)) ) stop("(v[1] %in% names(X)) == FALSE") if( !(v[2] %in% names(X)) ) stop("(v[2] %in% names(X)) == FALSE") x1 <- X[[v[1]]] x2 <- X[[v[2]]] } else { stop("If is.character(v)==TRUE, then length(v) should be either 1 or 2.") } } } else { # ... 
actual numeric value of the variable(s) provided if(is.vector(v)){ ## 1D CASE ## x1 <- v x2 <- NULL if(length(v) != nrow(X)){ stop("length(v) != nrow(X)") } } else { if(is.matrix(v)){ ## 2D CASE ## if(ncol(v)!=2){ stop("In the 2D case, v should be a matrix with 2 columns or a vector of 2 characters") } x1 <- v[ , 1] x2 <- v[ , 2] } else { stop("In the 2D case, v should be a matrix with 2 columns or a vector of 2 characters") } } } out <- if( is.null(x2) ){ # One dimensional OR ... .cqcheck1DI(.obj = obj, .x1 = x1, .X = X, .y = y, .width = width, .height = height) } else { # ... two dimensional case .cqcheck2DI(.obj = obj, .x1 = x1, .x2 = x2, .X = X, .y = y, .width = width, .height = height) } if( run ){ return(runApp(out)) } else { return(out) } } #### Internal function for 1D plot .cqcheck1DI <- function(.obj, .x1, .X, .y, .width, .height) { # User interface ui <- fluidPage( sidebarPanel( numericInput('nbin', 'Num. bins', 10, min = 1, max = Inf), numericInput('lev', 'Sign. lev.', 0.05, min = 0, max = 1), width = 2 ), mainPanel( h4("Brush and double-click to zoom in. 
Double-click to zoom out."), plotOutput("plot1", dblclick = "plot1_dblclick", hover = "plot1_hover", brush = brushOpts( id = "plot1_brush", resetOnNew = TRUE), width = .width, height = .height), verbatimTextOutput("info") ) ) # Server side server <- function(input, output) { # Control panel inputs ranges <- reactiveValues(x = NULL, y = NULL) nbin <- reactive({ input$nbin }) lev <- reactive({ input$lev }) myPlot <- function(brush = NULL) { if (!is.null(brush)) { good <- which(.x1 > brush$xmin & .x1 < brush$xmax) out <- cqcheck(obj = .obj, v = .x1[good], X = .X[good, ], y = .y[good], nbin = nbin(), lev = lev()) } else { out <- cqcheck(obj = .obj, v = .x1, X = .X, y = .y, nbin = nbin(), lev = lev()) } return( out ) } # Initial plot output$plot1 <- renderPlot({ myPlot() }) # Update if double click or double click + brush observeEvent(input$plot1_dblclick, { brush <- input$plot1_brush output$plot1 <- renderPlot({ myPlot(brush = brush) }) }) # Print some info output$info <- renderText({ x_str <- function(e) { if(is.null(e)) return("NULL\n") paste0("x=", round(e$x, 1), "\n") } x_range_str <- function(e) { if(is.null(e)) return("NULL\n") paste0("xmin=", round(e$xmin, 1), ", xmax=", round(e$xmax, 1)) } paste0( "hover: ", x_str(input$plot1_hover), "brush: ", x_range_str(input$plot1_brush) ) }) } return( shinyApp(ui, server) ) } #### Internal function for 2D plot .cqcheck2DI <- function(.obj, .x1, .x2, .X, .y, .width, .height) { # User interface ui <- fluidPage( sidebarPanel( numericInput('nbin1', 'N. bins x1', 10, min = 1, max = Inf), numericInput('nbin2', 'N. bins x2', 10, min = 1, max = Inf), numericInput('lev', 'Sign. lev.', 0.05, min = 0, max = 1), checkboxInput('scatter', 'Add scatter', value = FALSE), width = 2 ), mainPanel( h4("Brush and double-click to zoom in. 
Double-click to zoom out."), plotOutput("plot1", dblclick = "plot1_dblclick", hover = "plot1_hover", brush = brushOpts( id = "plot1_brush", resetOnNew = TRUE), width = .width, height = .height), verbatimTextOutput("info") ) ) # Server side server <- function(input, output) { # Control panel inputs ranges <- reactiveValues(x = NULL, y = NULL) nbin1 <- reactive({ input$nbin1 }) nbin2 <- reactive({ input$nbin2 }) lev <- reactive({ input$lev }) scatter <- reactive({ input$scatter }) myPlot <- function(brush = NULL) { if (!is.null(brush)) { good <- which(.x1 > brush$xmin & .x1 < brush$xmax & .x2 > brush$ymin & .x2 < brush$ymax) out <- cqcheck(obj = .obj, v = cbind(.x1[good], .x2[good]), X = .X[good, ], y = .y[good], nbin = c(nbin1(), nbin2()), scatter = scatter(), lev = lev()) } else { out <- cqcheck(obj = .obj, v = cbind(.x1, .x2), X = .X, y = .y, nbin = c(nbin1(), nbin2()), scatter = scatter(), lev = lev()) } return( out ) } # Initial plot output$plot1 <- renderPlot({ myPlot() }) # Update if double click or double click + brush observeEvent(input$plot1_dblclick, { brush <- input$plot1_brush output$plot1 <- renderPlot({ myPlot(brush = brush) }) }) # Print some info output$info <- renderText({ xy_str <- function(e) { if(is.null(e)) return("NULL\n") paste0("x1=", round(e$x, 1), ", x2=", round(e$y, 1), "\n") } xy_range_str <- function(e) { if(is.null(e)) return("NULL\n") paste0("x1min=", round(e$xmin, 1), ", x1max=", round(e$xmax, 1), ", x2min=", round(e$ymin, 1), ", x2max=", round(e$ymax, 1)) } paste0( "hover: ", xy_str(input$plot1_hover), "brush: ", xy_range_str(input$plot1_brush) ) }) } return( shinyApp(ui, server) ) }qgam/R/qgam.R0000644000176200001440000001773214146704232012465 0ustar liggesusers########################## #' Fit a smooth additive quantile regression model #' #' @description This function fits a smooth additive regression model for a single quantile. #' #' @param form A GAM formula, or a list of formulae. See ?mgcv::gam details. 
#' @param data A data frame or list containing the model response variable and covariates required by the formula. #' By default the variables are taken from environment(formula): typically the environment from which gam is called. #' @param qu The quantile of interest. Should be in (0, 1). #' @param lsig The value of the log learning rate used to create the Gibbs posterior. By default \code{lsig=NULL} and this #' parameter is estimated by posterior calibration described in Fasiolo et al. (2017). Obviously, the function is much faster #' if the user provides a value. #' @param err An upper bound on the error of the estimated quantile curve. Should be in (0, 1). #' Since qgam v1.3 it is selected automatically, using the methods of Fasiolo et al. (2017). #' The old default was \code{err=0.05}. #' @param multicore If TRUE the calibration will happen in parallel. #' @param ncores Number of cores used. Relevant if \code{multicore == TRUE}. #' @param cluster An object of class \code{c("SOCKcluster", "cluster")}. This allows the user to pass her own cluster, #' which will be used if \code{multicore == TRUE}. The user has to remember to stop the cluster. #' @param paropts a list of additional options passed into the foreach function when parallel computation is enabled. #' This is important if (for example) your code relies on external data or packages: #' use the .export and .packages arguments to supply them so that all cluster nodes #' have the correct environment set up for computing. #' @param control A list of control parameters. The only one relevant here is \code{link}, which is the link function #' used (see \code{?elf} and \code{?elflss} for defaults). All other control parameters are used by #' \code{tuneLearnFast}. See \code{?tuneLearnFast} for details. #' @param argGam A list of parameters to be passed to \code{mgcv::gam}. 
This list can potentially include all the arguments listed #' in \code{?gam}, with the exception of \code{formula}, \code{family} and \code{data}. #' @return A \code{gamObject}. See \code{?gamObject}. #' @author Matteo Fasiolo . #' @references Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020. #' Fast calibrated additive quantile regression. #' Journal of the American Statistical Association (to appear). #' \url{https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521}. #' @references Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2021. #' qgam: Bayesian Nonparametric Quantile Regression Modeling in R. #' Journal of Statistical Software, 100(9), 1-31, \doi{10.18637/jss.v100.i09}. #' @examples # #' ##### #' # Univariate "car" example #' #### #' library(qgam); library(MASS) #' #' # Fit for quantile 0.5 using the best sigma #' set.seed(6436) #' fit <- qgam(accel~s(times, k=20, bs="ad"), data = mcycle, qu = 0.5) #' #' # Plot the fit #' xSeq <- data.frame(cbind("accel" = rep(0, 1e3), "times" = seq(2, 58, length.out = 1e3))) #' pred <- predict(fit, newdata = xSeq, se=TRUE) #' plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80)) #' lines(xSeq$times, pred$fit, lwd = 1) #' lines(xSeq$times, pred$fit + 2*pred$se.fit, lwd = 1, col = 2) #' lines(xSeq$times, pred$fit - 2*pred$se.fit, lwd = 1, col = 2) #' #' \dontrun{ #' # You can get a better fit by letting the learning rate change with "accel" #' # For instance #' fit <- qgam(list(accel ~ s(times, k=20, bs="ad"), ~ s(times)), #' data = mcycle, qu = 0.8) #' #' pred <- predict(fit, newdata = xSeq, se=TRUE) #' plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80)) #' lines(xSeq$times, pred$fit, lwd = 1) #' lines(xSeq$times, pred$fit + 2*pred$se.fit, lwd = 1, col = 2) #' lines(xSeq$times, pred$fit - 2*pred$se.fit, lwd = 1, col = 2) #' } #' #' ##### #' # Multivariate Gaussian example #' #### #' library(qgam) 
#' set.seed(2) #' dat <- gamSim(1,n=400,dist="normal",scale=2) #' #' fit <- qgam(y~s(x0)+s(x1)+s(x2)+s(x3), data=dat, qu = 0.5) #' plot(fit, scale = FALSE, pages = 1) #' #' ###### #' # Heteroscedastic example #' ###### #' \dontrun{ #' set.seed(651) #' n <- 2000 #' x <- seq(-4, 3, length.out = n) #' X <- cbind(1, x, x^2) #' beta <- c(0, 1, 1) #' sigma = 1.2 + sin(2*x) #' f <- drop(X %*% beta) #' dat <- f + rnorm(n, 0, sigma) #' dataf <- data.frame(cbind(dat, x)) #' names(dataf) <- c("y", "x") #' #' fit <- qgam(list(y~s(x, k = 30, bs = "cr"), ~ s(x, k = 30, bs = "cr")), #' data = dataf, qu = 0.95) #' #' plot(x, dat, col = "grey", ylab = "y") #' tmp <- predict(fit, se = TRUE) #' lines(x, tmp$fit) #' lines(x, tmp$fit + 2 * tmp$se.fit, col = 2) #' lines(x, tmp$fit - 2 * tmp$se.fit, col = 2) #' } #' qgam <- function(form, data, qu, lsig = NULL, err = NULL, multicore = !is.null(cluster), cluster = NULL, ncores = detectCores() - 1, paropts = list(), control = list(), argGam = NULL) { if( length(qu) > 1 ) stop("length(qu) > 1, so you should use mqgam()") # Removing all NAs, unused variables and factor levels from data data <- .cleanData(.dat = data, .form = form, .drop = argGam$drop.unused.levels) # Setting up control parameter (mostly used by tuneLearnFast) ctrl <- list("gausFit" = NULL, "verbose" = FALSE, "b" = 0, "link" = "identity") # Checking if the control list contains unknown names entries in "control" substitute those in "ctrl" ctrl <- .ctrlSetup(innerCtrl = ctrl, outerCtrl = control, verbose = FALSE) # Gaussian fit, used for initializations if( is.formula(form) ) { if( is.null(ctrl[["gausFit"]]) ) { ctrl$gausFit <- do.call("gam", c(list("formula" = form, "data" = quote(data), "family" = gaussian(link=ctrl[["link"]])), argGam)) } varHat <- ctrl$gausFit$sig2 formL <- form } else { if( is.null(ctrl[["gausFit"]]) ) { ctrl$gausFit <- do.call("gam", c(list("formula" = form, "data" = quote(data), "family" = gaulss(link=list(ctrl[["link"]], "logb"), b=ctrl[["b"]])), 
argGam)) } varHat <- 1/ctrl$gausFit$fit[ , 2]^2 formL <- form[[1]] } # Get loss smoothness if( is.null(err) ){ err <- .getErrParam(qu = qu, gFit = ctrl$gausFit) } # Selecting the learning rate sigma learn <- NULL if( is.null(lsig) ) { learn <- tuneLearnFast(form = form, data = data, qu = qu, err = err, multicore = multicore, cluster = cluster, ncores = ncores, paropts = paropts, control = ctrl, argGam = argGam) lsig <- learn$lsig err <- learn$err # Over-writing err parameter! } # Do not use 'start' gausFit in gamlss case because it's not too clear how to deal with model for sigma if( is.null(argGam$start) ) { coefGau <- coef( ctrl$gausFit ) # Need to extract coefficients belonging to model for mean (not variance) if( is.list(ctrl$gausFit$formula) ){ lpi <- attr(predict(ctrl$gausFit, newdata = ctrl$gausFit$model[1:2, , drop = FALSE], type = "lpmatrix"), "lpi") coefGau <- coefGau[ lpi[[1]] ] } # Shift mean fit to quantile of interest argGam$start <- coefGau + c(quantile(residuals(ctrl$gausFit, type="response"), qu), rep(0, length(coefGau) - 1)) } co <- err * sqrt(2*pi*varHat) / (2*log(2)) # Fit model for fixed log-sigma fit <- do.call("gam", c(list("formula" = formL, "family" = quote(elf(qu = qu, co = co, theta = lsig, link = ctrl$link)), "data" = quote(data)), argGam)) fit$calibr <- learn class(fit) <- c("qgam", class(fit)) return( fit ) } qgam/R/I_objFunLearnFast.R0000644000176200001440000001570013457633435015036 0ustar liggesusers####### # Internal loss function to be minimized using Brent method # .objFunLearnFast <- function(lsig, mObj, bObj, wb, initM, initB, pMat, SStuff, qu, ctrl, varHat, err, argGam, cluster, multicore, paropts) { if(ctrl$progress){ cat(".")} co <- err * sqrt(2*pi*varHat) / (2*log(2)) lpi <- attr(pMat, "lpi") mObj$family$putQu( qu ) mObj$family$putCo( co ) mObj$family$putTheta( lsig ) # Full data fit withCallingHandlers({ mFit <- do.call("gam", c(list("G" = quote(mObj), "in.out" = initM[["in.out"]], "start" = initM[["start"]]), argGam))}, 
warning = function(w) { if (length(grep("Fitting terminated with step failure", conditionMessage(w))) || length(grep("Iteration limit reached without full convergence", conditionMessage(w)))) { message( paste("qu = ", qu, ", log(sigma) = ", round(lsig, 6), " : outer Newton did not converge fully.", sep = "") ) invokeRestart("muffleWarning") } }) mMU <- as.matrix(mFit$fit)[ , 1] initM <- list("start" = coef(mFit), "in.out" = list("sp" = mFit$sp, "scale" = 1)) # Standard deviation of fitted quantile using full data sdev <- NULL if(ctrl$loss %in% c("cal", "calFast") && ctrl$vtype == "m"){ Vp <- mFit$Vp # In the gamlss case, we are interested only in the calibrating the location mode if( !is.null(lpi) ){ Vp <- mFit$Vp[lpi[[1]], lpi[[1]]] } sdev <- sqrt(rowSums((pMat %*% Vp) * pMat)) # same as sqrt(diag(pMat%*%Vp%*%t(pMat))) but (WAY) faster } if(ctrl$loss == "calFast"){ # Fast calibration OR ... Vbias <- .biasedCov(fit = mFit, X = SStuff$XFull, EXXT = SStuff$EXXT, EXEXT = SStuff$EXEXT, lpi = lpi) outLoss <- .sandwichLoss(mFit = mFit, X = pMat, XFull = SStuff$XFull, sdev = sdev, repar = mObj$hidRepara, alpha = Vbias$alpha, VSim = Vbias$V) initB <- NULL } else { # ... bootstrapping or cross-validation ## Function to be run in parallel (over boostrapped datasets) # It has two sets of GLOBAL VARS # Set 1: bObj, pMat, wb, argGam, ctrl (Exported by tuneLearnFast) # If multicore=F, .funToApply() will look for these inside the objFun call. That's why objFun need them as arguments. # If multicore=T, .funToApply() will look for them in .GlobalEnv. That's why we export them to cluster nodes in tuneLearnFast. # Set 2: initB, initM, mMU, co, lsig, qu, sdev (Exported by .tuneLearnFast) # As before but, if multicore=T, these are exported directly by objFun because they change from one call of objFun to another. 
.funToApply <- function(ind) { .lpi <- attr(pMat, "lpi") glss <- inherits(bObj$family, "general.family") bObj$lsp0 <- log( initM$in.out$sp ) bObj$family$putQu( qu ) bObj$family$putCo( co ) bObj$family$putTheta( lsig ) z <- init <- vector("list", length(ind)) for( ii in 1:length(ind) ){ # Creating boot weights from boot indexes kk <- ind[ ii ] .wb <- wb[[ kk ]] bObj$w <- .wb # Recycle boot initialization, but at first iteration this is NULL... .init <- if(is.null(initB[[kk]])){ list(initM$start) } else { list(initB[[kk]], initM$start) } if( glss ){ # In gamlss I need to reparametrize initialization and in Ex GAM I need to get null coefficients. .init <- lapply(.init, function(inp) Sl.initial.repara(bObj$Sl, inp, inverse=FALSE, both.sides=FALSE)) .fit <- .gamlssFit(x=bObj$X, y=bObj$y, lsp=as.matrix(bObj$lsp0), Sl=bObj$Sl, weights=bObj$w, offset=bObj$offset, family=bObj$family, control=bObj$control, Mp=bObj$Mp, start=.init, needVb=(ctrl$loss=="cal" && ctrl$vtype=="b")) # In gamlss, we want to calibrate only the location and we need to reparametrize the coefficients .init <- .betas <- Sl.initial.repara(bObj$Sl, .fit$coef, inverse=TRUE, both.sides=FALSE) .betas <- .betas[.lpi[[1]]] } else { bObj$null.coef <- bObj$family$get.null.coef(bObj)$null.coef .fit <- .egamFit(x=bObj$X, y=bObj$y, sp=as.matrix(bObj$lsp0), Eb=bObj$Eb, UrS=bObj$UrS, offset=bObj$offset, U1=bObj$U1, Mp=bObj$Mp, family = bObj$family, weights=bObj$w, control=bObj$control, null.coef=bObj$null.coef, start=.init, needVb=(ctrl$loss == "cal" && ctrl$vtype == "b")) .init <- .betas <- .fit$coef } .mu <- pMat %*% .betas if( ctrl$loss == "cal" ){ # (1) Return standardized deviations from full data fit OR ... if( ctrl$vtype == "b" ){ # (2) Use variance of bootstrap fit OR ... .Vp <- .getVp(.fit, bObj, bObj$lsp0, .lpi) .sdev <- sqrt(rowSums((pMat %*% .Vp) * pMat)) # same as sqrt(diag(pMat%*%Vp%*%t(pMat))) but (WAY) faster } else { # (2) ... 
variance of the main fit .sdev <- sdev } z[[ii]] <- (.mu - mMU) / .sdev } else { # (1) ... out of sample observations minus their fitted values z[[ii]] <- (bObj$y - .mu)[ !.wb ] } init[[ii]] <- .init } return( list("z" = z, "init" = init) ) } if( !is.null(cluster) ){ nc <- length(cluster) environment(.funToApply) <- .GlobalEnv clusterExport(cluster, c("initB", "initM", "mMU", "co", "lsig", "qu", "sdev"), envir = environment()) } else { nc <- 1 } # Divide work (bootstrap datasets) between cluster workers nbo <- ctrl$K sched <- mapply(function(a, b) rep(a, each = b), 1:nc, c(rep(floor(nbo / nc), nc - 1), floor(nbo / nc) + nbo %% nc), SIMPLIFY = FALSE ) sched <- split(1:nbo, do.call("c", sched)) # Loop over bootstrap datasets to get standardized deviations from full data fit withCallingHandlers({ out <- llply(.data = sched, .fun = .funToApply, .parallel = multicore, .inform = ctrl[["verbose"]], .paropts = paropts#, ### ... arguments start here ) }, warning = function(w) { # There is a bug in plyr concerning a useless warning about "..." if (length(grep("... may be used in an incorrect context", conditionMessage(w)))) invokeRestart("muffleWarning") }) # Get standardized deviations and ... .bindFun <- if( ctrl$loss == "cal" ) { "rbind" } else { "c" } z <- do.call(.bindFun, do.call(.bindFun, lapply(out, "[[", "z"))) if( ctrl$loss == "cal"){ # ... calculate KL distance OR ... vrT <- .colVars(z) outLoss <- mean(vrT + colMeans(z)^2 - log(vrT)) } else { # ... pinball loss outLoss <- .checkloss(as.vector(z), 0, qu = qu) } names(outLoss) <- lsig initB <- unlist(lapply(out, "[[", "init"), recursive=FALSE) } return( list("outLoss" = outLoss, "initM" = initM, "initB" = initB) ) } qgam/R/I_clusterSetUp.R0000644000176200001440000000242713033231203014430 0ustar liggesusers# Utility function called from within another function to set-up a "parallel" cluster .clusterSetUp <- function(cluster, ncores, libraries = c(), toExport = c(), exportALL = FALSE, ...) 
{ parentEnv <- parent.frame() # Create a cluster (if necessary) and set clusterCreated to TRUE if( is.null(cluster) ) { cluster <- makeCluster(ncores) clusterCreated <- TRUE } else{ ncores <- length(cluster) clusterCreated <- FALSE } # Put the vector of names of packages I want to load in the list of stuff to export # I assign "libraries" to the parent environment so that it can be exported by .clusterExport if( length(libraries) > 0 ){ toExport <- c(toExport, "libraries") assign("libraries", libraries, parentEnv) } # Load stuff in the cluster if( length(toExport) > 0 || exportALL ) .clusterExport(cluster = cluster, envir = parentEnv, toExport = toExport, ALL = exportALL) # Load libraries on the cluster, delete the copy of "libraries" in the parent environment if( length(libraries) > 0 ) { rm("libraries", parentEnv) clusterEvalQ(cluster, sapply(libraries, function(libName) invisible(require(libName, quietly = TRUE, character.only=TRUE)) ) ) } list("cluster" = cluster, "ncores" = ncores, "clusterCreated" = clusterCreated) }qgam/R/I_kcheck.R0000644000176200001440000000164713033231203013221 0ustar liggesusers# Checking degrees of freedom: poor man's version of mgcv:::k.check: here there is no randomized # test, because I am not sure it applies to quantile regression. # This function only gives 1) maximum EDF 2) effective EDF # Some of the code does useless stuff. .kcheck <- function(b) { ## function to check k in a gam fit... ## does a randomization test looking for evidence of residual ## pattern attributable to covariates of each smooth. 
m <- length(b$smooth) if (m==0) return(NULL) kc <- edf<- rep(0,m) snames <- rep("",m) n <- nrow(b$model) for (k in 1:m) { ## work through smooths ok <- TRUE b$smooth[[k]]$by <- "NA" ## can't deal with by variables snames[k] <- b$smooth[[k]]$label ind <- b$smooth[[k]]$first.para:b$smooth[[k]]$last.para kc[k] <- length(ind) edf[k] <- sum(b$edf[ind]) } k.table <- cbind(kc,edf) dimnames(k.table) <- list(snames, c("k\'","edf")) k.table }qgam/R/elflss.R0000644000176200001440000004251413763713315013032 0ustar liggesusers########################## #' Extended log-F model with variable scale #' #' @description The \code{elflss} family implements the Extended log-F (ELF) density of Fasiolo et al. (2017) and it is supposed #' to work in conjunction with the general GAM fitting methods of Wood et al. (2017), implemented by #' \code{mgcv}. It differs from the \code{elf} family, because here the scale of the density #' (sigma, aka the learning rate) can depend on the covariates, while in #' \code{elf} it is a single scalar. NB this function was used within the \code{qgam} function, but #' since \code{qgam} version 1.3 quantile models with varying learning rate are fitted using different methods #' (a parametric location-scale model, see Fasiolo et al. (2017) for details). #' #' @param link vector of two characters indicating the link function for the quantile location and for the log-scale. #' @param qu parameter in (0, 1) representing the chosen quantile. For instance, to fit the median choose \code{qu=0.5}. #' @param co positive vector of constants used to determine parameter lambda of the ELF density (lambda = co / sigma). #' @param theta a scalar representing the intercept of the model for the log-scale log(sigma). #' @param remInter if TRUE the intercept of the log-scale model is removed. #' @return An object inheriting from mgcv's class \code{general.family}. #' @details This function is meant for internal use only. #' @author Matteo Fasiolo and Simon N. Wood. 
#' @references Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020. #' Fast calibrated additive quantile regression. #' Journal of the American Statistical Association (to appear). #' \url{https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521}. #' #' Wood, Simon N., Pya, N. and Safken, B. (2017). Smoothing parameter and model selection for #' general smooth models. Journal of the American Statistical Association. #' @examples #' \dontrun{ #' set.seed(651) #' n <- 1000 #' x <- seq(-4, 3, length.out = n) #' X <- cbind(1, x, x^2) #' beta <- c(0, 1, 1) #' sigma = 1.2 + sin(2*x) #' f <- drop(X %*% beta) #' dat <- f + rnorm(n, 0, sigma) #' dataf <- data.frame(cbind(dat, x)) #' names(dataf) <- c("y", "x") #' #' # Fit median using elflss directly: NOT RECOMMENDED #' fit <- gam(list(y~s(x, bs = "cr"), ~ s(x, bs = "cr")), #' family = elflss(theta = 0, co = rep(0.2, n), qu = 0.5), #' data = dataf) #' #' plot(x, dat, col = "grey", ylab = "y") #' tmp <- predict(fit, se = TRUE) #' lines(x, tmp$fit[ , 1]) #' lines(x, tmp$fit[ , 1] + 3 * tmp$se.fit[ , 1], col = 2) #' lines(x, tmp$fit[ , 1] - 3 * tmp$se.fit[ , 1], col = 2) #' } #' #' ## (c) Simon N. Wood & Matteo Fasiolo ## 2013-2017. Released under GPL2. ## extended families for mgcv, standard components. ## family - name of family character string ## link - name of link character string ## linkfun - the link function ## linkinv - the inverse link function ## mu.eta - d mu/d eta function (derivative of inverse link wrt eta) ## note: for standard links this information is supplemented using ## function fix.family.link.extended.family with functions ## gkg where k is 2,3 or 4 giving the kth derivative of the ## link over the first derivative of the link to the power k. ## for non standard links these functions must be supplied. ## dev.resids - function computing deviance residuals. ## Dd - function returning derivatives of deviance residuals w.r.t. mu and theta. 
## aic - function computing twice - log likelihood for 2df to be added to. ## initialize - expression to be evaluated in gam.fit4 and initial.spg ## to initialize mu or eta. ## preinitialize - optional expression evaluated in estimate.gam to ## e.g. initialize theta parameters (see e.g. ocat) ## postproc - expression to evaluate in estimate.gam after fitting (see e.g. betar) ## ls - function to evaluate the log saturated likelihood and derivatives w.r.t. ## phi and theta for use in RE/ML optimization. If deviance used is just -2 log ## lik. can just return zeroes. ## validmu, valideta - functions used to test whether mu/eta are valid. ## n.theta - number of theta parameters. ## no.r.sq - optional TRUE/FALSE indicating whether r^2 can be computed for family ## ini.theta - function for initializing theta. ## putTheta, getTheta - functions for storing and retrieving theta values in function ## environment. ## rd - optional function for simulating response data from fitted model. ## residuals - optional function for computing residuals. ## predict - optional function for predicting from model, called by predict.gam. ## family$data - optional list storing any family specific data for use, e.g. in predict ## function. elflss <- function(link = list("identity", "log"), qu, co, theta, remInter = TRUE) { # Some checks if( !remInter ){ if( theta != 0 ){ stop("remInter == FALSE, but theta != 0") } theta <- 0 } if( !is.na(qu) && (findInterval(qu, c(0, 1) )!=1) ) stop("qu should be in (0, 1)") ## Extended family object for modified log-F, to allow direct estimation of theta ## as part of REML optimization. Currently the template for extended family objects. ## length(theta)=2; log theta supplied. ## Written by Matteo Fasiolo. ## first deal with links and their derivatives... 
if (length(link)!=2) stop("elflss requires 2 links specified as character strings") okLinks <- list(c("inverse", "log", "identity", "sqrt"), "log") stats <- list() param.names <- c("mu", "sigma") for (i in 1:2) { if (link[[i]] %in% okLinks[[i]]) stats[[i]] <- make.link(link[[i]]) else stop(link[[i]]," link not available for ", param.names[i]," parameter of elflss") fam <- structure(list(link=link[[i]],canonical="none",linkfun=stats[[i]]$linkfun, mu.eta=stats[[i]]$mu.eta), class="family") fam <- fix.family.link(fam) stats[[i]]$d2link <- fam$d2link stats[[i]]$d3link <- fam$d3link stats[[i]]$d4link <- fam$d4link } env <- new.env(parent = .GlobalEnv) assign(".qu", qu, envir = env) getQu <- function( ) get(".qu") putQu <- function(qu) assign(".qu", qu, envir=environment(sys.function())) assign(".co", co, envir = env) getCo <- function( ) get(".co") putCo <- function(co) assign(".co", co, envir=environment(sys.function())) assign(".theta", theta, envir = env) getTheta <- function( ) get(".theta") putTheta <- function(theta) assign(".theta", theta, envir=environment(sys.function())) # variance <- function(mu) exp(get(".Theta")) ##### XXX ##### Necessary? # validmu <- function(mu) all( is.finite(mu) ) residuals <- function(object, type = c("deviance", "response")) { tau <- get(".qu") theta <- get(".theta") co <- get(".co") mu <- object$fitted[ , 1] sig <- object$fitted[ , 2] * exp(theta) # This will break if the link is not log!! 
lam <- co / sig type <- match.arg(type) # Raw residuals: y - E(y) rsd <- object$y - sig * lam * ( digamma(lam*(1-tau)) - digamma(lam*tau) ) - mu if (type=="response"){ return(rsd) } else { ## compute deviance residuals sgn <- sign(rsd) z <- (object$y - mu) / sig dl <- dlogis(z-mu, 0, lam*sig) pl <- plogis(z-mu, 0, lam*sig) l <- (1-tau) * z - lam * log1pexp( z / lam ) - log( sig * lam * beta(lam*(1-tau), lam*tau) ) ls <- (1-tau)*lam*log1p(-tau) + lam*tau*log(tau) - log(lam * sig * beta(lam*(1-tau), lam*tau)) rsd <- pmax(0, 2*(ls - l)) rsd <- sqrt(rsd)*sgn } rsd } ## residuals ll <- function(y, X, coef, wt, family, offset = NULL, deriv=0, d1b=0, d2b=0, Hp=NULL, rank=0, fh=NULL, D=NULL) { ## function defining the gamlss Gaussian model log lik. ## deriv: 0 - eval ## 1 - grad and Hess ## 2 - diagonal of first deriv of Hess ## 3 - first deriv of Hess ## 4 - everything. tau <- get(".qu") theta <- get(".theta") co <- get(".co") if( !is.null(offset) ){ offset[[3]] <- 0 } # Not sure whether this is needed discrete <- is.list(X) jj <- attr(X,"lpi") ## extract linear predictor index eta <- if (discrete) Xbd(X$Xd,coef,k=X$kd,ks=X$ks,ts=X$ts,dt=X$dt,v=X$v,qc=X$qc,drop=X$drop,lt=X$lpid[[1]]) else X[,jj[[1]],drop=FALSE]%*%coef[jj[[1]]] if( !is.null(offset[[1]]) ){ eta <- eta + offset[[1]] } mu <- family$linfo[[1]]$linkinv(eta) eta1 <- if (discrete) Xbd(X$Xd,coef,k=X$kd,ks=X$ks,ts=X$ts,dt=X$dt,v=X$v,qc=X$qc,drop=X$drop,lt=X$lpid[[2]]) else X[,jj[[2]],drop=FALSE]%*%coef[jj[[2]]] if( !is.null(offset[[2]]) ){ eta1 <- eta1 + offset[[2]] } sig <- family$linfo[[2]]$linkinv(eta1) lam <- co / sig n <- length(y) if( is.null(wt) ) { wt <- numeric(n) + 1 } l1 <- matrix(0, n, 2) z <- (y - mu) / sig zc <- (y - mu) / co lpxp <- log1pexp( zc ) l <- drop(crossprod(wt, (1-tau) * z - co*lpxp/sig - log( co*beta(co*(1-tau)/sig, co*tau/sig) ))) if (deriv>0) { # D logBeta / D lam; D^2 logBeta / D lam^2 dBdL <- (1-tau) * digamma(lam*(1-tau)) + tau * digamma(lam*tau) - digamma(lam) d2BdL2 <- (1-tau)^2 * 
trigamma(lam*(1-tau)) + tau^2 * trigamma(lam*tau) - trigamma(lam) # D lam / D sig; D^2 lam / D sig^2 dLdS <- - co / sig^2 d2LdS2 <- 2 * co / sig^3 # D logBeta / D sig; D^2 logBeta / D sig^2 dBdS <- dLdS * dBdL d2BdS2 <- d2LdS2*dBdL + (dLdS)^2*d2BdL2 gLog <- (tau-1)*(y-mu) + co * lpxp dl <- dlogis(y, mu, co) pl <- plogis(y, mu, co) dLLKdmu <- (pl - 1 + tau) / sig l1[ , 1] <- wt * dLLKdmu l1[ , 2] <- wt * ( gLog/sig^2 - dBdS ) ## the second derivatives D2LLKdmu2 <- - dl / sig l2 <- matrix(0, n, 3) ## order mm,ms,ss l2[ , 1] <- wt * D2LLKdmu2 l2[ , 2] <- wt * ( - dLLKdmu / sig ) l2[ , 3] <- wt * ( - 2*gLog/sig^3 - d2BdS2 ) ## need some link derivatives for derivative transform ig1 <- cbind(family$linfo[[1]]$mu.eta(eta), family$linfo[[2]]$mu.eta(eta1)) g2 <- cbind(family$linfo[[1]]$d2link(mu), family$linfo[[2]]$d2link(sig)) } l3 <- l4 <- g3 <- g4 <- 0 ## defaults if (deriv>1) { # D^3 logBeta / D lam^3 ; D^3 lam / D sig^3; D^3 logBeta / D sig^3 d3BdL3 <- (1-tau)^3 * psigamma(lam*(1-tau), 2) + tau^3 * psigamma(lam*tau, 2) - psigamma(lam, 2) d3LdS3 <- - 6 * co / sig^4 d3BdS3 <- d3LdS3*dBdL + 3*dLdS*d2LdS2*d2BdL2 + dLdS^3*d3BdL3 der <- sigmoid(zc, deriv = TRUE) D3LLKdmu3 <- der$D2 / (sig*co^2) ## the third derivatives ## order mmm,mms,mss,sss l3 <- matrix(0,n,4) l3[ , 1] <- wt * D3LLKdmu3 l3[ , 2] <- wt * ( - D2LLKdmu2 / sig ) l3[ , 3] <- wt * ( 2 * dLLKdmu / sig^2 ) l3[ , 4] <- wt * ( 6*gLog/sig^4 - d3BdS3 ) g3 <- cbind(family$linfo[[1]]$d3link(mu), family$linfo[[2]]$d3link(sig)) } if (deriv>3) { # D^4 logBeta / D lam^4 ; D^4 lam / D sig^4; D^4 logBeta / D sig^4 d4BdL4 <- (1-tau)^4 * psigamma(lam*(1-tau), 3) + tau^4 * psigamma(lam*tau, 3) - psigamma(lam, 3) d4LdS4 <- 24 * co / sig^5 d4BdS4 <- d4LdS4*dBdL + 3*d2LdS2^2*d2BdL2 + 4*dLdS*d3LdS3*d2BdL2 + 6*(dLdS)^2*d2LdS2*d3BdL3 + dLdS^4*d4BdL4 ## the fourth derivatives, order: mmmm,mmms,mmss,msss,ssss l4 <- matrix(0, n, 5) l4[ , 1] <- wt * ( - der$D3 / (sig*co^3) ) l4[ , 2] <- wt * ( - D3LLKdmu3 / sig ) l4[ , 3] <- wt * ( 2 * 
D2LLKdmu2 / sig^2 ) l4[ , 4] <- wt * ( - 6 * dLLKdmu / sig^3 ) l4[ , 5] <- wt * ( -24*gLog/sig^5 - d4BdS4 ) g4 <- cbind(family$linfo[[1]]$d4link(mu), family$linfo[[2]]$d4link(sig)) } if (deriv) { i2 <- family$tri$i2; i3 <- family$tri$i3; i4 <- family$tri$i4 ## transform derivates w.r.t. mu to derivatives w.r.t. eta... de <- gamlss.etamu(l1,l2,l3,l4,ig1,g2,g3,g4,i2,i3,i4,deriv-1) ## get the gradient and Hessian... ret <- gamlss.gH(X,jj,de$l1,de$l2,i2,l3=de$l3,i3=i3,l4=de$l4,i4=i4, d1b=d1b,d2b=d2b,deriv=deriv-1,fh=fh,D=D) } else ret <- list() ret$l <- l; ret } ## end ll initialize <- expression({ # COPIED EXACTLY FROM gaulss() ## idea is to regress g(y) on model matrix for mean, and then ## to regress the corresponding log absolute residuals on ## the model matrix for log(sigma) - may be called in both ## gam.fit5 and initial.spg... note that appropriate E scaling ## for full calculation may be inappropriate for initialization ## which is basically penalizing something different here. ## best we can do here is to use E only as a regularizer. n <- rep(1, nobs) ## should E be used unscaled or not?.. use.unscaled <- if (!is.null(attr(E,"use.unscaled"))) TRUE else FALSE if (is.null(start)) { jj <- attr(x,"lpi") if (!is.null(offset)) offset[[3]] <- 0 yt1 <- if (family$link[[1]]=="identity") y else family$linfo[[1]]$linkfun(abs(y)+max(y)*1e-7) if (!is.null(offset[[1]])) yt1 <- yt1 - offset[[1]] if (is.list(x)) { ## discrete case start <- rep(0,max(unlist(jj))) R <- suppressWarnings(chol(XWXd(x$Xd,w=rep(1,length(y)),k=x$kd,ks=x$ks,ts=x$ts,dt=x$dt,v=x$v,qc=x$qc,nthreads=1,drop=x$drop,lt=x$lpid[[1]])+crossprod(E[,jj[[1]]]),pivot=TRUE)) Xty <- XWyd(x$Xd,rep(1,length(y)),yt1,x$kd,x$ks,x$ts,x$dt,x$v,x$qc,x$drop,lt=x$lpid[[1]]) piv <- attr(R,"pivot") rrank <- attr(R,"rank") startji <- rep(0,ncol(R)) if (rrank, Simon N. Wood. #' @references Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020. #' Fast calibrated additive quantile regression. 
#' Journal of the American Statistical Association (to appear). #' \url{https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521}. #' @examples #' library(qgam) #' set.seed(0) #' dat <- gamSim(1, n=200) #' b<-qgam(y~s(x0)+s(x1)+s(x2)+s(x3), data=dat, qu = 0.5) #' plot(b, pages=1) #' check.qgam(b, pch=19, cex=.3) #' check.qgam <- function(obj, nbin = 10, lev = 0.05, ...) ## takes a fitted gam object and produces some standard diagnostic plots { svpar <- par(no.readonly = TRUE) cqcheck(obj = obj, v = as.matrix(obj$fitted.values)[ , 1], nbin = nbin, xlab = "Fitted values", main = "Proportion of neg. resid.", ...) ## Checking bias induced by having smoothed the loss # Here we are estimating E( Phi(y, mu, lam*sig) - I(y > mu) | x ) using a Gaussian GAM co <- obj$family$getCo() sig <- exp( obj$family$getTheta() ) if( is.list(obj$formula) ) { sig <- sig * obj$fitted.values[ , 2] } lam <- co / sig dat <- obj$model form <- if( is.list(obj$formula) ) { obj$formula[[1]] } else { obj$formula } res <- dat[[ form[[2]] ]] - as.matrix(obj$fitted.values)[ , 1] form[[2]] <- as.symbol( "bias" ) dat$bias <- plogis(res, 0, sig*lam) - as.numeric(res > 0) fitBias <- gam(form, data = dat) hist(fitBias$fitted.values, xlab = expression(F(hat(mu)) - F(mu[0])), main = "Bias due to smoothed loss") cat("Theor. proportion of neg. 
resid.:", obj$family$getQu(), " Actual proportion:", mean(res<0)) cat("\nIntegrated absolute bias |F(mu) - F(mu0)| =", mean(abs(fitBias$fitted.values))) ## now summarize convergence information cat("\nMethod:",obj$method," Optimizer:",obj$optimizer) if (!is.null(obj$outer.info)) { ## summarize convergence information if (obj$optimizer[2]%in%c("newton","bfgs")) { boi <- obj$outer.info cat("\n",boi$conv," after ",boi$iter," iteration",sep="") if (boi$iter==1) cat(".") else cat("s.") cat("\nGradient range [",min(boi$grad),",",max(boi$grad),"]",sep="") cat("\n(score ",obj$gcv.ubre," & scale ",obj$sig2,").",sep="") ev <- eigen(boi$hess)$values if (min(ev)>0) cat("\nHessian positive definite, ") else cat("\n") cat("eigenvalue range [",min(ev),",",max(ev),"].\n",sep="") } else { ## just default print of information .. cat("\n"); print(obj$outer.info) } } else { ## no sp, perf iter or AM case if (length(obj$sp)==0) ## no sp's estimated cat("\nModel required no smoothing parameter selection") else { cat("\nSmoothing parameter selection converged after",obj$mgcv.conv$iter,"iteration") if (obj$mgcv.conv$iter>1) cat("s") if (!obj$mgcv.conv$fully.converged) cat(" by steepest\ndescent step failure.\n") else cat(".\n") cat("The RMS",obj$method,"score gradient at convergence was",obj$mgcv.conv$rms.grad,".\n") if (obj$mgcv.conv$hess.pos.def) cat("The Hessian was positive definite.\n") else cat("The Hessian was not positive definite.\n") #cat("The estimated model rank was ",obj$mgcv.conv$rank, # " (maximum possible: ",obj$mgcv.conv$full.rank,")\n",sep="") } } if (!is.null(obj$rank)) { cat("Model rank = ",obj$rank,"/",length(obj$coefficients),"\n") } cat("\n") ## now check k kchck <- .kcheck(obj) if (!is.null(kchck)) { cat("Basis dimension (k) check: if edf is close to k\' (maximum possible edf) \n") cat("it might be worth increasing k. 
\n\n") printCoefmat(kchck,digits=3); } # Reset graphical parameters par(svpar) return( invisible(NULL) ) }qgam/R/I_allVars1.R0000644000176200001440000000167013370055613013467 0ustar liggesusers## # Version of all.vars that doesn't split up terms like x$y into x and y # Copied from all.vars1 function in mgcv 1.18-24 # .allVars1 <- function(form){ vars <- all.vars(form) vn <- all.names(form) vn <- vn[vn%in%c(vars,"$","[[")] ## actual variable related names if ("[["%in%vn) stop("can't handle [[ in formula") ii <- which(vn%in%"$") ## index of '$' if (length(ii)) { ## assemble variable names vn1 <- if (ii[1]>1) vn[1:(ii[1]-1)] go <- TRUE k <- 1 while (go) { n <- 2; while(k. #' @examples #' n <- 1000 #' x <- seq(0, 4, length.out = n) #' plot(x, pinLoss(x, rep(2, n), qu = 0.9, add = FALSE), type = 'l', ylab = "loss") #' pinLoss <- function(y, mu, qu, add = TRUE){ # Recursive call for multiple quantiles if( length(qu) > 1 ){ n <- length( qu ) l <- sapply(1:n, function(ii){ return( pinLoss(y, mu[ , ii], qu[ii], add = add) ) }) if( is.matrix(l) ){ colnames(l) <- qu } else { names(l) <- qu } return( l ) } tau <- 1 - qu d <- y - mu l <- d * 0 l[d < 0] <- - tau * d[ d < 0 ] l[d > 0] <- - (tau-1) * d[ d > 0 ] if( add ){ l <- sum(l) } return( l ) }qgam/R/mqgam.R0000644000176200001440000001772414146704244012646 0ustar liggesusers########################## #' Fit multiple smooth additive quantile regression models #' #' @description This function fits a smooth additive regression model to several quantiles. #' #' @param form A GAM formula, or a list of formulae. See ?mgcv::gam details. #' @param data A data frame or list containing the model response variable and covariates required by the formula. #' By default the variables are taken from environment(formula): typically the environment from which gam is called. #' @param qu A vectors of quantiles of interest. Each entry should be in (0, 1). #' @param lsig The value of the log learning rate used to create the Gibbs posterior. 
By default \code{lsig=NULL} and this #' parameter is estimated by posterior calibration described in Fasiolo et al. (2017). Obviously, the function is much faster #' if the user provides a value. #' @param err An upper bound on the error of the estimated quantile curve. Should be in (0, 1). If it is a vector, it should be of the #' same length as \code{qu}. Since qgam v1.3 it is selected automatically, using the methods of Fasiolo et al. (2017). #' The old default was \code{err=0.05}. #' @param multicore If TRUE the calibration will happen in parallel. #' @param ncores Number of cores used. Relevant if \code{multicore == TRUE}. #' @param cluster An object of class \code{c("SOCKcluster", "cluster")}. This allows the user to pass her own cluster, #' which will be used if \code{multicore == TRUE}. The user has to remember to stop the cluster. #' @param paropts a list of additional options passed into the foreach function when parallel computation is enabled. #' This is important if (for example) your code relies on external data or packages: #' use the .export and .packages arguments to supply them so that all cluster nodes #' have the correct environment set up for computing. #' @param control A list of control parameters. The only one relevant here is \code{link}, which is the link function #' used (see \code{?elf} and \code{?elflss} for defaults). All other control parameters are used by #' \code{tuneLearnFast}. See \code{?tuneLearnFast} for details. #' @param argGam A list of parameters to be passed to \code{mgcv::gam}. This list can potentially include all the arguments listed #' in \code{?gam}, with the exception of \code{formula}, \code{family} and \code{data}. #' @return A list with entries: \itemize{ #' \item{\code{fit} = a \code{gamObject}, one for each entry of \code{qu}. Notice that the #' slots \code{model} and \code{smooth} of each object have been removed to save memory. #' See \code{?gamObject}.
} #' \item{\code{model} = the \code{model} slot of the \code{gamObject}s in the \code{fit} slot. This is the same for every #' fit, hence only one copy is stored.} #' \item{\code{smooth} = the \code{smooth} slot of the \code{gamObject}s in the \code{fit} slot. This is the same for every #' fit, hence only one copy is stored.} #' \item{\code{calibr} = a list which is the output of an internal call to \code{tuneLearnFast}, which is used for calibrating #' the learning rate. See \code{?tuneLearnFast} for details.} #' } #' @author Matteo Fasiolo . #' @references Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020. #' Fast calibrated additive quantile regression. #' Journal of the American Statistical Association (to appear). #' \url{https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521}. #' @references Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2021. #' qgam: Bayesian Nonparametric Quantile Regression Modeling in R. #' Journal of Statistical Software, 100(9), 1-31, \doi{10.18637/jss.v100.i09}. 
#' @examples #' #' ##### #' # Multivariate Gaussian example #' #### #' library(qgam) #' set.seed(2) #' dat <- gamSim(1, n=300, dist="normal", scale=2) #' #' fit <- mqgam(y~s(x0)+s(x1)+s(x2)+s(x3), data=dat, qu = c(0.2, 0.8)) #' #' invisible( qdo(fit, 0.2, plot, pages = 1) ) #' #' ##### #' # Univariate "car" example #' #### #' library(qgam); library(MASS) #' #' # Fit for quantile 0.8 using the best sigma #' quSeq <- c(0.2, 0.4, 0.6, 0.8) #' set.seed(6436) #' fit <- mqgam(accel~s(times, k=20, bs="ad"), data = mcycle, qu = quSeq) #' #' # Plot the fit #' xSeq <- data.frame(cbind("accel" = rep(0, 1e3), "times" = seq(2, 58, length.out = 1e3))) #' plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80)) #' for(iq in quSeq){ #' pred <- qdo(fit, iq, predict, newdata = xSeq) #' lines(xSeq$times, pred, col = 2) #' } #' mqgam <- function(form, data, qu, lsig = NULL, err = NULL, multicore = !is.null(cluster), cluster = NULL, ncores = detectCores() - 1, paropts = list(), control = list(), argGam = NULL) { nq <- length(qu) # Removing all NAs, unused variables and factor levels from data data <- .cleanData(.dat = data, .form = form, .drop = argGam$drop.unused.levels) if( !is.null(err) && length(err) != nq ){ if(length(err) == 1) { err <- rep(err, nq) } else { stop("\"err\" should either be a scalar or a vector of the same length as \"qu\".") } } # Setting up control parameter (mostly used by tuneLearnFast) ctrl <- list("gausFit" = NULL, "verbose" = FALSE, "b" = 0, "link" = "identity") # Checking if the control list contains unknown names entries in "control" substitute those in "ctrl" ctrl <- .ctrlSetup(innerCtrl = ctrl, outerCtrl = control, verbose = FALSE) # Initial Gaussian fit if( is.null(ctrl[["gausFit"]]) ) { if( is.formula(form) ){ gausFit <- do.call("gam", c(list("formula" = form, "data" = quote(data), "family" = gaussian(link=ctrl[["link"]])), argGam)) } else { gausFit <- do.call("gam", c(list("formula" = form, "data" = quote(data), 
"family" = gaulss(link=list(ctrl[["link"]], "logb"), b=ctrl[["b"]])), argGam)) } ctrl[["gausFit"]] <- gausFit } # Output list out <- list() if( is.null(lsig) ) { # Selecting the learning rate sigma OR .... learn <- tuneLearnFast(form = form, data = data, err = err, qu = qu, multicore = multicore, cluster = cluster, ncores = ncores, paropts = paropts, control = ctrl, argGam = argGam) lsig <- learn$lsig err <- learn$err # Over-writing err parameters! out[["calibr"]] <- learn } else { # ... use the one provided by the user if( length(lsig) == 1 ) { lsig <- rep(lsig, nq) } else { if( length(lsig) != nq ) stop("lsig should either be scalar or a vector of length(qu) ") } } # Fitting a quantile model for each qu out[["fit"]] <- lapply(1:nq, function(ii){ .out <- qgam(form, data, qu[ii], lsig = lsig[ii], err = err[ii], multicore = FALSE, control = ctrl, argGam = argGam) # Removing data and smooth matrix to reduce memory requirements. There quantities # are kept only inside the first fit ( qfit[[1]] ) if(ii > 1){ .out$model <- NULL .out$smooth <- NULL .out$call$data <- NULL } return( .out ) }) # Storing output list names(out[["fit"]]) <- qu out[["model"]] <- out[["fit"]][[1]][["model"]] out[["smooth"]] <- out[["fit"]][[1]][["smooth"]] out[["data"]] <- out[["fit"]][[1]][["call"]][["data"]] out[["fit"]][[1]][["model"]] <- NULL out[["fit"]][[1]][["smooth"]] <- NULL out[["fit"]][[1]][["call"]][["data"]] <- NULL class(out) <- "mqgam" # out[["qu"]] <- qu # out[["co"]] <- co # out[["lsig"]] <- lsig return( out ) } qgam/R/I_llkGrads.R0000644000176200001440000000651513561024463013551 0ustar liggesusers########## # Internal function which calculates the gradient of each element of the (unpenalized) log-likelihood # wrt the regression coefficients. 
# INPUT # - gObj: a gamObject fitted using the elf or elflss family # - X: the full "lpmatrix" corresponding to both linear predictors # - jj: a list of indexes indicating the position of the coefficients corresponding to each linear predictor # - type: set it to "DllkDb" if you want the derivative of the log-lik wrt the regression coeff, or # to "DllkDeta" if you want those wrt the linear predictor. # # OUTPUT # - an n by p matrix where the i-th row is the gradient of the i-th log-likelihood component # .llkGrads <- function(gObj, X, jj, type = "DllkDb") { type <- match.arg(type, c("DllkDb", "DllkDeta")) y <- gObj$y beta <- coef( gObj ) fam <- gObj$family wt <- gObj$prior.weights offset <- gObj$offset n <- length( y ) p <- length( beta ) tau <- fam$getQu( ) theta <- fam$getTheta( ) co <- fam$getCo( ) if( !is.null(jj) ) { # GAMLSS if( !is.null(offset) ){ offset[[3]] <- 0 } eta <- X[ , jj[[1]], drop=FALSE] %*% beta[jj[[1]]] if( !is.null(offset[[1]]) ){ eta <- eta + offset[[1]] } mu <- fam$linfo[[1]]$linkinv( eta ) eta1 <- X[ , jj[[2]], drop=FALSE] %*% beta[jj[[2]]] + theta if( !is.null(offset[[2]]) ){ eta1 <- eta1 + offset[[2]] } sig <- fam$linfo[[2]]$linkinv( eta1 ) lam <- co / sig # Chain rule: DlogBeta/Dsig = DlogBeta/Dlam * Dlam/Dsig; dBdL <- (1-tau) * digamma(lam*(1-tau)) + tau * digamma(lam*tau) - digamma(lam) dLdS <- - co / sig^2 dBdS <- dLdS * dBdL zc <- (y - mu) / co lpxp <- log1pexp( zc ) gLog <- (tau-1) * (y-mu) + co * lpxp pl <- plogis(y, mu, co) # [1] Derivatives of llk wrt parameters l1 <- matrix(0, n, 2) l1[ , 1] <- wt * (pl - 1 + tau) / sig l1[ , 2] <- wt * ( gLog/sig^2 - dBdS ) # Derivative of link function ig1 <- cbind(fam$linfo[[1]]$mu.eta(eta), fam$linfo[[2]]$mu.eta(eta1)) # [2] Transform llk derivatives wrt mu to derivatives wrt linear predictor (eta) l1 <- l1 * ig1 if( type == "DllkDeta" ){ return( list("l1" = as.matrix(l1), "sig" = sig) ) } # [3] Transform into derivatives wrt regression coefficients # The i-th row of 'grads' is the
score of the i-th likelihood component grads <- matrix(0, n, p) for (i in 1:length(jj)) { grads[ , jj[[i]]] <- grads[ , jj[[i]]] + l1[ , i] * X[ , jj[[i]], drop = FALSE] } } else { # Extended GAM sig <- exp( theta ) lam <- co / sig if( is.null(offset) ){ offset <- numeric( nrow(X) ) } eta <- X %*% beta + offset mu <- fam$linkinv( eta ) z <- (y - mu) / sig pl <- plogis(y, mu, lam * sig) # [1] Derivatives of llk wrt parameters l1 <- numeric( n ) l1 <- wt * (pl - 1 + tau) / sig # Derivative of link function ig1 <- fam$mu.eta(eta) # [2] Transform llk derivatives wrt mu to derivatives wrt linear predictor (eta) l1 <- l1 * ig1 if( type == "DllkDeta" ){ return(list("l1" = as.matrix(l1), "sig" = sig)) } # [3] Transform into derivatives wrt regression coefficients # The i-th column of 'grads' is the score of the i-th likelihood component grads <- drop(l1) * X } return(grads) } qgam/R/elf.R0000644000176200001440000002665013763713405012313 0ustar liggesusers########################## #' Extended log-F model with fixed scale #' #' @description The \code{elf} family implements the Extended log-F density of Fasiolo et al. (2017) and it is supposed #' to work in conjuction with the extended GAM methods of Wood et al. (2017), implemented by #' \code{mgcv}. It differs from the \code{elflss} family, because here the scale of the density (sigma, aka the learning rate) is a single scalar, #' while in \code{elflss} it can depend on the covariates. At the moment the family is mainly intended for internal use, #' use the \code{qgam} function to fit quantile GAMs based on ELF. #' #' @param theta a scalar representing the log-scale log(sigma). #' @param link the link function between the linear predictor and the quantile location. #' @param qu parameter in (0, 1) representing the chosen quantile. For instance, to fit the median choose \code{qu=0.5}. #' @param co positive constant used to determine parameter lambda of the ELF density (lambda = co / sigma). #' Can be vector valued. 
#' @return An object inheriting from mgcv's class \code{extended.family}. #' @details This function is meant for internal use only. #' @author Matteo Fasiolo and Simon N. Wood. #' @references Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020. #' Fast calibrated additive quantile regression. #' Journal of the American Statistical Association (to appear). #' \url{https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521}. #' #' Wood, Simon N., Pya, N. and Safken, B. (2017). Smoothing parameter and model selection for #' general smooth models. Journal of the American Statistical Association. #' @examples #' library(qgam) #' set.seed(2) #' dat <- gamSim(1,n=400,dist="normal",scale=2) #' #' # Fit median using elf directly: FAST BUT NOT RECOMMENDED #' fit <- gam(y~s(x0)+s(x1)+s(x2)+s(x3), #' family = elf(co = 0.1, qu = 0.5), data = dat) #' plot(fit, scale = FALSE, pages = 1) #' #' # Using qgam: RECOMMENDED #' fit <- qgam(y~s(x0)+s(x1)+s(x2)+s(x3), data=dat, qu = 0.8) #' plot(fit, scale = FALSE, pages = 1) #' #' ## (c) Simon N. Wood & Matteo Fasiolo ## 2013-2017. Released under GPL2. ## extended families for mgcv, standard components. ## family - name of family character string ## link - name of link character string ## linkfun - the link function ## linkinv - the inverse link function ## mu.eta - d mu/d eta function (derivative of inverse link wrt eta) ## note: for standard links this information is supplemented using ## function fix.family.link.extended.family with functions ## gkg where k is 2,3 or 4 giving the kth derivative of the ## link over the first derivative of the link to the power k. ## for non standard links these functions must be supplied. ## dev.resids - function computing deviance residuals. ## Dd - function returning derivatives of deviance residuals w.r.t. mu and theta. ## aic - function computing twice - log likelihood for 2df to be added to.
## initialize - expression to be evaluated in gam.fit4 and initial.spg ## to initialize mu or eta. ## preinitialize - optional expression evaluated in estimate.gam to ## e.g. initialize theta parameters (see e.g. ocat) ## postproc - expression to evaluate in estimate.gam after fitting (see e.g. betar) ## ls - function to evaluate log saturated likelihood and derivatives w.r.t. ## phi and theta for use in RE/ML optimization. If deviance used is just -2 log ## lik. can just return zeroes. ## validmu, valideta - functions used to test whether mu/eta are valid. ## n.theta - number of theta parameters. ## no.r.sq - optional TRUE/FALSE indicating whether r^2 can be computed for family ## ini.theta - function for initializing theta. ## putTheta, getTheta - functions for storing and retrieving theta values in function ## environment. ## rd - optional function for simulating response data from fitted model. ## residuals - optional function for computing residuals. ## predict - optional function for predicting from model, called by predict.gam. ## family$data - optional list storing any family specific data for use, e.g. in predict ## function.
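## Illustrative sketch: the ELF negative log-likelihood defined by this family
## is a smoothed pinball loss. With z = (y - mu)/sig, the loss
## (tau - 1)*z + lam*log(1 + exp(z/lam)) tends to the pinball loss
## pmax(tau*z, (tau - 1)*z) as lam -> 0, which can be visualized as follows
## (elfLoss is a hypothetical helper for illustration, not part of qgam):
## elfLoss <- function(z, tau, lam) (tau - 1) * z + lam * log1p(exp(z / lam))
## z <- seq(-2, 2, length.out = 101)
## plot(z, pmax(0.9 * z, (0.9 - 1) * z), type = "l")   # pinball loss, tau = 0.9
## lines(z, elfLoss(z, tau = 0.9, lam = 0.1), col = 2) # smooth ELF counterpart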
elf <- function (theta = NULL, link = "identity", qu, co) { # Some checks if( !is.na(qu) && (findInterval(qu, c(0, 1) )!=1) ) stop("qu should be in (0, 1)") linktemp <- substitute(link) if (!is.character(linktemp)) linktemp <- deparse(linktemp) if (linktemp %in% c("log", "identity", "sqrt")) stats <- make.link(linktemp) else if (is.character(link)) { stats <- make.link(link) linktemp <- link } else { if (inherits(link, "link-glm")) { stats <- link if (!is.null(stats$name)) linktemp <- stats$name } else stop(linktemp, " link not available for elf family; available links are \"identity\", \"log\" and \"sqrt\"") } ## Theta <- NULL; n.theta <- 1 if ( !is.null(theta) ) { iniTheta <- theta ## fixed log theta supplied n.theta <- 0 ## signal that there are no theta parameters to estimate } else iniTheta <- 0 ## inital log theta value env <- new.env(parent = environment(elf)) #.GlobalEnv) ##########!!!!!!!!!!!!!!!!~######################## assign(".Theta", iniTheta, envir = env) getTheta <- function(trans=FALSE) if (trans) exp(get(".Theta")) else get(".Theta") putTheta <- function(theta) assign(".Theta", theta, envir=environment(sys.function())) assign(".qu", qu, envir = env) getQu <- function( ) get(".qu") putQu <- function(qu) assign(".qu", qu, envir=environment(sys.function())) assign(".co", co, envir = env) getCo <- function( ) get(".co") putCo <- function(co) assign(".co", co, envir=environment(sys.function())) # variance <- function(mu) exp(get(".Theta")) ##### XXX ##### Necessary? 
validmu <- function(mu) all( is.finite(mu) ) dev.resids <- function(y, mu, wt, theta=NULL) { ##### XXX ##### if( is.null(theta) ) theta <- get(".Theta") tau <- get(".qu") co <- get(".co") sig <- exp(theta) lam <- mean(co / sig) sig <- co / lam z <- (y - drop(mu)) / sig term <- (1-tau)*lam*log1p(-tau) + lam*tau*log(tau) - (1-tau)*z + lam*log1pexp( z / lam ) 2 * wt * term } Dd <- function(y, mu, theta, wt, level=0) { tau <- get(".qu") co <- get(".co") mu <- drop(mu) ## derivatives of the deviance... sig <- exp(theta) lam <- mean(co / sig) sig <- co / lam z <- (y - mu) / sig dl <- dlogis(y-mu, 0, lam*sig) pl <- plogis(y-mu, 0, lam*sig) r <- list() ## get the quantities needed for IRLS. ## Dmu2eta2 is deriv of D w.r.t mu twice and eta twice, ## Dmu is deriv w.r.t. mu once, etc... r$Dmu <- - 2 * wt * ( (pl - 1 + tau) / sig ) r$Dmu2 <- 2 * wt * ( dl / sig ) # r$EDmu2 <- 2 * wt * ((1-tau)*tau / (lam + 1)) / sig^2 ## exact (or estimated) expected weight #### XXX #### r$EDmu2 <- r$Dmu2 # It make more sense using the observed information everywhere if (level>0) { ## quantities needed for first derivatives zl <- z / lam der <- sigmoid(zl, deriv = TRUE) r$Dth <- - 2 * wt * sig * ( z * (pl - 1 + tau) / sig ) r$Dmuth <- 2 * wt * ( ((y-mu)*dl + pl - 1 + tau) / sig ) r$Dmu3 <- - (2 * wt * ( der$D2 / (lam * sig) )) / (lam*sig^2) D2mDt <- ((zl*der$D2 + 2*der$D1) / (lam*sig)) / (sig^2) r$Dmu2th <- - 2 * wt * sig * D2mDt } if (level>1) { ## whole damn lot r$Dmu4 <- (2 * wt * ( der$D3 / (lam * sig^2) )) / (lam^2 * sig^2) r$Dth2 <- - 2 * wt * ( z * (pl - 1 + tau) + (2*z*(1 - tau - pl - 0.5 * (y-mu)*dl)) ) r$Dmuth2 <- 2 * wt * ( ((y-mu)*dl + pl - 1 + tau) / sig - (2*(der$D0 - 1 + tau) + 4*zl*der$D1 + zl^2*der$D2) / sig ) r$Dmu2th2 <- - 2 * wt * (sig * D2mDt - (zl^2*der$D3 + 6*zl*der$D2 + 6*der$D1) / (lam*sig^2)) r$Dmu3th <- 2 * wt * ( (zl*der$D3 + 3*der$D2) / (lam * sig) ) / (lam * sig^2) } r } aic <- function(y, mu, theta=NULL, wt, dev) { if (is.null(theta)) theta <- get(".Theta") sig 
<- exp(theta) tau <- get(".qu") co <- get(".co") lam <- mean(co / sig) sig <- co / lam z <- (y - drop(mu)) / sig term <- - (1-tau) * z + lam * log1pexp( z / lam ) + log( sig * lam * beta(lam*(1-tau), tau*lam) ) 2 * sum(term * wt) } ls <- function(y, w, theta, scale) { tau <- get(".qu") co <- get(".co") sig <- exp(theta) lam <- mean(co / sig) sig <- co / lam ## the log saturated likelihood function. ls <- sum( w * ((1-tau)*lam*log1p(-tau) + lam*tau*log(tau) - log(lam * sig * beta(lam*(1-tau), lam*tau))) ) #lsth <- - sig * sum(w / sig) lsth <- - sum(w) lsth2 <- 0 list(ls=ls, ## saturated log likelihood lsth1=lsth, ## first deriv vector w.r.t theta - last element relates to scale, if free lsth2=lsth2) ## Hessian w.r.t. theta, last row/col relates to scale, if free } initialize <- expression({ mustart <- quantile(y, family$getQu()) + y * 0 # this ---> y <--- is very bad idea }) #postproc <- expression({ ####### XXX ??? ####### # object$family$family <- # paste("elf(",round(object$family$getTheta(TRUE),3),")",sep="") #}) # rd <- function(mu,wt,scale) { ####### XXX TODO ####### # Theta <- exp(get(".Theta")) # rnbinom(mu,size=Theta,mu=mu) # } # # qf <- function(p,mu,wt,scale) { ####### XXX TODO ####### # Theta <- exp(get(".Theta")) # qnbinom(p,size=Theta,mu=mu) # } get.null.coef <- function(G,start=NULL,etastart=NULL,mustart=NULL,...) { ## Get an estimate of the coefs corresponding to maximum reasonable deviance... y <- G$y weights <- G$w nobs <- G$n ## ignore codetools warning!! 
##start <- etastart <- mustart <- NULL family <- G$family eval(family$initialize) ## have to do this to ensure y numeric y <- as.numeric(y) mum <- quantile(y, get(".qu")) + 0*y etam <- family$linkfun(mum) null.coef <- qr.coef(qr(G$X), etam) null.coef[is.na(null.coef)] <- 0; ## get a suitable function scale for optimization routines null.scale <- sum(family$dev.resids(y, mum, weights))/nrow(G$X) list(null.coef=null.coef,null.scale=null.scale) } # environment(rd)<- environment(qf) <- environment(variance) <- environment(dev.resids) <- environment(ls) <- environment(aic) <- environment(Dd) <- environment(getTheta) <- environment(putTheta) <- environment(putCo) <- environment(getCo) <- environment(putQu) <- environment(getQu) <- environment(get.null.coef) <- env structure(list(family = "elf", link = linktemp, linkfun = stats$linkfun, linkinv = stats$linkinv, dev.resids = dev.resids,Dd=Dd, #variance=variance, aic = aic, mu.eta = stats$mu.eta, initialize = initialize, #postproc=postproc, ls=ls, validmu = validmu, valideta = stats$valideta, n.theta=n.theta, ini.theta = iniTheta, putTheta=putTheta,getTheta=getTheta, putQu=putQu, getQu=getQu, putCo=putCo,getCo=getCo, get.null.coef=get.null.coef, use.wz=TRUE #, rd=rd,qf=qf ), class = c("extended.family","family")) } ## elf qgam/R/generic_functions.R0000644000176200001440000000145613462141521015234 0ustar liggesusers########################## #' Generic checking function #' #' @description Generic function for checking R objects which produces, for instance, convergence tests or diagnostic plots. #' For \code{qgam} objects \code{check.qgam()} will be used. #' @param obj the object to be checked. #' @param ... extra arguments, mainly used by graphic functions. #' @return Reports the results of convergence tests and/or produces diagnostic plots. #' @author Matteo Fasiolo . 
#' @examples #' ####### #' # Using check.qgam #' ####### #' library(qgam) #' set.seed(0) #' dat <- gamSim(1, n=200) #' b<-qgam(y~s(x0)+s(x1)+s(x2)+s(x3), data=dat, qu = 0.5) #' plot(b, pages=1) #' check(b, pch=19, cex=.3) #' @docType methods #' check <- function (obj, ...) { UseMethod("check", obj) }qgam/R/tuneLearnFast.R0000644000176200001440000005236014015713517014310 0ustar liggesusers########################## #' Fast learning rate calibration for the Gibbs posterior #' #' @description The learning rate (sigma) of the Gibbs posterior is tuned either by calibrating the credible intervals for the fitted #' curve, or by minimizing the pinball loss on out-of-sample data. This is done by bootstrapping or by k-fold cross-validation. #' Here the loss function is minimized, for each quantile, using a Brent search. #' #' @param form A GAM formula, or a list of formulae. See ?mgcv::gam for details. #' @param data A data frame or list containing the model response variable and covariates required by the formula. #' By default the variables are taken from environment(formula): typically the environment from which gam is called. #' @param qu The quantile of interest. Should be in (0, 1). #' @param err An upper bound on the error of the estimated quantile curve. Should be in (0, 1). #' Since qgam v1.3 it is selected automatically, using the methods of Fasiolo et al. (2017). #' The old default was \code{err=0.05}. #' @param multicore If TRUE the calibration will happen in parallel. #' @param ncores Number of cores used. Relevant if \code{multicore == TRUE}. #' @param cluster An object of class \code{c("SOCKcluster", "cluster")}. This allows the user to pass her own cluster, #' which will be used if \code{multicore == TRUE}. The user has to remember to stop the cluster. #' @param paropts a list of additional options passed into the foreach function when parallel computation is enabled.
#' This is important if (for example) your code relies on external data or packages: #' use the .export and .packages arguments to supply them so that all cluster nodes #' have the correct environment set up for computing. #' @param control A list of control parameters for \code{tuneLearn} with entries: \itemize{ #' \item{\code{loss} = loss function used to tune log(sigma). If \code{loss=="cal"} is chosen, then log(sigma) is chosen so that #' credible intervals for the fitted curve are calibrated. See Fasiolo et al. (2017) for details. #' If \code{loss=="pin"} then log(sigma) approximately minimizes the pinball loss on the out-of-sample #' data.} #' \item{\code{sam} = sampling scheme used: \code{sam=="boot"} corresponds to bootstrapping and \code{sam=="kfold"} to k-fold #' cross-validation. The second option can be used only if \code{ctrl$loss=="pin"}.} #' \item{\code{vtype} = type of variance estimator used to standardize the deviation from the main fit in the calibration. #' If set to \code{"m"} the variance estimate obtained by the full data fit is used, if set to \code{"b"} #' then the variance estimates produced by the bootstrap fits are used. By default \code{vtype="m"}.} #' \item{\code{epsB} = positive tolerance used to assess convergence when fitting the regression coefficients on bootstrap data. #' In particular, if \code{|dev-dev_old|/(|dev|+0.1) 0 and values > 0.1 #' don't quite make sense. By default \code{aTol=0.05}.} #' \item{\code{redWd} = parameter which determines when the bracket will be reduced. #' If \code{redWd==10} then the bracket is halved if the nearest solution #' falls within the central 10\% of the bracket's width. By default \code{redWd = 10}.} #' \item{\code{b} = offset parameter used by the mgcv::gauslss, which we estimate to initialize the quantile #' fit (when a variance model is used). By default \code{b=0}.} #' \item{\code{link} = Link function to be used.
See \code{?elf} and \code{?elflss} for defaults.} #' \item{\code{verbose} = if TRUE some more details are given. By default \code{verbose=FALSE}.} #' \item{\code{progress} = if TRUE progress in learning rate estimation is reported via printed text. #' \code{TRUE} by default.} #' } #' @param argGam A list of parameters to be passed to \code{mgcv::gam}. This list can potentially include all the arguments listed #' in \code{?gam}, with the exception of \code{formula}, \code{family} and \code{data}. #' @return A list with entries: \itemize{ #' \item{\code{lsig} = a vector containing the values of log(sigma) that minimize the loss function, #' for each quantile.} #' \item{\code{err} = the error bound used for each quantile. Generally each entry is identical to the #' argument \code{err}, but in some cases the function increases it to enhance stability.} #' \item{\code{ranges} = the search ranges used by the Brent algorithm to find log-sigma, for each quantile. } #' \item{\code{store} = a list, where the i-th entry is a matrix containing all the locations (1st row) at which #' the loss function has been evaluated and its value (2nd row), for the i-th quantile.} #' } #' @author Matteo Fasiolo . #' @references Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020. #' Fast calibrated additive quantile regression. #' Journal of the American Statistical Association (to appear). #' \url{https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521}.
#' @examples #' library(qgam); library(MASS) #' #' ### #' # Single quantile fit #' ### #' # Calibrate learning rate on a grid #' set.seed(5235) #' tun <- tuneLearnFast(form = accel~s(times,k=20,bs="ad"), #' data = mcycle, #' qu = 0.2) #' #' # Fit for quantile 0.2 using the best sigma #' fit <- qgam(accel~s(times, k=20, bs="ad"), data = mcycle, qu = 0.2, lsig = tun$lsig) #' #' pred <- predict(fit, se=TRUE) #' plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", #' ylim = c(-150, 80)) #' lines(mcycle$times, pred$fit, lwd = 1) #' lines(mcycle$times, pred$fit + 2*pred$se.fit, lwd = 1, col = 2) #' lines(mcycle$times, pred$fit - 2*pred$se.fit, lwd = 1, col = 2) #' #' ### #' # Multiple quantile fits #' ### #' # Calibrate learning rate on a grid #' quSeq <- c(0.25, 0.5, 0.75) #' set.seed(5235) #' tun <- tuneLearnFast(form = accel~s(times, k=20, bs="ad"), #' data = mcycle, #' qu = quSeq) #' #' # Fit using estimated sigmas #' fit <- mqgam(accel~s(times, k=20, bs="ad"), data = mcycle, qu = quSeq, lsig = tun$lsig) #' #' # Plot fitted quantiles #' plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", #' ylim = c(-150, 80)) #' for(iq in quSeq){ #' pred <- qdo(fit, iq, predict) #' lines(mcycle$times, pred, col = 2) #' } #' #' \dontrun{ #' # You can get a better fit by letting the learning rate change with "accel" #' # For instance #' tun <- tuneLearnFast(form = list(accel ~ s(times, k=20, bs="ad"), ~ s(times)), #' data = mcycle, #' qu = quSeq) #' #' fit <- mqgam(list(accel ~ s(times, k=20, bs="ad"), ~ s(times)), #' data = mcycle, qu = quSeq, lsig = tun$lsig) #' #' # Plot fitted quantiles #' plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", #' ylim = c(-150, 80)) #' for(iq in quSeq){ #' pred <- qdo(fit, iq, predict) #' lines(mcycle$times, pred, col = 2) #' } #' } #' tuneLearnFast <- function(form, data, qu, err = NULL, multicore = !is.null(cluster), cluster = NULL, ncores = detectCores() - 1, paropts = list(), control = 
list(), argGam = NULL) { # Removing all NAs, unused variables and factor levels from data data <- .cleanData(.dat = data, .form = form, .drop = argGam$drop.unused.levels) n <- nrow(data) nq <- length(qu) if( !is.null(err) && length(err) != nq ){ if(length(err) == 1) { err <- rep(err, nq) } else { stop("\"err\" should either be a scalar or a vector of the same length as \"qu\".") } } # Setting up control parameter ctrl <- list( "loss" = "calFast", "sam" = "boot", "vtype" = "m", "epsB" = 1e-5, "init" = NULL, "brac" = log( c(1/2, 2) ), "K" = 50, "redWd" = 10, "tol" = .Machine$double.eps^0.25, "aTol" = 0.05, "b" = 0, "gausFit" = NULL, "link" = "identity", "verbose" = FALSE, "progress" = TRUE ) # Checking if the control list contains unknown names # Entries in "control" substitute those in "ctrl" ctrl <- .ctrlSetup(innerCtrl = ctrl, outerCtrl = control) if( ctrl$progress == "none" ) { ctrl$progress <- FALSE } if( !(ctrl$vtype%in%c("m", "b")) ) stop("control$vtype should be either \"m\" or \"b\" ") if( !(ctrl$loss%in%c("calFast", "cal", "pin")) ) stop("control$loss should be either \"cal\", \"pin\" or \"calFast\" ") if( !(ctrl$sam%in%c("boot", "kfold")) ) stop("control$sam should be either \"boot\" or \"kfold\" ") if( (ctrl$loss=="cal") && (ctrl$sam=="kfold") ) stop("You can't use control$sam == \"kfold\" when ctrl$loss==\"cal\" ") tol <- ctrl[["tol"]] brac <- ctrl[["brac"]] if( length(argGam$sp) && ctrl$loss != c("calFast") ){ stop("Cannot fix smoothing parameters unless control$loss == \"calFast\".") } # Sanity check if( tol > 0.1 * abs(diff(brac)) ) stop("tol > bracket_widths/10, choose smaller tolerance or larger bracket") if( ctrl$sam == "boot" ){ # Create weights for K boostrap dataset OR... wb <- lapply(1:ctrl[["K"]], function(nouse) tabulate(sample(1:n, n, replace = TRUE), n)) } else { # ... 
OR for K training sets for CV tmp <- sample(rep(1:ctrl[["K"]], length.out = n), n, replace = FALSE) wb <- lapply(1:ctrl[["K"]], function(ii) tabulate(which(tmp != ii), n)) } # Gaussian fit, used for initialization if( is.formula(form) ) { gausFit <- if( is.null(ctrl[["gausFit"]]) ) { do.call("gam", c(list("formula" = form, "data" = quote(data), "family" = gaussian(link=ctrl[["link"]])), argGam)) } else { ctrl$gausFit } varHat <- gausFit$sig2 formL <- form } else { gausFit <- if( is.null(ctrl[["gausFit"]]) ) { do.call("gam", c(list("formula" = form, "data" = quote(data), "family" = gaulss(link=list(ctrl[["link"]], "logb"), b=ctrl[["b"]])), argGam)) } else { ctrl$gausFit } varHat <- 1 / gausFit$fit[ , 2]^2 formL <- form[[1]] } # Order quantiles so that those close to the median are dealt with first oQu <- order( abs(qu - 0.5) ) # (Optional) Initializing the search range for sigma if( is.null(ctrl[["init"]]) ){ # We assume lam~0 and we match the variance of a symmetric (median) Laplace density with that of the Gaussian fit. # This is an over-estimate for extreme quantiles, but experience suggests that it's better erring on the upper side. 
tmp <- 0.5 #qu[ oQu[1] ] if( !is.list(form) ){ isig <- log(sqrt( gausFit$sig2 * (tmp^2*(1-tmp)^2) / (2*tmp^2-2*tmp+1) )) } else { isig <- log(sqrt( (ctrl[["b"]]+exp(coef(gausFit)["(Intercept).1"]))^2 * (tmp^2*(1-tmp)^2) / (2*tmp^2-2*tmp+1) )) } } else { isig <- ctrl[["init"]] } # Create gam object for full data fits mObj <- do.call("gam", c(list("formula" = formL, "family" = quote(elf(qu = NA, co = NA, theta = NA, link = ctrl$link)), "data" = quote(data), "fit" = FALSE), argGam)) # Remove "sp" as it is already been fixed argGam <- argGam[ names(argGam) != "sp" ] # Create gam object for bootstrap fits bObj <- do.call("gam", c(list("formula" = formL, "family" = quote(elf(qu = NA, co = NA, theta = NA, link = ctrl$link)), "data" = quote(data), "sp" = if(length(gausFit$sp)){gausFit$sp}else{NULL}, fit = FALSE), argGam)) # Preparing bootstrap object for gam.fit3 bObj <- .prepBootObj(obj = bObj, eps = ctrl$epsB, control = argGam$control) # Preparing reparametrization list and hide it within mObj. 
This will be needed by the sandwich calibration if( ctrl$loss == "calFast" ){ mObj$hidRepara <- if(is.formula(formL)) { .prepBootObj(obj = mObj, eps = NULL, control = argGam$control)[ c("UrS", "Mp", "U1") ] } else { bObj$Sl } } # Create prediction design matrices for each bootstrap sample or CV fold class( mObj ) <- c("gam", "glm", "lm") mObj$coefficients <- rep(0, ncol(mObj$X)) # Needed to fool predict.gam pMat <- predict.gam(mObj, newdata = data, type = "lpmatrix") # Stuff needed for the sandwich estimator sandStuff <- list("XFull" = pMat, "EXXT" = crossprod(pMat, pMat) / n, # E(xx^T) "EXEXT" = tcrossprod( colMeans(pMat), colMeans(pMat))) # E(x)E(x)^T if( multicore ){ # Create cluster tmp <- .clusterSetUp(cluster = cluster, ncores = ncores) #, exportALL = TRUE) cluster <- tmp$cluster ncores <- tmp$ncores clusterCreated <- tmp$clusterCreated registerDoParallel(cluster) # Load "qgam" and user-specified packages tmp <- unique( c("qgam", paropts[[".packages"]]) ) clusterExport(cluster, "tmp", envir = environment()) clusterEvalQ(cluster, { lapply(tmp, library, character.only = TRUE) }) paropts[[".packages"]] <- NULL # Export bootstrap objects, prediction matrix and user-defined stuff tmp <- unique( c("bObj", "pMat", "wb", "ctrl", "argGam", ".getVp", ".egamFit", ".gamlssFit", paropts[[".export"]]) ) clusterExport(cluster, tmp, envir = environment()) paropts[[".export"]] <- NULL } # Estimated learning rates, num of bracket expansions, error rates and bracket ranges used in bisection sigs <- efacts <- errors <- numeric(nq) rans <- matrix(NA, nq, 2) store <- vector("list", nq) names(sigs) <- names(errors) <- rownames(rans) <- qu # Here we need bTol > aTol, otherwise the new bracket will be too close to the probable solution bTol <- 4*ctrl$aTol if( is.null(err) ){ err <- .getErrParam(qu = qu, gFit = gausFit) } if(ctrl$progress){ cat("Estimating learning rate. Each dot corresponds to a loss evaluation. 
\n") } for(ii in 1:nq) { oi <- oQu[ii] ef <- 1 if(ctrl$progress){ cat("qu =", qu[oi]) } repeat{ # Compute bracket srange <- isig + ef * brac # Estimate log(sigma) using brent methods with current bracket (srange) res <- .tuneLearnFast(mObj = mObj, bObj = bObj, pMat = pMat, sandStuff = sandStuff, wb = wb, qu = qu[oi], err = err[oi], srange = srange, gausFit = gausFit, varHat = varHat, multicore = multicore, cluster = cluster, ncores = ncores, paropts = paropts, control = ctrl, argGam = argGam) # Store loss function evaluations store[[oi]] <- cbind(store[[oi]], res[["store"]]) lsig <- res$minimum # If solution not too close to boundary store results and determine bracket for next iteration if( all(abs(lsig-srange) > ctrl$aTol * abs(diff(srange))) ){ sigs[oi] <- lsig rans[oi, ] <- srange efacts[oi] <- ef errors[oi] <- res$err # Determine what quantile needs to be dealt with next, then choose bracket and initialization using old results if(ii < nq) { kk <- oQu[ which.min(abs(qu[oQu[ii+1]] - qu[oQu[1:ii]])) ] isig <- sigs[kk] wd <- abs(diff(rans[kk, ])) brac <- c(-1, 1) * wd / 2 # If kk solution close to center of kk bracket, halve the bracket size # (unless the size of the bracket is < 10*tol or the bracket has been expanded in the old iteration) if( (abs(isig - mean(rans[kk, ])) < wd/ctrl$redWd) && (wd > 10*tol) && (efacts[kk] == 1)) brac <- brac / 2 } break } # If solution is close to bracket boundaries, we shift bracket and expand it # This (- wd + bTol*wd)/2 is divided by 2 to make the algorithm more reluctant to reduce lsig wd <- abs( diff(brac) ) isig <- lsig + ifelse(lsig-srange[1] < ctrl$aTol*wd, (- wd + bTol*wd)/2, wd - bTol*wd) ef <- 2*ef } if( ctrl$verbose && (nq>1) ) { tseq <- oQu[1:ii] tmp <- rans[tseq, , drop = FALSE] layout(matrix(c(1,1,2,3), 2, 2, byrow = TRUE), heights=c(2, 1)) par(mai = c(1, 1, 0.1, 0.1)) plot(qu[tseq], sigs[oQu[1:ii]], ylim = range(as.vector(tmp)), xlim = range(qu), col = 2, ylab = expression("Log(" * sigma * ")"), xlab = "qu") 
points(qu[tseq], tmp[ , 1], pch = 3) points(qu[tseq], tmp[ , 2], pch = 3) points(qu[tseq], rowMeans(tmp), pch = 3) for(zz in 1:ii) segments(qu[oQu[zz]], rowMeans(rans)[oQu[zz]] - abs(diff(tmp[zz, ]))/ctrl$redWd, qu[oQu[zz]], rowMeans(rans)[oQu[zz]] + abs(diff(tmp[zz, ]))/ctrl$redWd, col = 1) plot(qu, efacts, xlab = "qu", "ylab" = "Bracket expansions") plot(qu, errors) } if(ctrl$progress){ cat("done \n") } } if( any(errors > err) ){ message("We had to increase \`err\` for some of the quantiles. See fit$calibr$err") } names(sigs) <- qu out <- list("lsig" = sigs, "err" = errors, "ranges" = rans, "store" = store) attr(out, "class") <- "learnFast" # Close the cluster if it was opened inside this function if(multicore && clusterCreated) stopCluster(cluster) return( out ) } ########################################################################## ### Internal version, which works for a single quantile qu ########################################################################## .tuneLearnFast <- function(mObj, bObj, pMat, sandStuff, wb, qu, err, srange, gausFit, varHat, multicore, cluster, ncores, paropts, control, argGam) { # Initializing smoothing parameters using gausFit is a very BAD idea if( is.formula(mObj$formula) ) { # Extended Gam OR ... coefGau <- coef(gausFit) if( is.list(gausFit$formula) ){ lpi <- attr(predict(gausFit, newdata = gausFit$model[1:2, , drop = FALSE], type = "lpmatrix"), "lpi") coefGau <- coefGau[ lpi[[1]] ] } initM <- list("start" = coefGau + c(quantile(residuals(gausFit, type="response"), qu), rep(0, length(coefGau)-1)), "in.out" = NULL) # let gam() initialize sp via initial.spg() } else { # ... GAMLSS initM <- list("start" = NULL, "in.out" = NULL) # I have no clue } init <- list("initM" = initM, "initB" = vector("list", control$K)) # If we get convergence error, we increase "err" up to 0.2. 
If the error persists (or if the # error is of another nature) we throw an error repeat{ res <- tryCatch(.brent(brac=srange, f=.objFunLearnFast, mObj = mObj, bObj = bObj, wb = wb, init = init, pMat = pMat, SStuff = sandStuff, qu = qu, ctrl = control, varHat = varHat, err = err, argGam = argGam, multicore = multicore, paropts = paropts, cluster = cluster, t = control$tol, aTol = control$aTol), error = function(e) e) if("error" %in% class(res)){ if( grepl("can't correct step size", res) ) { if(err < 0.2){ err <- min(2*err, 0.2) if(control$verbose) message( paste("Increase \"err\" to ", err, " to get convergence") ) } else { stop("I increased \"err\" up to 0.2, but still didn't get convergence.") } } else { stop( res ) } } else { break } } res[["err"]] <- err return( res ) } qgam/R/I_ctrlSetup.R0000644000176200001440000000067213033231203013763 0ustar liggesusers.ctrlSetup <- function(innerCtrl, outerCtrl, verbose = TRUE) { if(length(outerCtrl)) { namOut <- names(outerCtrl) namIn <- names(innerCtrl) if (verbose && length(noNms <- namOut[! namOut %in% namIn])) { warning("unknown names in control list: ", paste(noNms, collapse = ", "), ". They will not be used") } if(length(outerCtrl)) innerCtrl[namOut] <- outerCtrl } return(innerCtrl) }qgam/R/I_cleanData.R0000644000176200001440000000465414116623717013666 0ustar liggesusers### # Removing all NAs, unused variables and factor levels from data # .cleanData <- function(.dat, .form, .drop){ if( inherits(.dat, "groupedData") ) { .dat <- as.data.frame( .dat ) } .vars <- .allVars1( interpret.gam(.form)$fake.formula ) # Data is a list, but not a data.frame hence it needs special treatment # NB: assumption here is that .dat contains 1 and only 1 data.frame and # remaining entries are matrices if( is.list(.dat) && !is.data.frame(.dat) ){ # Keep only elements that are data.frames or that appear in model formula .dat <- .dat[ which( sapply(.dat, is.data.frame) | (names(.dat) %in% .vars) ) ] # Check if there are matrices (e.g.
for functional effects) .matVar <- which( !sapply(.dat, is.data.frame) ) # No matrices in the list, we do the check only on the data.frame element if( !length(.matVar) ){ return( .cleanData(.dat = .dat[ sapply(.dat, is.data.frame) ][[1]], .form = .form, .drop = .drop) ) } .datI <- subset(.dat[ -.matVar ][[1]], select = .vars[!(.vars %in% names(.dat))] ) # Standard part of data .datM <- .dat[ .matVar ] # List of matrices # Find rows with NAs in two parts .badI <- attr(na.omit(.datI), "na.action") .badM <- attr(na.omit(do.call("cbind", .datM)), "na.action") # Now remove all bad rows from all elements of .dat .badAll <- union(.badI, .badM) if( !is.null(.badAll) ){ .datI <- .datI[-.badAll, ] .datM <- lapply(.datM, function(.X) .X[-.badAll, ]) } if( is.null(.drop) || .drop ) { .datI <- droplevels( .datI ) } .datO <- c(list(.datI), .datM) # NOTE .datO must be defined also when there are no bad rows } else{ .datO <- na.omit( subset(.dat, select = .vars) ) if( is.null(.drop) || .drop ) { .datO <- droplevels( .datO ) } # NOTE .datO not .dat0 !!!!!! } return( .datO ) } #### # Test case #### # .dat <- list(data.frame("x" = 1:10, "y" = 1:10, "u1" = rnorm(10)), # "z1" = matrix(1:30, 10, 3), # "z2" = matrix(1:30, 10, 3), # "u2" = matrix(1:30, 10, 3)) # .dat[[1]]$y[1] <- NA # .dat$z1[3, 2] <- NaN # .dat$z2[10, 2] <- NaN # # .vars <- c("x", "y", "z1", "z2") # # # The 1st, 3rd and 10th rows should be removed, and the u1 and u2 variables should disappear # qgam:::.cleanData(.dat = .dat, .form = y ~ s(x) + s(y, z1) + z2, .drop = TRUE)qgam/R/I_brent.R0000644000176200001440000001405313332310104013104 0ustar liggesusers #***************************************************************************** # Shamelessly copied (and translated) from Burkardt's # website https://people.sc.fsu.edu/~jburkardt/m_src/brent/local_min.m # ## LOCAL_MIN seeks a local minimum of a function F(X) in an interval [A,B]. # # Discussion: # # The method used is a combination of golden section search and # successive parabolic interpolation.
Convergence is never much slower # than that for a Fibonacci search. If F has a continuous second # derivative which is positive at the minimum (which is not at A or # B), then convergence is superlinear, and usually of the order of # about 1.324.... # # The values EPSI and T define a tolerance TOL = EPSI * abs ( X ) + T. # F is never evaluated at two points closer than TOL. # # If F is a unimodal function and the computed values of F are always # unimodal when separated by at least SQEPS * abs ( X ) + (T/3), then # LOCAL_MIN approximates the abscissa of the global minimum of F on the # interval [A,B] with an error less than 3*SQEPS*abs(LOCAL_MIN)+T. # # If F is not unimodal, then LOCAL_MIN may approximate a local, but # perhaps non-global, minimum to the same accuracy. # # Thanks to Jonathan Eggleston for pointing out a correction to the # golden section step, 01 July 2013. # # Licensing: # # This code is distributed under the GNU LGPL license. # # Modified: # # 13 April 2008 # # Author: # # Original FORTRAN77 version by Richard Brent. # MATLAB version by John Burkardt. # R version by Matteo Fasiolo # # Reference: # # Richard Brent, # Algorithms for Minimization Without Derivatives, # Dover, 2002, # ISBN: 0-486-41998-3, # LC: QA402.5.B74. # # Parameters: # # Input, real A, B, the endpoints of the interval. # # Input, real EPSI, a positive relative error tolerance. # EPSI should be no smaller than twice the relative machine precision, # and preferably not much less than the square root of the relative # machine precision. # # Input, real T, a positive absolute error tolerance. # # Input, function value = F ( x ), the name of a user-supplied # function whose local minimum is being sought. # # Output, real X, the estimated value of an abscissa # for which F attains a local minimum value in [A,B]. # # Output, real FX, the value F(X). # .brent <- function(brac, f, mObj, bObj, wb, init, pMat, qu, ctrl, varHat, cluster, t = .Machine$double.eps^0.25, aTol = 0, ...)
{ brac <- sort(brac) a <- brac[1] b <- brac[2] # Relative tolerance, as in ?optimize. No need to touch it, I think. epsi = sqrt(.Machine$double.eps) # cc is the square of the inverse of the golden ratio. cc = 0.5 * ( 3.0 - sqrt(5.0) ) sa = a sb = b x = sa + cc * ( b - a ) w = x v = w e = 0.0 feval = f(lsig = x, mObj = mObj, bObj = bObj, wb = wb, initM = init[["initM"]], initB = init[["initB"]], pMat = pMat, qu = qu, ctrl = ctrl, varHat = varHat, cluster = cluster, ...) fx = feval$outLoss fw = fx fv = fw # Storing all evaluations points and function values jj <- 1 store <- list() store[[jj]] <- list("x" = x, "f" = fx, "initM" = feval[["initM"]], "initB" = feval[["initB"]]) jj <- jj + 1 while( TRUE ) { m = 0.5 * ( sa + sb ) tol = epsi * abs ( x ) + t t2 = 2.0 * tol # Check the stopping criterion. We exit if we detect convergence or # if we detect that we are too close to the bracket extremes if( (abs(x-m) <= (t2 - 0.5 * (sb-sa))) || any(abs(x-c(a, b)) < aTol * abs(b-a)) ) { break } # Fit a parabola. r = 0.0 q = r p = q if ( tol < abs(e) ) { r = ( x - w ) * ( fx - fv ) q = ( x - v ) * ( fx - fw ) p = ( x - v ) * q - ( x - w ) * r q = 2.0 * ( q - r ) if( 0.0 < q ) { p = - p } q = abs ( q ) r = e e = d } # Take the parabolic interpolation step OR ... if ( (abs(p) < abs(0.5 * q * r)) && (q * ( sa - x )) < p && (p < q * ( sb - x )) ) { d = p / q u = x + d # F must not be evaluated too close to A or B. if ( ( u - sa ) < t2 || ( sb - u ) < t2 ) { if ( x < m ) { d = tol } else { d = - tol } } } else { # ... a golden-section step. if ( x < m ){ e = sb - x } else { e = sa - x } d = cc * e } # F must not be evaluated too close to X. if ( tol <= abs( d ) ){ u = x + d } else { if ( 0.0 < d ) { u = x + tol } else { u = x - tol } } init <- store[[ which.min(abs(u - sapply(store, "[[", "x"))) ]] feval = f(lsig = u, mObj = mObj, bObj = bObj, wb = wb, initM = init[["initM"]], initB = init[["initB"]], pMat = pMat, qu = qu, ctrl = ctrl, varHat = varHat, cluster = cluster, ...) 
fu = feval$outLoss store[[jj]] <- list("x" = u, "f" = fu, "initM" = feval[["initM"]], "initB" = feval[["initB"]]) jj <- jj + 1 # Update A, B, V, W, and X. if ( fu <= fx ){ if ( u < x ) { sb = x } else { sa = x } v = w fv = fw w = x fw = fx x = u fx = fu } else { if ( u < x ) { sa = u } else { sb = u } if ( (fu <= fw) || (w == x) ) { v = w fv = fw w = u fw = fu } else { if ( (fu <= fv) || (v == x) || (v == w) ){ v = u fv = fu } } } } store <- rbind( sapply(store, "[[", "x"), sapply(store, "[[", "f") ) return( list("minimum" = x, "objective" = fx, "store" = store) ) } ################### ######### TEST ################### # # Test 1 # f <- function (x, k) (x - k)^2 # xmin <- optimize(f, c(0, 1), tol = 0.0001, k = 1/3) # xmin # # qgam:::.brent(brac = c(0, 1), f = f, t = 1e-4, k = 1/3) # # # Test 2 # f <- function(x) ifelse(x > -1, ifelse(x < 4, exp(-1/abs(x - 1)), 10), 10) # fp <- function(x) { print(x); f(x) } # # plot(f, -2,5, ylim = 0:1, col = 2) # optimize(fp, c(-7, 20)) # ok # qgam:::.brent(c(-7, 20), f = f) qgam/R/I_shashQf.R0000644000176200001440000000036513447125341013400 0ustar liggesusers### # Quantile function of shash distribution .shashQf <- function(p, param) { mu <- param[1] sig <- exp(param[2]) eps <- param[3] del <- exp(param[4]) return( mu + (del * sig) * sinh((1/del) * asinh(qnorm(p)) + (eps/del)) ) }qgam/R/I_shashMode.R0000644000176200001440000000060613457634763013732 0ustar liggesusers### # Finding the mode of the shash density # .shashMode <- function(param) { .objFun <- function(mu){ - .llkShash(x = mu, mu = param[1], tau = param[2], eps = param[3], phi = param[4], deriv = 0)$l0 } range <- c(.shashQf(0.001, param), .shashQf(0.999, param)) mode <- optimize(f = .objFun, interval = range)$minimum return( mode ) }qgam/R/I_getErrParam.R0000644000176200001440000000450613615324554014221 0ustar liggesusers############## # Get "err" parameter for automatic loss smoothness selection ############## # .getErrParam <- function(qu, gFit){ # Estimated 
conditional mean and variance (latter could be constant) if( is.list(gFit$formula) ) { muHat <- gFit$fitted.values[ , 1] varHat <- 1 / gFit$fitted.values[ , 2]^2 } else { muHat <- gFit$fitted.values varHat <- gFit$sig2 } # Raw residuals from Gaussian regression, normalized using estimated conditional SD r <- ( gFit$y - muHat ) / sqrt( varHat ) n <- length( r ) # Fixing dimension d to EDF of Gaussian fit. # First use anova to find degrees of freedom of parametric terms in equation for location # Then find EDF of smooth terms in equation for location. unique() needed for "adaptive" smooths anv <- anova( gFit ) d <- sum( anv$pTerms.df[ !grepl("\\.1", rownames(anv$pTerms.table)) ] ) d <- d + sum( unique(pen.edf(gFit)[!grepl("s\\.1|te\\.1|ti\\.1|t2\\.1", names(pen.edf(gFit)))]) ) # Estimate parameters of shash density on standardized residuals parSH <- .fitShash( r )$par # Find probability p corresponding to the mode of shash density pmode <- .shashCDF(.shashMode( parSH ), parSH) err <- qu * 0 for(ii in 1:length(qu)){ quX <- qu[ii] # If quantile qu is too close to mode, lower it or increase it by 0.05 to avoid f' = 0 if( abs(quX - pmode) < 0.05 ){ quX <- pmode + sign(quX - pmode) * 0.05 quX <- max(min(quX, 0.99), 0.01) # To avoid going outside (0, 1) } # Quantile of shash at which derivatives should be estimated qhat <- .shashQf(quX, parSH) # Compute log(density) and log( abs(derivative of density) ) at quantile qhat # |Df / Dx| = |(Dlog(f) / Dx) * f| # log( |Df / Dx| ) = log( |Dlog(f) / Dx| ) + log( f ) lf0 <- .llkShash(qhat, mu = parSH[1], tau = parSH[2], eps = parSH[3], phi = parSH[4])$l0 lf1 <- - .llkShash(qhat, mu = parSH[1], tau = parSH[2], eps = parSH[3], phi = parSH[4], deriv = 1)$l1[1] # NB df/dx = -df/dmu lf1 <- log( abs(lf1) ) + lf0 # f / f'^2 = exp( log(f) - 2 * log(|f'|) ) but we avoid dividing by almost zero h <- (d*9 / (n*pi^4))^(1/3) * exp(lf0/3 - 2*lf1/3) # Setting err too high might be problematic, so here it can be at most 1 err[ii] <- min(h * 2 *
log(2) / sqrt(2*pi), 1) } return( err ) } qgam/R/check_learnFast.R0000644000176200001440000000724513763713243014620 0ustar liggesusers########################## #' Visual checks for the output of tuneLearnFast() #' #' @description Provides some visual checks to verify whether the Brent optimizer used by \code{tuneLearnFast()} worked correctly. #' @param obj the output of a call to \code{tuneLearnFast}. #' @param sel integer vector determining which of the plots will be produced. For instance if \code{sel = c(1, 3)} only #' the 1st and 3rd plots are showed. No entry of \code{sel} can be bigger than one plus the number of quantiles considered #' in the original \code{tuneLearnFast()} call. That is, if we estimated the learning rate for \code{qu = c(0.1, 0.4)}, #' then \code{max(sel)} must be <= 3. #' @param ... currently not used, here only for compatibility reasons. #' @return It produces several plots. #' @details The top plot in the first page shows the bracket used to estimate log(sigma) for each quantile. #' The brackets are delimited by the crosses and the red dots are the estimates. If a dot falls very close to one of the crosses, #' that might indicate problems. The bottom plot shows, for each quantile, the value of parameter \code{err} used. Sometimes the algorithm #' needs to increase \code{err} above its user-defined value to achieve convergence. Subsequent plots show, for each quantile, the value #' of the loss function corresponding to each value of log(sigma) explored by Brent algorithm. #' @author Matteo Fasiolo . #' @references Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020. #' Fast calibrated additive quantile regression. #' Journal of the American Statistical Association (to appear). #' \url{https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521}. 
#' @examples #' library(qgam) #' set.seed(525) #' dat <- gamSim(1, n=200) #' b <- tuneLearnFast(y ~ s(x0)+s(x1)+s(x2)+s(x3), #' data = dat, qu = c(0.4, 0.5), #' control = list("tol" = 0.05)) # <- sloppy tolerance to speed-up calibration #' check(b) #' check(b, 3) # Produces only third plot #' check.learnFast <- function(obj, sel = NULL, ...) { est <- obj$store brac <- obj$ranges lsig <- obj$lsig errors <- obj$err qu <- as.numeric(names(obj$lsig)) nq <- length(qu) sel <- if(is.null(sel)){ 1:(nq+1) } else { sort(sel) } oldPar <- par(no.readonly = TRUE) if( 1%in%sel ){ layout(matrix(c(1,1,2,2), 2, 2, byrow = TRUE), heights=c(2, 1)) par(mai = c(1, 1, 0.1, 0.1)) plot(qu, lsig, ylim = range(as.vector(brac)), xlim = range(qu)+c(-1e-5,+1e-5), col = 2, ylab = expression("Log(" * sigma * ")"), xlab = "Quantile") points(qu, brac[ , 1], pch = 3) points(qu, brac[ , 2], pch = 3) points(qu, rowMeans(brac), pch = 3) for(zz in 1:nq) segments(qu[zz], mean(brac[zz, ]) - abs(diff(brac[zz, ]))/4, qu[zz], mean(brac[zz, ]) + abs(diff(brac[zz, ]))/4, col = 1) plot(qu, errors, xlab = "Quantile") par(oldPar) } if(any(sel > 1)) { selQ <- sel[sel>1] - 1 # readline(prompt = "Press to see the next plot...") pDim <- min( ceiling(sqrt(length(selQ))), 2 ) par(mfrow = c(pDim, pDim)) for( ii in selQ ) { plot(sort(est[[ii]][1, ]), est[[ii]][2, order(est[[ii]][1, ])], main = substitute(Quantile == x, list(x = round(qu[ii], 3))), ylab = "loss", xlab = expression(log(sigma)), type = 'b') abline(v = est[[ii]][1, which.min(est[[ii]][2, ])], col = 2) #if((ii %% (pDim^2) == 0) && (ii!=nq)) readline(prompt = "Press to see the next plot...") } } par(oldPar) return( invisible(NULL) ) } qgam/R/I_qqVett.R0000644000176200001440000000040213033231203013241 0ustar liggesusers#### Vettorized empirical cdf .qqVett <- function(y, mu){ nq <- ncol( mu ) nobs <- length( y ) out <- sapply(1:nq, function(ii){ return( sum( (y - mu[ , ii]) < 0 ) / nobs ) }) return( out ) 
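# A small self-contained sketch (not part of the package) restating what the
# .qqVett() helper above computes: for each column of mu (fitted quantile
# curves), the fraction of responses y falling below it. For a well calibrated
# fit to quantile qu this fraction should be close to qu. The helper is
# re-defined here (as qqVett) so the example runs on its own.

```r
# Stand-alone copy of the .qqVett logic: column-wise empirical proportion
# of responses lying strictly below the fitted values.
qqVett <- function(y, mu) {
  nobs <- length(y)
  sapply(seq_len(ncol(mu)), function(ii) sum((y - mu[, ii]) < 0) / nobs)
}

y  <- 1:10
mu <- cbind(rep(2.5, 10), rep(8.5, 10)) # two constant "fitted quantile" curves
qqVett(y, mu) # 0.2 0.8: 2 of 10 responses fall below 2.5, 8 of 10 below 8.5
```

# Empirical proportions of this kind are what the check() methods compare
# against the nominal quantile level qu.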
}qgam/R/I_tuneLearnFullFits.R0000644000176200001440000001021314015714063015377 0ustar liggesusers############### #### Internal function that does the full-data fits ############### .tuneLearnFullFits <- function(lsig, form, fam, qu, err, ctrl, data, argGam, gausFit, varHat, initM){ n <- nrow(data) nt <- length(lsig) # Create gam object for full data fits mainObj <- do.call("gam", c(list("formula" = form, "family" = quote(elf(qu = qu, co = NA, theta = NA, link = ctrl$link)), "data" = quote(data), "fit" = FALSE), argGam)) # Remove "sp" as it is already been fixed argGam <- argGam[ names(argGam) != "sp" ] # Create reparametrization list for... repar <- if( is.list(form) ){ # ... GAMLSS case OR... Sl.setup( mainObj ) } else { # ... extended GAM case .prepBootObj(obj = mainObj, eps = NULL, control = argGam$control)[ c("UrS", "Mp", "U1") ] } # these are needed for sandwich calibration # Store degrees of freedom for each value of lsig tmp <- pen.edf( gausFit ) if( length(tmp) ) { edfStore <- list( ) } else { edfStore <- NULL } # FULL data fits, used to estimate the smoothing parameters store <- vector("list", nt) for( ii in 1:nt ) # START lsigma loop, from smallest to largest (because when lsig is large the smooth params diverge) { mainObj$family$putCo( err * sqrt(2*pi*varHat) / (2*log(2)) ) mainObj$family$putTheta( lsig[ii] ) convProb <- FALSE # Variable indicating convergence problems withCallingHandlers({ fit <- do.call("gam", c(list("G" = quote(mainObj), "in.out" = initM[["in.out"]], "start" = initM[["start"]]), argGam)) }, warning = function(w) { if (length(grep("Fitting terminated with step failure", conditionMessage(w))) || length(grep("Iteration limit reached without full convergence", conditionMessage(w)))) { message( paste("log(sigma) = ", round(lsig[ii], 3), " : outer Newton did not converge fully.", sep = "") ) convProb <<- TRUE invokeRestart("muffleWarning") } }) if( !is.null(edfStore) ) { edfStore[[ii]] <- c(lsig[ii], pen.edf(fit)) } # Create prediction 
matrix (only in the first iteration) if( ii == 1 ){ pMat <- pMatFull <- predict.gam(fit, type = "lpmatrix") lpi <- attr(pMat, "lpi") if( !is.null(lpi) ){ pMat <- pMat[ , lpi[[1]]] # "lpi" attribute lost here attr(pMat, "lpi") <- lpi } } sdev <- NULL if(ctrl$loss %in% c("cal", "calFast") && ctrl$vtype == "m"){ Vp <- fit$Vp # In the gamlss case, we are interested only in the calibrating the location mode if( !is.null(lpi) ){ Vp <- fit$Vp[lpi[[1]], lpi[[1]]] } sdev <- sqrt(rowSums((pMat %*% Vp) * pMat)) # same as sqrt(diag(pMat%*%Vp%*%t(pMat))) but (WAY) faster } initM <- list("start" = coef(fit), "in.out" = list("sp" = fit$sp, "scale" = 1)) if( ctrl$loss == "calFast" ){ # Fast calibration OR ... if( ii == 1 ){ EXXT <- crossprod(pMatFull, pMatFull) / n # E(xx^T) EXEXT <- tcrossprod( colMeans(pMatFull), colMeans(pMatFull) ) # E(x)E(x)^T } Vbias <- .biasedCov(fit = fit, X = pMatFull, EXXT = EXXT, EXEXT = EXEXT, lpi = lpi) store[[ii]] <- list("loss" = .sandwichLoss(mFit = fit, X = pMat, XFull = pMatFull, sdev = sdev, repar = repar, alpha = Vbias$alpha, VSim = Vbias$V), "convProb" = convProb) } else { # Bootstrapping or cross-validation: full data fit will be used when fitting the bootstrap datasets store[[ii]] <- list("sp" = fit$sp, "fit" = fit$fitted, "co" = fit$family$getCo(), "init" = initM$start, "sdev" = sdev, "weights" = fit$working.weights, "res" = fit$residuals, "convProb" = convProb) } } if( !is.null(edfStore) ){ edfStore <- do.call("rbind", edfStore) colnames(edfStore) <- c("lsig", names( pen.edf(fit) )) } return( list("store" = store, "edfStore" = edfStore, "pMat" = if(ctrl$loss != "calFast") { pMat } else { NULL } ) ) } qgam/R/check_learn.R0000644000176200001440000000505013763713266013777 0ustar liggesusers########################## #' Visual checks for the output of tuneLearn() #' #' @description Provides some visual plots showing how the calibration criterion and the effective degrees of #' freedom of each smooth component vary with the learning rate. 
#'
#' @param obj the output of a call to \code{tuneLearn}.
#' @param sel this function produces two plots, set this parameter to 1 to plot only the first,
#'            to 2 to plot only the second or leave it to 1:2 to plot both.
#' @param ... currently not used, here only for compatibility reasons.
#' @return It produces several plots.
#' @details The first plot shows how the calibration loss, which we are trying to minimize, varies with the
#'          log learning rate. This curve should look quite smooth; if it doesn't, try to increase
#'          \code{err} or \code{control$K} (the number of bootstrap samples) in the original call to
#'          \code{tuneLearn}. The second plot shows how the effective degrees of freedom of each smooth term
#'          vary with log(sigma). Generally as log(sigma) increases the complexity of the fit decreases, hence
#'          the slope is negative.
#' @author Matteo Fasiolo.
#' @references Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020.
#'             Fast calibrated additive quantile regression.
#'             Journal of the American Statistical Association (to appear).
#'             \url{https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521}.
#' @examples
#' library(qgam)
#' set.seed(525)
#' dat <- gamSim(1, n=200)
#' b <- tuneLearn(lsig = seq(-0.5, 1, length.out = 10),
#'                y~s(x0)+s(x1)+s(x2)+s(x3),
#'                data=dat, qu = 0.5)
#' check(b)
#'
check.learn <- function(obj, sel = 1:2, ...)
{ sig <- as.numeric( names( obj$loss ) ) if( 1 %in% sel ){ # readline(prompt = "Press to see the next plot...") plot(sig, obj$loss, type = "b", ylab = "Calibration Loss", xlab = expression("log(" * sigma * ")")) rug(sig[obj$convProb], side = 3, col = 2, lwd = 2) } if( !is.null(obj$edf) && 2 %in% sel ) { # readline(prompt = "Press to see the next plot...") nc <- ncol(obj$edf) matplot(obj$edf[ , 1], obj$edf[ , 2:nc], type = 'b', ylab = "Penalized EDF", xlab = expression("log(" * sigma * ")"), pch = 1:nc, col = 1:nc) legend("topright", colnames(obj$edf)[2:nc], pch = 1:nc, col = 1:nc, bg="transparent") rug(sig[obj$convProb], side = 3, col = 2, lwd = 2) } return( invisible(NULL) ) } qgam/R/I_egamFit.R0000644000176200001440000003545313151257162013364 0ustar liggesusers.egamFit <- function(x, y, sp, Eb, UrS, weights, start, offset, U1, Mp, family, control, null.coef, needVb) { ## Routine for fitting GAMs beyond exponential family. ## Inputs as gam.fit3 except that family is of class "extended.family", while ## sp contains the vector of extended family parameters, followed by the log smoothing parameters, scoreType <- "REML" deriv <- 0 theta <- family$getTheta() ## penalized <- if (length(UrS)>0) TRUE else FALSE x <- as.matrix(x) nSp <- length(sp) rank.tol <- .Machine$double.eps*100 ## tolerance to use for rank deficiency q <- ncol(x) n <- nobs <- nrow(x) xnames <- dimnames(x)[[2]] ynames <- if (is.matrix(y)) rownames(y) else names(y) ## Now a stable re-parameterization is needed.... 
if (length(UrS)) { rp <- gam.reparam(UrS, sp, deriv) T <- diag(q) T[1:ncol(rp$Qs),1:ncol(rp$Qs)] <- rp$Qs T <- U1%*%T ## new params b'=T'b old params null.coef <- t(T)%*%null.coef # Start is a list of vectors, each is a different possible initialization for(jj in 1:length(start)){ start[[jj]] <- t(T)%*%start[[jj]] } ## form x%*%T in parallel x <- .Call(.C_qgam_pmmult2,x,T,0,0,control$nthreads) rS <- list() for (i in 1:length(UrS)) { rS[[i]] <- rbind(rp$rS[[i]],matrix(0,Mp,ncol(rp$rS[[i]]))) } ## square roots of penalty matrices in current parameterization Eb <- Eb%*%T ## balanced penalty matrix rows.E <- q-Mp Sr <- cbind(rp$E,matrix(0,nrow(rp$E),Mp)) St <- rbind(cbind(rp$S,matrix(0,nrow(rp$S),Mp)),matrix(0,Mp,q)) } else { T <- diag(q); St <- matrix(0,q,q) rSncol <- rows.E <- Eb <- Sr <- 0 rS <- list(0) rp <- list(det=0,det1 = 0,det2 = 0,fixed.penalty=FALSE) } ## re-parameterization complete. Initialization.... nvars <- ncol(x) if (nvars==0) stop("emtpy models not available") if (is.null(weights)) weights <- rep.int(1, nobs) if (is.null(offset)) offset <- rep.int(0, nobs) linkinv <- family$linkinv valideta <- family$valideta validmu <- family$validmu dev.resids <- family$dev.resids ## need an initial `null deviance' to test for initial divergence... ## if (!is.null(start)) null.coef <- start - can be on edge of feasible - not good null.eta <- as.numeric(x%*%null.coef + as.numeric(offset)) ## call the families initialization code... 
mustart <- NULL eval(family$initialize) coefold <- mu <- eta <- NULL old.pdev <- null.pdev <- sum(dev.resids(y, linkinv(null.eta), weights, theta)) + t(null.coef)%*%St%*%null.coef # Calculating pen deviance using each initialization and choosing the best tmp <- lapply(start, function(.st){ if (length(.st) != nvars){stop("Length of start should equal ", nvars, " and correspond to initial coefs for ", deparse(xnames))} .eta <- offset + as.vector(if (NCOL(x) == 1) x * .st else x %*% .st) .mu <- linkinv(.eta) .pdev <- sum(dev.resids(y, .mu, weights, theta)) + t(.st)%*%St%*%.st return( list("eta"=.eta, "mu"=.mu, "pdev"=.pdev, "start"=.st) ) }) tmp <- tmp[[ which.min(sapply(tmp, "[[", "pdev")) ]] coefold <- start <- tmp$start eta <- etaold <- tmp$eta mu <- tmp$mu pdev <- tmp$pdev # BAD start (it's worse than null.coef) reset everything if (pdev>old.pdev){ start <- coefold <- eta <- etaold <- mu <- NULL } else { # GOOD start old.pdev <- pdev } ## Initialization of mu and eta (and coefold) if "start" did not do it if(is.null(coefold)){ coefold <- null.coef } if (is.null(eta)){ eta <- family$linkfun(mustart) mu <- linkinv(eta) etaold <- eta } conv <- boundary <- FALSE for (iter in 1:control$maxit) { ## start of main fitting iteration if (control$trace) cat(iter," ") dd <- dDeta(y,mu,weights,theta,family,0) ## derivatives of deviance w.r.t. 
eta # good <- is.finite(dd$Deta.Deta2) w <- dd$Deta2 * .5; wz <- w*(eta-offset) - .5*dd$Deta z <- (eta-offset) - dd$Deta.Deta2 good <- is.finite(z)&is.finite(w) if (control$trace&sum(!good)>0) cat("\n",sum(!good)," not good\n") if (sum(!good)) { use.wy <- TRUE good <- is.finite(w)&is.finite(wz) z[!is.finite(z)] <- 0 ## avoid NaN in .C call - unused anyway } else use.wy <- family$use.wz oo <- .C(.C_qgam_pls_fit1, y=as.double(z[good]),X=as.double(x[good,]),w=as.double(w[good]),wy = as.double(wz[good]), E=as.double(Sr),Es=as.double(Eb),n=as.integer(sum(good)), q=as.integer(ncol(x)),rE=as.integer(rows.E),eta=as.double(z), penalty=as.double(1),rank.tol=as.double(rank.tol), nt=as.integer(control$nthreads),use.wy=as.integer(use.wy)) if (oo$n<0) { ## then problem is indefinite - switch to +ve weights for this step if (control$trace) cat("**using positive weights\n") # problem is that Fisher can be very poor for zeroes ## index weights that are finite and positive good <- is.finite(dd$Deta2) good[good] <- dd$Deta2[good]>0 w[!good] <- 0 wz <- w*(eta-offset) - .5*dd$Deta z <- (eta-offset) - dd$Deta.Deta2 good <- is.finite(z)&is.finite(w) if (sum(!good)) { use.wy <- TRUE good <- is.finite(w)&is.finite(wz) z[!is.finite(z)] <- 0 ## avoid NaN in .C call - unused anyway } else use.wy <- family$use.wz oo <- .C(.C_qgam_pls_fit1, ##.C_pls_fit1, y=as.double(z[good]),X=as.double(x[good,]),w=as.double(w[good]),wy = as.double(wz[good]), E=as.double(Sr),Es=as.double(Eb),n=as.integer(sum(good)), q=as.integer(ncol(x)),rE=as.integer(rows.E),eta=as.double(z), penalty=as.double(1),rank.tol=as.double(rank.tol), nt=as.integer(control$nthreads),use.wy=as.integer(use.wy)) } # if(control$epsilon == Inf){ ### MATTEO ################################### # startOld <- drop(start) # lprOld <- drop(x%*%startOld) # } start <- oo$y[1:ncol(x)] ## current coefficient estimates penalty <- oo$penalty ## size of penalty eta <- drop(x%*%start) ## the linear predictor (less offset) if (any(!is.finite(start))) { 
## test for breakdown conv <- FALSE warning("Non-finite coefficients at iteration ", iter) return(list(REML=NA)) ## return immediately signalling failure } mu <- linkinv(eta <- eta + offset) dev <- sum(dev.resids(y, mu, weights,theta)) ########################## MATTEO ################################### # if(control$epsilon == Inf) # { # .myObj <- function(.alpha){ # .param <- .alpha * start + (1-.alpha) * startOld # .lpr <- .alpha * eta + (1-.alpha) * lprOld # .mu <- linkinv(.lpr) # .dev <- sum(dev.resids(y, .mu, weights, theta)) # return(.dev) # } # # .opt <- optimize(.myObj, c(0, 2))$minimum # cat(" ", .opt) # # start <- .opt * start + (1-.opt) * startOld # eta <- .opt * eta + (1-.opt) * lprOld # mu <- linkinv(eta) # dev <- sum(dev.resids(y, mu, weights,theta)) # # boundary <- TRUE # penalty <- t(start)%*%St%*%start ## reset penalty too # } ########################## MATTEO ################################### ## now step halve under non-finite deviance... if (!is.finite(dev)) { if (is.null(coefold)) { if (is.null(null.coef)) stop("no valid set of coefficients has been found:please supply starting values", call. = FALSE) ## Try to find feasible coefficients from the null.coef and null.eta coefold <- null.coef etaold <- null.eta } #warning("Step size truncated due to divergence", # call. = FALSE) ii <- 1 while (!is.finite(dev)) { if (ii > control$maxit) stop("inner loop 1; can't correct step size") ii <- ii + 1 start <- (start + coefold)/2 eta <- (eta + etaold)/2 mu <- linkinv(eta) dev <- sum(dev.resids(y, mu, weights,theta)) } boundary <- TRUE penalty <- t(start)%*%St%*%start ## reset penalty too if (control$trace) cat("Step halved: new deviance =", dev, "\n") } ## end of infinite deviance correction ## now step halve if mu or eta are out of bounds... if (!(valideta(eta) && validmu(mu))) { #warning("Step size truncated: out of bounds", # call. 
= FALSE) ii <- 1 while (!(valideta(eta) && validmu(mu))) { if (ii > control$maxit) stop("inner loop 2; can't correct step size") ii <- ii + 1 start <- (start + coefold)/2 eta <- (eta + etaold)/2 mu <- linkinv(eta) } boundary <- TRUE dev <- sum(dev.resids(y, mu, weights)) penalty <- t(start)%*%St%*%start ## need to reset penalty too if (control$trace) cat("Step halved: new deviance =", dev, "\n") } ## end of invalid mu/eta handling ## now check for divergence of penalized deviance.... pdev <- dev + penalty ## the penalized deviance if (control$trace) cat("penalized deviance =", pdev, "\n") div.thresh <- 10*(.1+abs(old.pdev))*.Machine$double.eps^.5 if (pdev-old.pdev>div.thresh) { ## solution diverging ii <- 1 ## step halving counter if (iter == 1 && (pdev-null.pdev>div.thresh)) { ## Doing worse than null.coef at 1st iterat -> shrink towards zero etaold <- null.eta; coefold <- null.coef; old.pdev <- null.pdev } while (pdev - old.pdev > div.thresh) { ## step halve until pdev <= old.pdev if (ii > 100) stop("inner loop 3; can't correct step size") ii <- ii + 1 start <- (start + coefold)/2 eta <- (eta + etaold)/2 mu <- linkinv(eta) dev <- sum(dev.resids(y, mu, weights,theta)) pdev <- dev + t(start)%*%St%*%start ## the penalized deviance if (control$trace) cat("Step halved: new penalized deviance =", pdev, "\n") } } ## end of pdev divergence ## convergence testing... if (abs(pdev - old.pdev)/(0.1 + abs(pdev)) < control$epsilon) { ## Need to check coefs converged adequately, to ensure implicit differentiation ## ok. Testing coefs unchanged is problematic under rank deficiency (not guaranteed to ## drop same parameter every iteration!) 
grad <- 2 * t(x[good,])%*%((w[good]*(x%*%start)[good]-wz[good]))+ 2*St%*%start if (max(abs(grad)) > control$epsilon*max(abs(start+coefold))/2) { old.pdev <- pdev ## not converged quite enough coef <- coefold <- start etaold <- eta ##muold <- mu } else { ## converged conv <- TRUE coef <- start break } } else { ## not converged old.pdev <- pdev coef <- coefold <- start etaold <- eta } } ## end of main loop ## so at this stage the model has been fully estimated coef <- as.numeric(T %*% coef) ## now obtain derivatives, if these are needed... # check.derivs <- FALSE # while (check.derivs) { ## debugging code to check derivatives # eps <- 1e-7 # fmud.test(y,mu,weights,theta,family,eps = eps) # fetad.test(y,mu,weights,theta,family,eps = eps) # } ##################### Calculate the Bayesian covariance matrix if( needVb ) { dd <- dDeta(y,mu,weights,theta,family,deriv) w <- dd$Deta2 * .5 z <- (eta-offset) - dd$Deta.Deta2 ## - .5 * dd$Deta[good] / w wf <- pmax(0,dd$EDeta2 * .5) ## Fisher type weights wz <- w*(eta-offset) - 0.5*dd$Deta ## Wz finite when w==0 gdi.type <- if (any(abs(w)<.Machine$double.xmin*1e20)||any(!is.finite(z))) 1 else 0 good <- is.finite(wz)&is.finite(w) residuals <- z - (eta - offset) residuals[!is.finite(residuals)] <- NA z[!is.finite(z)] <- 0 ## avoid passing NA etc to C code ntot <- length(theta) + length(sp) rSncol <- unlist(lapply(UrS,ncol)) ## Now drop any elements of dd that have been dropped in fitting... 
if (sum(!good)>0) { ## drop !good from fields of dd, weights and pseudodata z <- z[good]; w <- w[good]; wz <- wz[good]; wf <- wf[good] dd$Deta <- dd$Deta[good];dd$Deta2 <- dd$Deta2[good] dd$EDeta2 <- dd$EDeta2[good] if (deriv>0) dd$Deta3 <- dd$Deta3[good] if (deriv>1) dd$Deta4 <- dd$Deta4[good] if (length(theta)>1) { if (deriv>0) { dd$Dth <- dd$Dth[good,]; dd$Detath <- dd$Detath[good,]; dd$Deta2th <- dd$Deta2th[good,] if (deriv>1) { dd$Detath2 <- dd$Detath2[good,]; dd$Deta3th <- dd$Deta3th[good,] dd$Deta2th2 <- dd$Deta2th2[good,];dd$Dth2 <- dd$Dth2[good,] } } } else { if (deriv>0) { dd$Dth <- dd$Dth[good]; dd$Detath <- dd$Detath[good]; dd$Deta2th <- dd$Deta2th[good] if (deriv>1) { dd$Detath2 <- dd$Detath2[good]; dd$Deta3th <- dd$Deta3th[good] dd$Deta2th2 <- dd$Deta2th2[good]; dd$Dth2 <- dd$Dth2[good] } } } } oo <- .C(.C_qgam_gdi2, X=as.double(x[good,]),E=as.double(Sr),Es=as.double(Eb),rS=as.double(unlist(rS)), U1 = as.double(U1),sp=as.double(exp(sp)),theta=as.double(theta), z=as.double(z),w=as.double(w),wz=as.double(wz),wf=as.double(wf),Dth=as.double(dd$Dth), Det=as.double(dd$Deta), Det2=as.double(dd$Deta2),Dth2=as.double(dd$Dth2),Det.th=as.double(dd$Detath), Det2.th=as.double(dd$Deta2th),Det3=as.double(dd$Deta3),Det.th2 = as.double(dd$Detath2), Det4 = as.double(dd$Deta4),Det3.th=as.double(dd$Deta3th), Deta2.th2=as.double(dd$Deta2th2), beta=as.double(coef),b1=as.double(rep(0,ntot*ncol(x))),w1=as.double(rep(0,ntot*length(z))), D1=as.double(rep(0,ntot)),D2=as.double(rep(0,ntot^2)), P=as.double(0),P1=as.double(rep(0,ntot)),P2 = as.double(rep(0,ntot^2)), ldet=as.double(1-2*(scoreType=="ML")),ldet1 = as.double(rep(0,ntot)), ldet2 = as.double(rep(0,ntot^2)), rV=as.double(rep(0,ncol(x)^2)), rank.tol=as.double(.Machine$double.eps^.75),rank.est=as.integer(0), n=as.integer(sum(good)),q=as.integer(ncol(x)),M=as.integer(nSp), n.theta=as.integer(length(theta)), Mp=as.integer(Mp),Enrow=as.integer(rows.E), rSncol=as.integer(rSncol),deriv=as.integer(deriv), fixed.penalty = 
as.integer(rp$fixed.penalty),nt=as.integer(control$nthreads), type=as.integer(gdi.type),dVkk=as.double(rep(0,nSp^2))) rV <- matrix(oo$rV,ncol(x),ncol(x)) ## rV%*%t(rV)*scale gives covariance matrix rV <- T %*% rV Vb <- rV%*%t(rV) } else { Vb <- NULL } list("coefficients" = coef, "Vb" = Vb) } ## gam.fitF qgam/MD50000644000176200001440000001002214146767672011526 0ustar liggesusersa062271330e52a707f666c8c1f0a7d7f *DESCRIPTION 88e9ce78060ab650f38e52cf0c2c22c2 *NAMESPACE 1bb4fd802155c44bff5f37478631ee09 *R/I_adTest.R 10a9769b67dae844e019d302ea114bd2 *R/I_allVars1.R b1fb4dc4e0f00987408028e5cccfda79 *R/I_biasedCov.R fd2700c41ba817d3606ff635d93702ba *R/I_brent.R cffd53392fb659ddddc0c001277b73a6 *R/I_checkLoss.R b90bf02e85b0cdfb2ec92f764f167c5c *R/I_cleanData.R 42e973a3c1b72fcc000d79d259cb11b2 *R/I_clusterExport.R 934f96c56bfd807400d03eeacb249b5c *R/I_clusterSetUp.R f6a05321a7ebe8362b920e338cd1fbbf *R/I_colVars.R 8dfe4abdb20efe1bac8ac191ec0fe22a *R/I_ctrlSetup.R 0cf8eacda9c98b90ee0dac41f092153a *R/I_egamFit.R 87c4e61fdfe884d648a8a1195d021715 *R/I_fitShash.R aac4c3e5e0ea74b3eb60f19e1d80fe64 *R/I_gamlssFit.R 3f27041559c52476e5d63da14cd18507 *R/I_getErrParam.R bb301619b732c787e5252a5ac3ba59df *R/I_getPenMatrix.R 25d8c07608e82b9dc4e33756b98fc006 *R/I_getVp.R d706680e21dc56dede19a69e02a0eed9 *R/I_kcheck.R e5d73eeb803c15dfdc707b366bb89bfc *R/I_llkGrads.R d3cae12f9ec0a4ee63647e4f5c1ffcf2 *R/I_llkShash.R 9106dd486952911248c4a4da39dd5da0 *R/I_objFunLearnFast.R 10f24183e45cf285881fa4b42045c33b *R/I_prepBootObj.R b76ae53a8bf1857a52768dcfe9fb95ab *R/I_qqVett.R e53b3ffb0ad591d3333e5c746f24aa4f *R/I_sandwichLoss.R 8554d161ce299dda267ad5737f3667a9 *R/I_shashCDF.R e72fd1adfb9224f61fce8dc3bf095b15 *R/I_shashMode.R 21474e13359227da5829c65b0d6fd84c *R/I_shashQf.R 2b19b8c06d7cbec7df3f0c10473b1f4c *R/I_tuneLearnBootstrapping.R 89922450b738801a6fa65d38c3450b29 *R/I_tuneLearnFullFits.R 6c54f9a5e059f7cf4bc91a99785c9f8b *R/check_learn.R 0d2069600d963cdf0d11d00c4a9a9234 *R/check_learnFast.R 
249f2e67e69823f718c873a87b41dd51 *R/check_qgam.R 9170dbc23b58809c0dfddbfd9a097d42 *R/cqcheck.R c5fdf29f19d1e2e7d20c4c26b9461778 *R/cqcheckI.R eee8eb9ce1abaa7e670f0171217fdb0c *R/elf.R 70c9ab61cb0fedafd764c62bb5f63b3d *R/elflss.R f547e2d10e597a0ff879f6c1e0341ee0 *R/generic_functions.R cdba8c823b13f9c39649d046c9176edf *R/log1pexp.R 21942b5cbfba6990d2d022e440a0ba82 *R/mqgam.R ffe19724e6cd7ee829b5b53a4167694a *R/pinLoss.R 6d95696d368c53310cce2c180274a825 *R/qdo.R a15af041875330a4d47d461ab08796ec *R/qgam.R 194e514348dde289289beee4e6fc15b3 *R/sigmoid.R 592215c6abb4fc0269ecc711709c7866 *R/tuneLearn.R fe7012519b9079b7ccc4186a1b1f7d16 *R/tuneLearnFast.R b94b7d4ce3dbd1699bf0534c36f1a63b *build/partial.rdb 1a2d822c0c6cac892b2148e6b009b5d4 *build/vignette.rds 715ad9f122c07d0e336666db699164e6 *data/AUDem.RData f3630529abbae44b0137d9b8de2011d0 *data/UKload.RData ca19853f49f913c28d4964c22fc24dfc *inst/CITATION c94e1aa297785472facd2a2e0673a57e *inst/doc/qgam.R 32506fcc91cb7d860779ec39d0cd573f *inst/doc/qgam.Rmd f59027ad098993ebdd24e971862e7241 *inst/doc/qgam.html f638e6e80aa1dd498ae5e202cca2c80f *man/AUDem.Rd fa7d3525b4c32b5ce457aeb73a91e050 *man/UKload.Rd ee4e45bf588595552fd228617247a234 *man/check.Rd 1c2c33ea8224b4c921e870870d8fe37d *man/check.learn.Rd f00ee16fff58a86cadb879b42f4c8701 *man/check.learnFast.Rd 9318d637b1422aafa21e41253f03ca87 *man/check.qgam.Rd b2590cf2c32f3fed50ea200e2dfd949f *man/cqcheck.Rd 27f11bf9968f7411167eb715a247fb47 *man/cqcheckI.Rd abd074398d9b1155691d4da3006e2a7f *man/elf.Rd 6f6a1c0948bc8a8d69d8c26afb60cab8 *man/elflss.Rd 1ccc4e54fed57b2b5af0e7138ed35eb2 *man/log1pexp.Rd 49f2d5b2a11b29b25a4ea083fed3806c *man/mqgam.Rd c475afd3eb1359ecfd545c9d0abb8fcf *man/pinLoss.Rd ccfeaf51cfa49d6c909ee3b81dc59fb7 *man/qdo.Rd 498ff0d83c4bdac66fe3cc98d11bf152 *man/qgam.Rd 88b5404cf89e1157b2fac23ab53547c5 *man/sigmoid.Rd ef11626e6141b32ac5ad9fbfd42e14e8 *man/tuneLearn.Rd 0b77bec8e527bf7d6f991ea15d3aa3a5 *man/tuneLearnFast.Rd 0a5018d9bb97f2e613d024f0354d3a49 *src/init.c 
721e37d3b68489ad722cc09933704e3b *src/mgcv_wrap.c b6338848e143fa04e8f59da1f60a170e *tests/testthat.R 987ac7f910d27a260722dab0d64c8f7b *tests/testthat/test-calFastTuneLearn.R c52144b7e0ea8e34ebd0c7ce26ffba2e *tests/testthat/test-calFastTuneLearnFast.R 05685ad185bbdc4c51ab0279d8b1fbb9 *tests/testthat/test-check_qgam.R ec2c323e234f635eca5ef85584356ff6 *tests/testthat/test-tuneLearn.R ad2392e3a011fe4f19037bf57de3064d *tests/testthat/test-tuneLearnFast.R 32506fcc91cb7d860779ec39d0cd573f *vignettes/qgam.Rmd qgam/inst/0000755000176200001440000000000014146705710012161 5ustar liggesusersqgam/inst/doc/0000755000176200001440000000000014146705710012726 5ustar liggesusersqgam/inst/doc/qgam.Rmd0000644000176200001440000005252014073267073014327 0ustar liggesusers--- title: "qgam: quantile non-parametric additive models" date: '`r format(Sys.Date(), "%B %d %Y")`' author: "Matteo Fasiolo, Simon N. Wood, Yannig Goude, and Raphael Nedellec" output: html_document: toc: true number_sections: true vignette: > %\VignetteEngine{knitr::rmarkdown} %\VignetteIndexEntry{quantile_mgcViz} %\VignetteEncoding{UTF-8} --- ```{r setup, include=FALSE} library(knitr) opts_chunk$set(out.extra='style="display:block; margin: auto"', fig.align="center", tidy=FALSE) ``` This R package offers methods for fitting additive quantile regression models based on splines, using the methods described in [Fasiolo et al., 2017](https://arxiv.org/abs/1707.03307). The main fitting functions are: - `qgam()` fits an additive quantile regression model to a single quantile. Very similar to `mgcv::gam()`. It returns an object of class `qgam`, which inherits from `mgcv::gamObject`. - `mqgam()` fits the same additive quantile regression model to several quantiles. It is more efficient that calling `qgam()` several times, especially in terms of memory usage. - `tuneLearn()` useful for tuning the learning rate of the Gibbs posterior. It evaluates a calibration loss function on a grid of values provided by the user. 
- `tuneLearnFast()` similar to `tuneLearn()`, but here the learning rate is selected by minimizing the calibration loss, using Brent's method.

A first example: smoothing the motorcycle dataset
=======================

Let's start with a simple example. Here we are fitting a regression model with an adaptive spline basis to quantile 0.8 of the motorcycle dataset.

```{r 1, message = F}
library(qgam); library(MASS)
if( suppressWarnings(require(RhpcBLASctl)) ){ blas_set_num_threads(1) } # Optional

fit <- qgam(accel~s(times, k=20, bs="ad"), data = mcycle, qu = 0.8)

# Plot the fit
xSeq <- data.frame(cbind("accel" = rep(0, 1e3), "times" = seq(2, 58, length.out = 1e3)))
pred <- predict(fit, newdata = xSeq, se=TRUE)
plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80))
lines(xSeq$times, pred$fit, lwd = 1)
lines(xSeq$times, pred$fit + 2*pred$se.fit, lwd = 1, col = 2)
lines(xSeq$times, pred$fit - 2*pred$se.fit, lwd = 1, col = 2)
```

`qgam` automatically calls `tuneLearnFast` to select the learning rate. The results of the calibration are stored in `fit$calibr`. We can check whether the optimization succeeded as follows:

```{r 2}
check(fit$calibr, 2)
```

The plot suggests that the calibration criterion has a single minimum, and that the optimizer has converged to its neighbourhood. Alternatively, we could have selected the learning rate by evaluating the loss function on a grid.

```{r 3, message = F}
set.seed(6436)
cal <- tuneLearn(accel~s(times, k=20, bs="ad"), data = mcycle, qu = 0.8,
                 lsig = seq(1, 3, length.out = 20), #<- sequence of values for learning rate
                 control = list("progress" = "none"))
check(cal)
```

Here the generic `check` function produces a different output. The first plot is the calibration criterion as a function of $\log(\sigma)$, which should look fairly smooth. The second plot shows how the effective degrees of freedom (EDF) vary with $\log(\sigma)$.
Notice that here we are using an adaptive smoother, which includes five smoothing parameters.

We might want to fit several quantiles at once. This can be done with `mqgam`.

```{r 4}
quSeq <- c(0.2, 0.4, 0.6, 0.8)
set.seed(6436)
fit <- mqgam(accel~s(times, k=20, bs="ad"), data = mcycle, qu = quSeq)
```

To save memory, `mqgam` does not return one `mgcv::gamObject` for each quantile: it avoids storing some redundant data (such as several copies of the design matrix). The output of `mqgam` can be manipulated using the `qdo` function.

```{r 5}
# Plot the data
xSeq <- data.frame(cbind("accel" = rep(0, 1e3), "times" = seq(2, 58, length.out = 1e3)))
plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80))

# Predict each quantile curve and plot
for(iq in quSeq){
  pred <- qdo(fit, iq, predict, newdata = xSeq)
  lines(xSeq$times, pred, col = 2)
}
```

Using `qdo` we can print out the summary for each quantile, for instance:

```{r 6}
# Summary for quantile 0.4
qdo(fit, qu = 0.4, summary)
```

Notice that here the generic function `summary` is calling `summary.gam`, because `summary.qgam` has not been implemented yet. Hence one cannot quite rely on the p-values provided by this function, because they are calculated using results that apply to parametric, not quantile, regression.

Dealing with heteroscedasticity
=======================

Let us simulate some data from a heteroscedastic model.

```{r h1}
set.seed(651)
n <- 2000
x <- seq(-4, 3, length.out = n)
X <- cbind(1, x, x^2)
beta <- c(0, 1, 1)
sigma = 1.2 + sin(2*x)
f <- drop(X %*% beta)
dat <- f + rnorm(n, 0, sigma)
dataf <- data.frame(cbind(dat, x))
names(dataf) <- c("y", "x")

qus <- seq(0.05, 0.95, length.out = 5)
plot(x, dat, col = "grey", ylab = "y")
for(iq in qus){ lines(x, qnorm(iq, f, sigma)) }
```

We now fit five quantiles between 0.05 and 0.95, using a quantile GAM with scalar learning rate.
```{r h2}
fit <- mqgam(y~s(x, k = 30, bs = "cr"), data = dataf, qu = qus)

qus <- seq(0.05, 0.95, length.out = 5)
plot(x, dat, col = "grey", ylab = "y")
for(iq in qus){
  lines(x, qnorm(iq, f, sigma), col = 2)
  lines(x, qdo(fit, iq, predict))
}
legend("top", c("truth", "fitted"), col = 2:1, lty = rep(1, 2))
```

With the exception of `qu = 0.95`, the fitted quantiles are close to the true ones, but their credible intervals don't vary much with x. Indeed, let's look at intervals for quantile 0.95.

```{r h3}
plot(x, dat, col = "grey", ylab = "y")
tmp <- qdo(fit, 0.95, predict, se = TRUE)
lines(x, tmp$fit)
lines(x, tmp$fit + 3 * tmp$se.fit, col = 2)
lines(x, tmp$fit - 3 * tmp$se.fit, col = 2)
```

We can get better credible intervals, and solve the "wiggliness" problem for the top quantile, by letting the learning rate vary with the covariate. In particular, we can use an additive model for the quantile location and one for the learning rate.

```{r h4}
fit <- qgam(list(y~s(x, k = 30, bs = "cr"), ~ s(x, k = 30, bs = "cr")),
            data = dataf, qu = 0.95)

plot(x, dat, col = "grey", ylab = "y")
tmp <- predict(fit, se = TRUE)
lines(x, tmp$fit)
lines(x, tmp$fit + 3 * tmp$se.fit, col = 2)
lines(x, tmp$fit - 3 * tmp$se.fit, col = 2)
```

Now the credible intervals correctly represent the underlying uncertainty, and the fit has the correct amount of smoothness. Neglecting to take the heteroscedasticity into account can lead to bias, in addition to inadequate coverage of the credible intervals.
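A quick numerical check of calibration can complement the plots above: the empirical fraction of responses falling below the fitted quantile should be close to the nominal level. A minimal sketch, reusing the `fit` and `dataf` objects from the previous chunk; the assumption that the first linear predictor holds the fitted quantile (when `predict` returns a matrix) is ours, not a documented guarantee:

```{r h4cov}
# Fraction of responses below the fitted 0.95 quantile; for a well
# calibrated fit this should be close to 0.95. With a list formula,
# predict() may return one column per linear predictor (assumption:
# the first column is the fitted quantile).
pred <- predict(fit)
if( is.matrix(pred) ) pred <- pred[ , 1]
mean(dataf$y < pred)
```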
Let's go back to the motorcycle data set, and to the first model we fitted:

```{r mcy2rnd, message = F}
fit <- qgam(accel~s(times, k=20, bs="ad"), data = mcycle, qu = 0.8)

# Plot the fit
xSeq <- data.frame(cbind("accel" = rep(0, 1e3), "times" = seq(2, 58, length.out = 1e3)))
pred <- predict(fit, newdata = xSeq, se=TRUE)
plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80))
lines(xSeq$times, pred$fit, lwd = 1)
lines(xSeq$times, pred$fit + 2*pred$se.fit, lwd = 1, col = 2)
lines(xSeq$times, pred$fit - 2*pred$se.fit, lwd = 1, col = 2)
```

The slightly disturbing thing about this quantile fit is that for `Times < 10` the fit is clearly above all the responses. But we are fitting quantile 0.8, hence we should expect around 20$\%$ of the responses to be above the fit. The problem here is that the variance of the response (`accel`) varies wildly with `Times`, so that the bias induced by the smoothed pinball loss used by `qgam` is not constant (see Fasiolo et al. 2017 for details). This issue is solved by letting the learning rate change with `Times`:

```{r mcy2rnd2, message = F}
fit <- qgam(list(accel ~ s(times, k=20, bs="ad"), ~ s(times)),
            data = mcycle, qu = 0.8)

pred <- predict(fit, newdata = xSeq, se=TRUE)
plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80))
lines(xSeq$times, pred$fit, lwd = 1)
lines(xSeq$times, pred$fit + 2*pred$se.fit, lwd = 1, col = 2)
lines(xSeq$times, pred$fit - 2*pred$se.fit, lwd = 1, col = 2)
```

Model checking
=======================

The `qgam` package provides some functions that can be useful for model checking, but a more complete set of visualisation and checking tools can be found in the `mgcViz` R package (Fasiolo et al., 2018). In `qgam` we have:

- `cqcheck` if we are fitting, say, quantile 0.2 we expect roughly $20\%$ of the observations to fall below the fitted quantile. This function produces some plots to verify this.
- `cqcheckI` interactive version of `cqcheck`. Implemented using the `shiny` package. Not demonstrated here, but see `?cqcheckI`.
- `check.qgam` provides some diagnostics regarding the optimization. Mainly based on `gam.check`.
- `check.learn` diagnostic checks to verify that the learning rate selection went well. It can be used on the output of `tuneLearn`.
- `check.tuneLearn` similar to `check.learn`, but it can be used on the output of `tuneLearn` or on the `$calibr` slot of a `qgam` object.

We start by illustrating the `cqcheck` function. In particular, let us consider the additive model:
$$
y \sim x+x^2+z+xz/2+e,\;\;\; e \sim N(0, 1)
$$
We start by simulating some data from it.

```{r c1}
library(qgam)
set.seed(15560)
n <- 1000
x <- rnorm(n, 0, 1); z <- rnorm(n)
X <- cbind(1, x, x^2, z, x*z)
beta <- c(0, 1, 1, 1, 0.5)
y <- drop(X %*% beta) + rnorm(n)
dataf <- data.frame(cbind(y, x, z))
names(dataf) <- c("y", "x", "z")
```

We fit a linear model to the median and we use `cqcheck` to produce a diagnostic plot.

```{r c2}
qu <- 0.5
fit <- qgam(y~x, qu = qu, data = dataf)
cqcheck(obj = fit, v = c("x"), X = dataf, y = y)
```

The `cqcheck` function takes a `qgam` object as input and it predicts the conditional quantile using the data in `X`. Then it bins the responses `y` using the corresponding values of `v` and it calculates, for every bin, what fraction of responses falls below the fitted quantile. Given that we are fitting the median, we would expect around $50\%$ of the points to fall below the fit. But, as the plot shows, this fraction varies widely along `x`. There is clearly a non-linear relation between the quantile location and `x`, hence we add a smooth for `x`.

```{r c3, message = F}
fit <- qgam(y~s(x), qu = qu, data = dataf)
cqcheck(obj = fit, v = c("x"), X = dataf, y = y)
```

The deviations from the theoretical quantile ($0.5$) are much reduced, but let's look across both `x` and `z`.
```{r c4, message = F}
cqcheck(obj = fit, v = c("x", "z"), X = dataf, y = y, nbin = c(5, 5))
```

This plot uses binning as before: if a bin is red (green), the fraction of responses falling below the fit is smaller (larger) than 0.5. Bright colours mean that the deviation is statistically significant. As we move along `z` (`x2` in the plot) the colour changes from green to red, so it makes sense to draw a marginal plot for `z`:

```{r c5, message = F}
cqcheck(obj = fit, v = c("z"), X = dataf, y = y, nbin = c(10))
```

We are clearly missing an effect here. Given that the effect looks pretty linear, we simply add a parametric term to the fit, which seems to solve the problem:

```{r c6, message = F}
fit <- qgam(y~s(x)+z, qu = qu, data = dataf)
cqcheck(obj = fit, v = c("z"))
```

But if we look again across both `x` and `z` we see that green prevails along the top-left to bottom-right diagonal, while the other diagonal is mainly red.

```{r c7, message = F}
cqcheck(obj = fit, v = c("x", "z"), nbin = c(5, 5))
```

This suggests that adding an interaction between `x` and `z` might be a good idea. Indeed, now `cqcheck` does not signal any problem:

```{r c8, message = F}
fit <- qgam(y~s(x)+z+I(x*z), qu = qu, data = dataf)
cqcheck(obj = fit, v = c("x", "z"), nbin = c(5, 5))
```

Now that we are fairly satisfied with the model structure, we can, for instance, fit several quantiles by doing:

```{r c9, message = F}
fit <- mqgam(y~s(x)+z+I(x*z), qu = c(0.2, 0.4, 0.6, 0.8), data = dataf)
```

We can then check whether the learning rate was selected correctly. Recall that the `qgam` function internally calls `tuneLearnFast`, hence we can look at how the calibration went by doing:

```{r c10, message = F}
check.learnFast(fit$calibr, 2:5)
```

For each quantile, the calibration loss seems to have a unique minimum, which is what one would hope. Objects of class `qgam` can also be checked using the generic function `check`, which defaults to `check.qgam`.
To use this function on the output of `mqgam`, we must use the `qdo` function:

```{r c11, message = F}
qdo(fit, 0.2, check)
```

The printed output gives some information about the optimizer used to estimate the smoothing parameters, for fixed learning rate. See `?check.qgam` for more information. The plot has been obtained using `cqcheck`, where each data point has been binned using the fitted values. On the right side of the plot there seem to be some large deviations, but the rug shows that there are very few data points there.

Setting the loss-smoothing parameter and checking convergence
=======================

Let's simulate some data:

```{r check1, message = F}
set.seed(5235)
n <- 1000
x <- seq(-3, 3, length.out = n)
X <- cbind(1, x, x^2)
beta <- c(0, 1, 1)
f <- drop(X %*% beta)
dat <- f + rgamma(n, 4, 1)
dataf <- data.frame(cbind(dat, x))
names(dataf) <- c("y", "x")
```

Assume that we want to estimate quantiles 0.05, 0.5 and 0.95:

```{r check2, message = F}
qus <- c(0.05, 0.5, 0.95)
fit <- mqgam(y ~ s(x), data = dataf, qu = qus)

plot(x, dat, col = "grey", ylab = "y")
lines(x, f + qgamma(0.95, 4, 1), lty = 2)
lines(x, f + qgamma(0.5, 4, 1), lty = 2)
lines(x, f + qgamma(0.05, 4, 1), lty = 2)
lines(x, qdo(fit, qus[1], predict), col = 2)
lines(x, qdo(fit, qus[2], predict), col = 2)
lines(x, qdo(fit, qus[3], predict), col = 2)
```

Since `qgam` version 1.3 the parameter `err`, which determines the smoothness of the loss function used by `qgam`, is determined automatically.
But there might be scenarios where you may want to choose it manually, so let's try several values of `err`:
```{r check2b, message = F}
lfit <- lapply(c(0.01, 0.05, 0.1, 0.2, 0.3, 0.5),
               function(.inp){
                 mqgam(y ~ s(x), data = dataf, qu = qus, err = .inp,
                       control = list("progress" = F))
               })

plot(x, dat, col = "grey", ylab = "y", ylim = c(-2, 20))
colss <- rainbow(length(lfit))
for(ii in 1:length(lfit)){
  lines(x, qdo(lfit[[ii]], qus[1], predict), col = colss[ii])
  lines(x, qdo(lfit[[ii]], qus[2], predict), col = colss[ii])
  lines(x, qdo(lfit[[ii]], qus[3], predict), col = colss[ii])
}
lines(x, f + qgamma(0.95, 4, 1), lty = 2)
lines(x, f + qgamma(0.5, 4, 1), lty = 2)
lines(x, f + qgamma(0.05, 4, 1), lty = 2)
```
The bias increases with `err`, and it is upward (downward) for high (low) quantiles. The median fit is not much affected by `err`. The bias really starts appearing for `err > 0.1`. Decreasing `err` tends to slow down computation:
```{r check3, message = F}
system.time( fit1 <- qgam(y ~ s(x), data = dataf, qu = 0.95, err = 0.05,
                          control = list("progress" = F)) )[[3]]
system.time( fit2 <- qgam(y ~ s(x), data = dataf, qu = 0.95, err = 0.001,
                          control = list("progress" = F)) )[[3]]
```
Even worse, it can lead to numerical problems. Here we check that we have found the minimum of the calibration loss:
```{r check4, message = F}
check(fit1$calibr, sel = 2)
check(fit2$calibr, sel = 2)
```
In the first case the loss looks smooth, with a single minimum; in the second case we see some instabilities. If the calibration loss looks like this, you generally have to increase `err`. We can use `check` to obtain an estimate of the bias and information regarding the convergence of the smoothing parameter estimation routine:
```{r check5, message = F}
check(fit1)
```
The second plot suggests that the actual bias is much lower than the bound `err = 0.05`.
This is also supported by the first two lines of text, which say that 95.1\% of the residuals are negative, which is very close to the theoretical 95\%. The text also says that full convergence in smoothing parameter estimation has been achieved; it is important to check this.

In summary, practical experience suggests that:

- the automatic procedure for selecting `err` offers a good compromise between bias and stability;
- the old default (`qgam` version < 1.3) was `err = 0.05`, which generally does not imply too much bias;
- if the calibration loss plotted by `check(fit$calibr)` is irregular, try to increase `err`;
- the same applies if the text printed by `check(fit)` does not say that `full convergence` was achieved;
- you can estimate the bias using `check(fit)`;
- if you have to increase `err` to 0.2 or higher, there might be something wrong with your model;
- you might get messages saying that `outer Newton did not converge fully` during estimation. This might not be problematic as long as the calibration loss is smooth and `full convergence` was achieved;
- in preliminary studies, do not decrease `err` too much, as it slows down computation;
- setting `err` too low is not a good idea: it is much better to have some bias than numerical problems.

Application to probabilistic electricity load forecasting
=======================

Here we consider a UK electricity demand dataset, taken from the national grid [website](https://www.nationalgrid.com/). The dataset covers the period January 2011 to June 2016 and contains the following variables:

- `NetDemand` net electricity demand between 11:30am and 12am.
- `wM` instantaneous temperature, averaged over several English cities.
- `wM_s95` exponential smooth of `wM`, that is `wM_s95[i] = a*wM_s95[i-1] + (1-a)*wM[i]` with `a=0.95`.
- `Posan` periodic index in `[0, 1]` indicating the position along the year.
- `Dow` factor variable indicating the day of the week.
- `Trend` progressive counter, useful for defining the long-term trend.
- `NetDemand.48` lagged version of `NetDemand`, that is `NetDemand.48[i] = NetDemand[i-2]`.
- `Holy` binary variable indicating holidays.
- `Year` and `Date` should be obvious, and are partially redundant.

See [Fasiolo et al., 2017](https://arxiv.org/abs/1707.03307) for more details. This is what the demand over this period looks like:
```{r edf1}
data("UKload")
tmpx <- seq(UKload$Year[1], tail(UKload$Year, 1), length.out = nrow(UKload))
plot(tmpx, UKload$NetDemand, type = 'l', xlab = 'Year', ylab = 'Load')
```
To estimate the median demand, we consider the following model
```{r edf2}
qu <- 0.5
form <- NetDemand~s(wM,k=20,bs='cr') + s(wM_s95,k=20,bs='cr') +
        s(Posan,bs='ad',k=30,xt=list("bs"="cc")) + Dow + s(Trend,k=4) + NetDemand.48 + Holy
```
Notice that we use very few knots for the long-term trend: we don't want to end up interpolating the data. We use an adaptive cyclic smooth for `Posan`; we'll explain later why adaptivity is needed here. Now we tune the learning rate on a grid, using two cores. As the first plot shows, the calibration loss is minimized at $\log (\sigma)\approx 6$; the second plot shows how the effective degrees of freedom of each smooth term change with $\log (\sigma)$.
```{r edf3, message=FALSE}
set.seed(41241)
sigSeq <- seq(4, 8, length.out = 16)
closs <- tuneLearn(form = form, data = UKload,
                   lsig = sigSeq, qu = qu, control = list("K" = 20),
                   multicore = TRUE, ncores = 2)
check(closs)
```
Now let's fit the model with the learning rate corresponding to the lowest loss and look at the resulting smooth effects.
```{r edf4}
lsig <- closs$lsig
fit <- qgam(form = form, data = UKload, lsig = lsig, qu = qu)
plot(fit, scale = F, page = 1)
```
The effect of temperature (`wM`) is minimized around 18 degrees, which is reasonable. The cyclic effect of `Posan` has a very sharp drop corresponding to the winter holidays; we used an adaptive smooth in order to have more flexibility during this period.
Now we can have a look at some diagnostic plots:
```{r edf5}
par(mfrow = c(2, 2))
cqcheck(fit, v = c("wM"), main = "wM")
cqcheck(fit, v = c("wM_s95"), main = "wM_s95")
cqcheck(fit, v = c("Posan"), main = "Posan")
cqcheck(fit, v = c("Trend"), main = "Trend", xaxt='n')
axis(1, at = UKload$Trend[c(1, 500, 1000, 1500, 2000)],
     UKload$Year[c(1, 500, 1000, 1500, 2000)] )
```
The plots for `wM_s95` and `Posan` don't show any important deviation from 0.5, the target quantile. Along `wM` we see a large deviation, but we have essentially no data for very high temperatures. If we look at deviations along the `Trend` variable, which is just a time counter, we see several important deviations. It would be interesting to verify why these occur (we currently have no answer). Finally, recall that we can produce 2D versions of these diagnostic plots, for instance:
```{r edf6}
par(mfrow = c(1, 1))
cqcheck(fit, v = c("wM", "Posan"), scatter = T)
```
References
=======================

* Fasiolo, M., Goude, Y., Nedellec, R. and Wood, S. N. (2017). Fast calibrated additive quantile regression. Available at https://arxiv.org/abs/1707.03307
* Fasiolo, M., Nedellec, R., Goude, Y. and Wood, S. N. (2018). Scalable visualisation methods for modern Generalized Additive Models. Available at https://arxiv.org/abs/1809.10632

qgam: quantile non-parametric additive models

This R package offers methods for fitting additive quantile regression models based on splines, using the methods described in Fasiolo et al., 2017.

The main fitting functions are:

  • qgam() fits an additive quantile regression model to a single quantile. Very similar to mgcv::gam(). It returns an object of class qgam, which inherits from mgcv::gamObject.
  • mqgam() fits the same additive quantile regression model to several quantiles. It is more efficient than calling qgam() several times, especially in terms of memory usage.
  • tuneLearn() useful for tuning the learning rate of the Gibbs posterior. It evaluates a calibration loss function on a grid of values provided by the user.
  • tuneLearnFast() similar to tuneLearn(), but here the learning rate is selected by minimizing the calibration loss, using Brent method.

1 A first example: smoothing the motorcycle dataset

Let's start with a simple example. Here we are fitting a regression model with an adaptive spline basis to quantile 0.8 of the motorcycle dataset.

library(qgam); library(MASS)
if( suppressWarnings(require(RhpcBLASctl)) ){ blas_set_num_threads(1) } # Optional

fit <- qgam(accel~s(times, k=20, bs="ad"), 
            data = mcycle, 
            qu = 0.8)
## Estimating learning rate. Each dot corresponds to a loss evaluation. 
## qu = 0.8............done
# Plot the fit
xSeq <- data.frame(cbind("accel" = rep(0, 1e3), "times" = seq(2, 58, length.out = 1e3)))
pred <- predict(fit, newdata = xSeq, se=TRUE)
plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80))
lines(xSeq$times, pred$fit, lwd = 1)
lines(xSeq$times, pred$fit + 2*pred$se.fit, lwd = 1, col = 2)
lines(xSeq$times, pred$fit - 2*pred$se.fit, lwd = 1, col = 2)   

qgam automatically calls tuneLearnFast to select the learning rate. The results of the calibration are stored in fit$calibr. We can check whether the optimization succeeded as follows:

check(fit$calibr, 2)

The plot suggests that the calibration criterion has a single minimum, and that the optimizer has converged to its neighbourhood. Alternatively, we could have selected the learning rate by evaluating the loss function on a grid.

set.seed(6436)
cal <- tuneLearn(accel~s(times, k=20, bs="ad"), 
                 data = mcycle, 
                 qu = 0.8,
                 lsig = seq(1, 3, length.out = 20), 
                 control = list("progress" = "none")) #<- sequence of values for learning rate
                 
check(cal)

Here the generic check function produces a different output. The first plot is the calibration criterion as a function of \(log(\sigma)\), which should look fairly smooth. The second plot shows how the effective degrees of freedom (EDF) vary with \(log(\sigma)\). Notice that here we are using an adaptive smoother, which includes five smoothing parameters.

We might want to fit several quantiles at once. This can be done with mqgam.

quSeq <- c(0.2, 0.4, 0.6, 0.8)
set.seed(6436)
fit <- mqgam(accel~s(times, k=20, bs="ad"), 
             data = mcycle, 
             qu = quSeq)
## Estimating learning rate. Each dot corresponds to a loss evaluation. 
## qu = 0.4..........done 
## qu = 0.6...........done 
## qu = 0.2............done 
## qu = 0.8..........done

To save memory, mqgam does not return one mgcv::gamObject for each quantile; instead it avoids storing redundant data (such as several copies of the design matrix). The output of mqgam can be manipulated using the qdo function.

# Plot the data
xSeq <- data.frame(cbind("accel" = rep(0, 1e3), "times" = seq(2, 58, length.out = 1e3)))
plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80))

# Predict each quantile curve and plot
for(iq in quSeq){
  pred <- qdo(fit, iq, predict, newdata = xSeq)
  lines(xSeq$times, pred, col = 2)
}

Using qdo we can print out the summary for each quantile, for instance:

# Summary for quantile 0.4
qdo(fit, qu = 0.4, summary)
## 
## Family: elf 
## Link function: identity 
## 
## Formula:
## accel ~ s(times, k = 20, bs = "ad")
## 
## Parametric coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  -31.181      1.832  -17.02   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Approximate significance of smooth terms:
##            edf Ref.df Chi.sq p-value    
## s(times) 8.968  10.35  666.4  <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## R-sq.(adj) =  0.781   Deviance explained = 69.9%
## -REML = 609.58  Scale est. = 1         n = 133

Notice that here the generic function summary is calling summary.gam, because summary.qgam has not been implemented yet. Hence one cannot quite rely on the p-values provided by this function, because they are calculated using results that apply to parametric, not quantile, regression.

2 Dealing with heteroscedasticity

Let us simulate some data from a heteroscedastic model.

set.seed(651)
n <- 2000
x <- seq(-4, 3, length.out = n)
X <- cbind(1, x, x^2)
beta <- c(0, 1, 1)
sigma =  1.2 + sin(2*x)
f <- drop(X %*% beta)
dat <- f + rnorm(n, 0, sigma)
dataf <- data.frame(cbind(dat, x))
names(dataf) <- c("y", "x")
   
qus <- seq(0.05, 0.95, length.out = 5)
plot(x, dat, col = "grey", ylab = "y")
for(iq in qus){ lines(x, qnorm(iq, f, sigma)) }

We now fit five quantiles between 0.05 and 0.95, using a quantile GAM with scalar learning rate.

fit <- mqgam(y~s(x, k = 30, bs = "cr"), 
             data = dataf,
             qu = qus)
## Estimating learning rate. Each dot corresponds to a loss evaluation. 
## qu = 0.5........done 
## qu = 0.725.......done 
## qu = 0.275.........done 
## qu = 0.95................done 
## qu = 0.05...............done
qus <- seq(0.05, 0.95, length.out = 5)
plot(x, dat, col = "grey", ylab = "y")
for(iq in qus){ 
 lines(x, qnorm(iq, f, sigma), col = 2)
 lines(x, qdo(fit, iq, predict))
}
legend("top", c("truth", "fitted"), col = 2:1, lty = rep(1, 2))

With the exception of qu = 0.95, the fitted quantiles are close to the true ones, but their credible intervals don't vary much with x. Indeed, let's look at intervals for quantile 0.95.

plot(x, dat, col = "grey", ylab = "y")
tmp <- qdo(fit, 0.95, predict, se = TRUE)
lines(x, tmp$fit)
lines(x, tmp$fit + 3 * tmp$se.fit, col = 2)
lines(x, tmp$fit - 3 * tmp$se.fit, col = 2)

We can get better credible intervals, and solve the "wiggliness" problem for the top quantile, by letting the learning rate vary with the covariate. In particular, we can use one additive model for the quantile location and one for the learning rate.

fit <- qgam(list(y~s(x, k = 30, bs = "cr"), ~ s(x, k = 30, bs = "cr")), 
            data = dataf, qu = 0.95)
## Estimating learning rate. Each dot corresponds to a loss evaluation. 
## qu = 0.95.........done
plot(x, dat, col = "grey", ylab = "y")
tmp <- predict(fit, se = TRUE)
lines(x, tmp$fit)
lines(x, tmp$fit + 3 * tmp$se.fit, col = 2)
lines(x, tmp$fit - 3 * tmp$se.fit, col = 2)

Now the credible intervals correctly represent the underlying uncertainty, and the fit has the correct amount of smoothness.

Neglecting to take the heteroscedasticity into account can lead to bias, in addition to inadequate coverage of the credible intervals. Let's go back to the motorcycle data set, and to the first model we fitted:

fit <- qgam(accel~s(times, k=20, bs="ad"), 
            data = mcycle, 
            qu = 0.8)
## Estimating learning rate. Each dot corresponds to a loss evaluation. 
## qu = 0.8............done
# Plot the fit
xSeq <- data.frame(cbind("accel" = rep(0, 1e3), "times" = seq(2, 58, length.out = 1e3)))
pred <- predict(fit, newdata = xSeq, se=TRUE)
plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80))
lines(xSeq$times, pred$fit, lwd = 1)
lines(xSeq$times, pred$fit + 2*pred$se.fit, lwd = 1, col = 2)
lines(xSeq$times, pred$fit - 2*pred$se.fit, lwd = 1, col = 2)   

The slightly disturbing thing about this quantile fit is that for Times < 10 the fit is clearly above all the responses. But we are fitting quantile 0.8, hence we should expect around 20\(\%\) of the responses to be above the fit. The problem here is that the variance of the response (accel) varies wildly with Times, so that the bias induced by the smoothed pinball loss used by qgam is not constant (see Fasiolo et al. 2017 for details). This issue is solved by letting the learning rate change with Times:

fit <- qgam(list(accel ~ s(times, k=20, bs="ad"), ~ s(times)),
            data = mcycle, 
            qu = 0.8)
## Estimating learning rate. Each dot corresponds to a loss evaluation. 
## qu = 0.8................done
pred <- predict(fit, newdata = xSeq, se=TRUE)
plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration", ylim = c(-150, 80))
lines(xSeq$times, pred$fit, lwd = 1)
lines(xSeq$times, pred$fit + 2*pred$se.fit, lwd = 1, col = 2)
lines(xSeq$times, pred$fit - 2*pred$se.fit, lwd = 1, col = 2)  
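The bias of the smoothed pinball loss, mentioned above, can be illustrated directly. Below is a minimal sketch comparing the pinball loss with a generic smooth approximation of it; this is an illustration only, not the exact ELF loss that qgam uses internally (see Fasiolo et al., 2017 for the latter). The smoothing parameter `lam` trades smoothness against bias:

```r
# Sketch: pinball loss vs a smooth approximation of it.
# This is NOT qgam's internal ELF loss, just a generic smoothed pinball.
pinball <- function(z, qu) pmax(qu * z, (qu - 1) * z)
smoothPinball <- function(z, qu, lam) (qu - 1) * z + lam * log1p(exp(z / lam))

z <- seq(-3, 3, length.out = 200)
plot(z, pinball(z, qu = 0.8), type = "l", ylab = "loss")
lines(z, smoothPinball(z, qu = 0.8, lam = 0.5), col = 2) # smoother, more bias
lines(z, smoothPinball(z, qu = 0.8, lam = 0.1), col = 3) # closer to pinball
```

As `lam` shrinks, `smoothPinball` approaches the pinball loss; a larger `lam` gives a smoother objective but more bias, which is the trade-off controlled by the `err` parameter discussed later.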

3 Model checking

The qgam package provides some functions that can be useful for model checking, but a more complete set of visualisation and checking tools can be found in the mgcViz R package (Fasiolo et al., 2018). In qgam we have:

  • cqcheck if we are fitting, say, quantile 0.2 we expect roughly \(20\%\) of the observations to fall below the fitted quantile. This function produces some plots to verify this.
  • cqcheckI interactive version of cqcheck. Implemented using the shiny package. Not demonstrated here, but see ?cqcheckI.
  • check.qgam provides some diagnostics regarding the optimization. Mainly based on gam.check.
  • check.learn diagnostic checks to verify that the learning rate selection went well. It can be used on the output of tuneLearn.
  • check.tuneLearn similar to check.learn, but it can be used on the output of tuneLearn or on the $calibr slot of a qgam object.

We start by illustrating the cqcheck function. In particular, let us consider the additive model: \[ y \sim x+x^2+z+xz/2+e,\;\;\; e \sim N(0, 1) \] We start by simulating some data from it.

library(qgam)
set.seed(15560)
n <- 1000
x <- rnorm(n, 0, 1); z <- rnorm(n)
X <- cbind(1, x, x^2, z, x*z)
beta <- c(0, 1, 1, 1, 0.5)
y <- drop(X %*% beta) + rnorm(n) 
dataf <- data.frame(cbind(y, x, z))
names(dataf) <- c("y", "x", "z")

We fit a linear model to the median and we use cqcheck produce a diagnostic plot.

qu <- 0.5
fit <- qgam(y~x, qu = qu, data = dataf)
## Estimating learning rate. Each dot corresponds to a loss evaluation. 
## qu = 0.5.........done
cqcheck(obj = fit, v = c("x"), X = dataf, y = y) 

The cqcheck function takes a qgam object as input, predicts the conditional quantile using the data in X, then bins the responses y using the corresponding values of v and calculates, for every bin, the fraction of responses falling below the fitted quantile. Given that we are fitting the median, we would expect around \(50\%\) of the points to fall below the fit. But, as the plot shows, this fraction varies widely along x. There is clearly a non-linear relation between the quantile location and x, hence we add a smooth for x.
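What cqcheck computes can be replicated by hand. Below is a minimal sketch, using the fit and dataf objects above; the decile-based bin boundaries are an assumption, and cqcheck's exact binning may differ:

```r
# Manual version of the cqcheck computation along x: bin the observations
# and compute, per bin, the fraction of responses below the fitted quantile.
mu <- predict(fit, newdata = dataf)
bins <- cut(dataf$x, breaks = quantile(dataf$x, seq(0, 1, length.out = 11)),
            include.lowest = TRUE)
fracBelow <- tapply(dataf$y < mu, bins, mean)
plot(fracBelow, type = "b", ylab = "Fraction below fit")
abline(h = 0.5, lty = 2)  # target: the quantile being fitted, here 0.5
```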

fit <- qgam(y~s(x), qu = qu, data = dataf)
## Estimating learning rate. Each dot corresponds to a loss evaluation. 
## qu = 0.5........done
cqcheck(obj = fit, v = c("x"), X = dataf, y = y)

The deviations from the theoretical quantile (\(0.5\)) are much reduced, but let's look across both x and z.

cqcheck(obj = fit, v = c("x", "z"), X = dataf, y = y, nbin = c(5, 5))

This plot uses binning as before: if a bin is red (green), the fraction of responses falling below the fit is smaller (larger) than 0.5. Bright colours mean that the deviation is statistically significant. As we move along z (x2 in the plot) the colour changes from green to red, so it makes sense to draw a marginal plot for z:

cqcheck(obj = fit, v = c("z"), X = dataf, y = y, nbin = c(10))

We are clearly missing an effect here. Given that the effect looks fairly linear, we simply add a parametric term to the fit, which seems to solve the problem:

fit <- qgam(y~s(x)+z, qu = qu, data = dataf)
## Estimating learning rate. Each dot corresponds to a loss evaluation. 
## qu = 0.5.........done
cqcheck(obj = fit, v = c("z"))

But if we look again across both x and z we see that green prevails on the top-left to bottom-right diagonal, while the other diagonal is mainly red.

cqcheck(obj = fit, v = c("x", "z"), nbin = c(5, 5))

This suggests that adding an interaction between x and z might be a good idea. Indeed, now cqcheck does not signal any problem:

fit <- qgam(y~s(x)+z+I(x*z), qu = qu, data = dataf)
## Estimating learning rate. Each dot corresponds to a loss evaluation. 
## qu = 0.5........done
cqcheck(obj = fit, v = c("x", "z"), nbin = c(5, 5))

Now that we are fairly satisfied with the model structure, we can, for instance, fit several quantiles by doing:

fit <- mqgam(y~s(x)+z+I(x*z), qu = c(0.2, 0.4, 0.6, 0.8), data = dataf)
## Estimating learning rate. Each dot corresponds to a loss evaluation. 
## qu = 0.4........done 
## qu = 0.6.........done 
## qu = 0.2.......done 
## qu = 0.8...............done

We can then check whether the learning rate was selected correctly. Recall that the qgam function calls tuneLearnFast internally, hence we can look at how the calibration went by doing:

check.learnFast(fit$calibr, 2:5)

For each quantile, the calibration loss seems to have a unique minimum, which is what one would hope. Objects of class qgam can also be checked using the generic function check, which defaults to check.qgam. To use this function on the output of mqgam, we must use the qdo function:

qdo(fit, 0.2, check)

## Theor. proportion of neg. resid.: 0.2   Actual proportion: 0.198
## Integrated absolute bias |F(mu) - F(mu0)| = 0.01025554
## Method: REML   Optimizer: outer newton
## full convergence after 4 iterations.
## Gradient range [0.00075606,0.00075606]
## (score 1658.424 & scale 1).
## Hessian positive definite, eigenvalue range [3.83483,3.83483].
## Model rank =  12 / 12 
## 
## Basis dimension (k) check: if edf is close to k' (maximum possible edf) 
## it might be worth increasing k. 
## 
##      k'  edf
## s(x)  9 7.38
## NULL

The printed output gives some information about the optimizer used to estimate the smoothing parameters, for fixed learning rate. See ?check.qgam for more information. The plot has been obtained using cqcheck, where each data point has been binned using the fitted values. On the right side of the plot there seem to be some large deviations, but the rug shows that there are very few data points there.

4 Setting the loss-smoothing parameter and checking convergence

Let's simulate some data:

set.seed(5235)
n <- 1000
x <- seq(-3, 3, length.out = n)
X <- cbind(1, x, x^2)
beta <- c(0, 1, 1)
f <- drop(X %*% beta)
dat <- f + rgamma(n, 4, 1)
dataf <- data.frame(cbind(dat, x))
names(dataf) <- c("y", "x")

Assume that we want to estimate quantiles 0.05, 0.5 and 0.95:

qus <- c(0.05, 0.5, 0.95)
fit <- mqgam(y ~ s(x), data = dataf, qu = qus)
## Estimating learning rate. Each dot corresponds to a loss evaluation. 
## qu = 0.5........done 
## qu = 0.95........done 
## qu = 0.05.................done
plot(x, dat, col = "grey", ylab = "y")
lines(x, f + qgamma(0.95, 4, 1), lty = 2)
lines(x, f + qgamma(0.5, 4, 1), lty = 2)
lines(x, f + qgamma(0.05, 4, 1), lty = 2)
lines(x, qdo(fit, qus[1], predict), col = 2)
lines(x, qdo(fit, qus[2], predict), col = 2)
lines(x, qdo(fit, qus[3], predict), col = 2)

Since qgam version 1.3 the parameter err, which determines the smoothness of the loss function used by qgam, is determined automatically. But there might be scenarios where you may want to choose it manually, so let's try several values of err:

lfit <- lapply(c(0.01, 0.05, 0.1, 0.2, 0.3, 0.5),
               function(.inp){
                 mqgam(y ~ s(x), data = dataf, qu = qus, err = .inp,
                       control = list("progress" = F))
               })

plot(x, dat, col = "grey", ylab = "y", ylim = c(-2, 20))
colss <- rainbow(length(lfit))
for(ii in 1:length(lfit)){
  lines(x, qdo(lfit[[ii]], qus[1], predict), col = colss[ii])
  lines(x, qdo(lfit[[ii]], qus[2], predict), col = colss[ii])
  lines(x, qdo(lfit[[ii]], qus[3], predict), col = colss[ii])
}
lines(x, f + qgamma(0.95, 4, 1), lty = 2)
lines(x, f + qgamma(0.5, 4, 1), lty = 2)
lines(x, f + qgamma(0.05, 4, 1), lty = 2)

The bias increases with err, and it is upward (downward) for high (low) quantiles. The median fit is not much affected by err. The bias really starts appearing for err > 0.1. Decreasing err tends to slow down computation:

system.time( fit1 <- qgam(y ~ s(x), data = dataf, qu = 0.95, err = 0.05,
                           control = list("progress" = F)) )[[3]]
## [1] 0.341
system.time( fit2 <- qgam(y ~ s(x), data = dataf, qu = 0.95, err = 0.001,
                           control = list("progress" = F)) )[[3]]
## [1] 32.723

Even worse, it can lead to numerical problems. Here we check that we have found the minimum of the calibration loss:

check(fit1$calibr, sel = 2)

check(fit2$calibr, sel = 2)

In the first case the loss looks smooth, with a single minimum; in the second case we see some instabilities. If the calibration loss looks like this, you generally have to increase err.

We can use check to have an estimate of the bias and to have information regarding the convergence of the smoothing parameter estimation routine:

check(fit1)

## Theor. proportion of neg. resid.: 0.95   Actual proportion: 0.951
## Integrated absolute bias |F(mu) - F(mu0)| = 0.001017127
## Method: REML   Optimizer: outer newton
## full convergence after 5 iterations.
## Gradient range [4.450199e-08,4.450199e-08]
## (score 3438.722 & scale 1).
## Hessian positive definite, eigenvalue range [1.428662,1.428662].
## Model rank =  10 / 10 
## 
## Basis dimension (k) check: if edf is close to k' (maximum possible edf) 
## it might be worth increasing k. 
## 
##      k'  edf
## s(x)  9 5.05

The second plot suggests that the actual bias is much lower than the bound err = 0.05. This is also supported by the first two lines of text, which say that 95.1% of the residuals are negative, which is very close to the theoretical 95%. The text also says that full convergence in smoothing parameter estimation has been achieved; it is important to check this.
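The reported proportion can be reproduced directly from the fitted object. A quick sketch, assuming the fit1 object from above (residuals works here because qgam objects inherit from mgcv's gamObject):

```r
# Proportion of observations falling below the fitted 0.95 quantile;
# if the bias is small, this should be close to the target quantile.
mean(residuals(fit1, type = "response") < 0)
```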

In summary, practical experience suggests that:

  • the automatic procedure for selecting err offers a good compromise between bias and stability;
  • the old default (qgam version < 1.3) was err = 0.05, which generally does not imply too much bias;
  • if the calibration loss plotted by check(fit$calibr) is irregular, try to increase err;
  • the same applies if the text printed by check(fit) does not say that full convergence was achieved;
  • you can estimate the bias using check(fit);
  • if you have to increase err to 0.2 or higher, there might be something wrong with your model;
  • you might get messages saying that outer Newton did not converge fully during estimation. This might not be problematic as long as the calibration loss is smooth and full convergence was achieved;
  • in preliminary studies do not decrease err too much, as it slows down computation;
  • setting err too low is not a good idea: it is much better to have some bias than numerical problems.

5 Application to probabilistic electricity load forecasting

Here we consider a UK electricity demand dataset, taken from the national grid website. The dataset covers the period January 2011 to June 2016 and it contains the following variables:

  • NetDemand net electricity demand between 11:30am and 12am.
  • wM instantaneous temperature, averaged over several English cities.
  • wM_s95 exponential smooth of wM, that is wM_s95[i] = a*wM_s95[i-1] + (1-a)*wM[i] with a=0.95.
  • Posan periodic index in [0, 1] indicating the position along the year.
  • Dow factor variable indicating the day of the week.
  • Trend progressive counter, useful for defining the long term trend.
  • NetDemand.48 lagged version of NetDemand, that is NetDemand.48[i] = NetDemand[i-2].
  • Holy binary variable indicating holidays.
  • Year and Date should be obvious, and are partially redundant.
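The exponential smooth used for wM_s95 is easy to recompute from scratch. Below is a minimal sketch of such a recursion; the weighting convention and the initialisation wM_s95[1] = wM[1] are assumptions:

```r
# Exponential smoothing: each value is a weighted average of the previous
# smoothed value and the new observation; a close to 1 means heavy smoothing.
expSmooth <- function(w, a = 0.95){
  out <- numeric(length(w))
  out[1] <- w[1]                 # initialisation (assumed)
  for(i in 2:length(w)) out[i] <- a * out[i - 1] + (1 - a) * w[i]
  out
}
```

Applied to UKload$wM, a recursion of this kind would produce a series like wM_s95.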

See Fasiolo et al., 2017 for more details. This is what the demand over this period looks like:

data("UKload")
tmpx <- seq(UKload$Year[1], tail(UKload$Year, 1), length.out = nrow(UKload)) 
plot(tmpx, UKload$NetDemand, type = 'l', xlab = 'Year', ylab = 'Load')

To estimate the median demand, we consider the following model

qu <- 0.5
form <- NetDemand~s(wM,k=20,bs='cr') + s(wM_s95,k=20,bs='cr') + 
        s(Posan,bs='ad',k=30,xt=list("bs"="cc")) + Dow + s(Trend,k=4) + NetDemand.48 + Holy

Notice that we use very few knots for the long-term trend: we don't want to end up interpolating the data. We use an adaptive cyclic smooth for Posan; we'll explain later why adaptivity is needed here.

Now we tune the learning rate on a grid, using two cores. As the first plot shows, the calibration loss is minimized at \(\log (\sigma)\approx 6\); the second plot shows how the effective degrees of freedom of each smooth term change with \(\log (\sigma)\).

set.seed(41241)
sigSeq <- seq(4, 8, length.out = 16)
closs <- tuneLearn(form = form, data = UKload, 
                   lsig = sigSeq, qu = qu, control = list("K" = 20), 
                   multicore = TRUE, ncores = 2)

check(closs)

Now let's fit the model with the learning rate corresponding to the lowest loss and let's look at the resulting smooth effects.

lsig <- closs$lsig
fit <- qgam(form = form, data = UKload, lsig = lsig, qu = qu)
plot(fit, scale = F, page = 1)

The effect of temperature (wM) is minimized around 18 degrees, which is reasonable. The cyclic effect of Posan has a very sharp drop corresponding to the winter holidays; we used an adaptive smooth in order to have more flexibility during this period. Now we can have a look at some diagnostic plots:

par(mfrow = c(2, 2))
cqcheck(fit, v = c("wM"), main = "wM")
cqcheck(fit, v = c("wM_s95"), main = "wM_s95")
cqcheck(fit, v = c("Posan"), main = "Posan")
cqcheck(fit, v = c("Trend"), main = "Trend", xaxt='n')
axis(1, at = UKload$Trend[c(1, 500, 1000, 1500, 2000)], 
             UKload$Year[c(1, 500, 1000, 1500, 2000)] )

The plots for wM_s95 and Posan don't show any important deviation from 0.5, the target quantile. Along wM we see a large deviation, but we have essentially no data for very high temperatures. If we look at deviations along the Trend variable, which is just a time counter, we see several important deviations. It would be interesting to verify why these occur (we currently have no answer).

Finally, recall that we can produce 2D versions of these diagnostic plots, for instance:

par(mfrow = c(1, 1))
cqcheck(fit, v = c("wM", "Posan"), scatter = T)

6 References

  • Fasiolo, M., Goude, Y., Nedellec, R. and Wood, S. N. (2017). Fast calibrated additive quantile regression. Available at https://arxiv.org/abs/1707.03307

  • Fasiolo, M., Nedellec, R., Goude, Y. and Wood, S.N. (2018). Scalable visualisation methods for modern Generalized Additive Models. Available at https://arxiv.org/abs/1809.10632

v = c("x", "z"), nbin = c(5, 5)) ## ----c9, message = F---------------------------------------------------------- fit <- mqgam(y~s(x)+z+I(x*z), qu = c(0.2, 0.4, 0.6, 0.8), data = dataf) ## ----c10, message = F--------------------------------------------------------- check.learnFast(fit$calibr, 2:5) ## ----c11, message = F--------------------------------------------------------- qdo(fit, 0.2, check) ## ----check1, message = F------------------------------------------------------ set.seed(5235) n <- 1000 x <- seq(-3, 3, length.out = n) X <- cbind(1, x, x^2) beta <- c(0, 1, 1) f <- drop(X %*% beta) dat <- f + rgamma(n, 4, 1) dataf <- data.frame(cbind(dat, x)) names(dataf) <- c("y", "x") ## ----check2, message = F------------------------------------------------------ qus <- c(0.05, 0.5, 0.95) fit <- mqgam(y ~ s(x), data = dataf, qu = qus) plot(x, dat, col = "grey", ylab = "y") lines(x, f + qgamma(0.95, 4, 1), lty = 2) lines(x, f + qgamma(0.5, 4, 1), lty = 2) lines(x, f + qgamma(0.05, 4, 1), lty = 2) lines(x, qdo(fit, qus[1], predict), col = 2) lines(x, qdo(fit, qus[2], predict), col = 2) lines(x, qdo(fit, qus[3], predict), col = 2) ## ----check2b, message = F----------------------------------------------------- lfit <- lapply(c(0.01, 0.05, 0.1, 0.2, 0.3, 0.5), function(.inp){ mqgam(y ~ s(x), data = dataf, qu = qus, err = .inp, control = list("progress" = F)) }) plot(x, dat, col = "grey", ylab = "y", ylim = c(-2, 20)) colss <- rainbow(length(lfit)) for(ii in 1:length(lfit)){ lines(x, qdo(lfit[[ii]], qus[1], predict), col = colss[ii]) lines(x, qdo(lfit[[ii]], qus[2], predict), col = colss[ii]) lines(x, qdo(lfit[[ii]], qus[3], predict), col = colss[ii]) } lines(x, f + qgamma(0.95, 4, 1), lty = 2) lines(x, f + qgamma(0.5, 4, 1), lty = 2) lines(x, f + qgamma(0.05, 4, 1), lty = 2) ## ----check3, message = F------------------------------------------------------ system.time( fit1 <- qgam(y ~ s(x), data = dataf, qu = 0.95, err = 0.05, control = list("progress" = F)) )[[3]] 
system.time( fit2 <- qgam(y ~ s(x), data = dataf, qu = 0.95, err = 0.001, control = list("progress" = F)) )[[3]] ## ----check4, message = F------------------------------------------------------ check(fit1$calibr, sel = 2) check(fit2$calibr, sel = 2) ## ----check5, message = F------------------------------------------------------ check(fit1) ## ----edf1--------------------------------------------------------------------- data("UKload") tmpx <- seq(UKload$Year[1], tail(UKload$Year, 1), length.out = nrow(UKload)) plot(tmpx, UKload$NetDemand, type = 'l', xlab = 'Year', ylab = 'Load') ## ----edf2--------------------------------------------------------------------- qu <- 0.5 form <- NetDemand~s(wM,k=20,bs='cr') + s(wM_s95,k=20,bs='cr') + s(Posan,bs='ad',k=30,xt=list("bs"="cc")) + Dow + s(Trend,k=4) + NetDemand.48 + Holy ## ----edf3, message=FALSE------------------------------------------------------ set.seed(41241) sigSeq <- seq(4, 8, length.out = 16) closs <- tuneLearn(form = form, data = UKload, lsig = sigSeq, qu = qu, control = list("K" = 20), multicore = TRUE, ncores = 2) check(closs) ## ----edf4--------------------------------------------------------------------- lsig <- closs$lsig fit <- qgam(form = form, data = UKload, lsig = lsig, qu = qu) plot(fit, scale = F, page = 1) ## ----edf5--------------------------------------------------------------------- par(mfrow = c(2, 2)) cqcheck(fit, v = c("wM"), main = "wM") cqcheck(fit, v = c("wM_s95"), main = "wM_s95") cqcheck(fit, v = c("Posan"), main = "Posan") cqcheck(fit, v = c("Trend"), main = "Trend", xaxt='n') axis(1, at = UKload$Trend[c(1, 500, 1000, 1500, 2000)], UKload$Year[c(1, 500, 1000, 1500, 2000)] ) ## ----edf6--------------------------------------------------------------------- par(mfrow = c(1, 1)) cqcheck(fit, v = c("wM", "Posan"), scatter = T) qgam/inst/CITATION0000644000176200001440000000157614146701663013332 0ustar liggesusersbibentry(bibtype = "Article", title = "{qgam}: {B}ayesian Nonparametric Quantile 
Regression Modeling in {R}", author = c(person(given = "Matteo", family = "Fasiolo", email = "matteo.fasiolo@gmail.com"), person(given = c("Simon", "N."), family = "Wood"), person(given = "Margaux", family = "Zaffran"), person(given = "Rapha\\\"el", family = "Nedellec"), person(given = "Yannig", family = "Goude")), journal = "Journal of Statistical Software", year = "2021", volume = "100", number = "9", pages = "1--31", doi = "10.18637/jss.v100.i09", header = "To cite qgam in publications use:" )