numDeriv/0000755000175100001440000000000013476161015012065 5ustar hornikusersnumDeriv/po/0000755000175100001440000000000012434225535012504 5ustar hornikusersnumDeriv/po/R-numDeriv.pot0000644000175100001440000000232412434225535015221 0ustar hornikusersmsgid "" msgstr "" "Project-Id-Version: R 3.0.2\n" "Report-Msgid-Bugs-To: bugs.r-project.org\n" "POT-Creation-Date: 2014-11-22 15:23\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=CHARSET\n" "Content-Transfer-Encoding: 8bit\n" msgid "Richardson method for hessian assumes a scalar valued function." msgstr "" msgid "method not implemented." msgstr "" msgid "BUG! should not get here." msgstr "" msgid "The current code assumes v is 2 (the default)." msgstr "" msgid "Non-NULL argument 'side' should have the same length as x" msgstr "" msgid "Non-NULL argument 'side' should have values NA, +1, or -1." msgstr "" msgid "grad assumes a scalar valued function." msgstr "" msgid "method 'complex' does not support non-NULL argument 'side'." msgstr "" msgid "function does not accept complex argument as required by method 'complex'." msgstr "" msgid "function does not return a complex value as required by method 'complex'." msgstr "" msgid "function returns NA at" msgstr "" msgid "distance from x." msgstr "" msgid "indicated method" msgstr "" msgid "not supported." msgstr "" numDeriv/po/R-ko.po0000644000175100001440000000316112434225535013655 0ustar hornikusers# This file is distributed under the same license as the R numDeriv package. # Maintainer: Paul Gilbert # Korean translation for R numDeriv package # Contributor: Chel Hee Lee , 2014. # Copyright: 2006-2011, Bank of Canada. 2012-2014, Paul Gilbert # msgid "" msgstr "" "Project-Id-Version: R numDeriv 2014.2-1\n" "Report-Msgid-Bugs-To: http://optimizer.r-forge.r-project.org/\n" "POT-Creation-Date: 2014-11-22 15:23\n" "PO-Revision-Date: 2014-11-22 15:24-0600\n" "Last-Translator: Chel Hee Lee \n" "Language-Team: Chel Hee Lee\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "Language: ko\n" "Plural-Forms: nplurals=1; plural=0;\n" msgid "Richardson method for hessian assumes a scalar valued function." msgstr "" msgid "method not implemented." msgstr "" msgid "BUG! should not get here." msgstr "" msgid "The current code assumes v is 2 (the default)." msgstr "" msgid "Non-NULL argument 'side' should have the same length as x" msgstr "" msgid "Non-NULL argument 'side' should have values NA, +1, or -1." msgstr "" msgid "grad assumes a scalar valued function." msgstr "" msgid "method 'complex' does not support non-NULL argument 'side'." msgstr "" msgid "" "function does not accept complex argument as required by method 'complex'." msgstr "" msgid "" "function does not return a complex value as required by method 'complex'." msgstr "" msgid "function returns NA at" msgstr "" msgid "distance from x." msgstr "" msgid "indicated method" msgstr "" msgid "not supported." 
msgstr "" numDeriv/inst/0000755000175100001440000000000012267353530013044 5ustar hornikusersnumDeriv/inst/doc/0000755000175100001440000000000013475450114013607 5ustar hornikusersnumDeriv/inst/doc/Guide.Stex0000644000175100001440000000307712267353530015522 0ustar hornikusers\documentclass[english]{article} \begin{document} %\VignetteIndexEntry{numDeriv Guide} \SweaveOpts{eval=TRUE,echo=TRUE,results=hide,fig=FALSE} \begin{Scode}{echo=FALSE,results=hide} options(continue=" ") \end{Scode} \section{Functions to calculate Numerical Derivatives and Hessian Matrix} In R, the functions in this package are made available with \begin{Scode} library("numDeriv") \end{Scode} The code from the vignette that generates this guide can be loaded into an editor with \emph{edit(vignette("Guide", package="numDeriv"))}. This uses the default editor, which can be changed using \emph{options()}. Here are some examples of grad. \begin{Scode} grad(sin, pi) grad(sin, (0:10)*2*pi/10) func0 <- function(x){ sum(sin(x)) } grad(func0 , (0:10)*2*pi/10) func1 <- function(x){ sin(10*x) - exp(-x) } curve(func1,from=0,to=5) x <- 2.04 numd1 <- grad(func1, x) exact <- 10*cos(10*x) + exp(-x) c(numd1, exact, (numd1 - exact)/exact) x <- c(1:10) numd1 <- grad(func1, x) exact <- 10*cos(10*x) + exp(-x) cbind(numd1, exact, (numd1 - exact)/exact) \end{Scode} Here are some examples of jacobian. \begin{Scode} func2 <- function(x) c(sin(x), cos(x)) x <- (0:1)*2*pi jacobian(func2, x) \end{Scode} Here are some examples of hessian. \begin{Scode} x <- 0.25 * pi hessian(sin, x) fun1e <- function(x) sum(exp(2*x)) x <- c(1, 3, 5) hessian(fun1e, x, method.args=list(d=0.01)) \end{Scode} Here are some examples of genD. \begin{Scode} func <- function(x){c(x[1], x[1], x[2]^2)} z <- genD(func, c(2,2,5)) z \end{Scode} \end{document} numDeriv/inst/doc/Guide.pdf0000644000175100001440000015370313475450114015350 0ustar hornikusers%PDF-1.5 % 3 0 obj << /Length 1222 /Filter /FlateDecode >> stream xXK4ϯ89;Y0,Hp@}[)Iqγ_b!z|# iJL ɉ#EDy i'!\!H@U֪ :ɚNHA!$IVg2{j_6aq+"_,Fߺf4z}WIRA4b&7FF3e1 a- UðԂ*\OE`6"N<|k yYf9t7Ia3啴JtMU:k>ȫug#ng軖*A^0`-܄OØ+zWpiyL w|aBYc9#W(!O W 8" 4eL'877r=BwV۲E}%o<)!'gm_"K!U=F.gÀ ,/p' 8#8qꠌ5؏+!mhXHXֺKbtCk$xp7pQMl>dqv;ց1zqчn@S7d]4G/CR|?) endstream endobj 14 0 obj << /Length1 1664 /Length2 9465 /Length3 0 /Length 10534 /Filter /FlateDecode >> stream xڍT6LwI tt0 0 % *- JHtI}xι{[3Ϯ}`a瑳@0~^>q OO pab( "と <~q~Qq>>p8@h0;wxv0_LLp `M" ЇJ.*z{{\y{in7q #@.Fa8@t>>Q1 ;0um~! {<| '$ߎF8[(`p`=ȏ/aξb2_#픗y<~~A10 耠O*݇sO^iw.-r!n'~,?BT;Uۑ~? ?A[ R!<}];}PweVvS. /3сC?-~>=lp.wI%n{E  %aa?:B|P1 {< 8oT A(z=@ 17=_ \`C[/|/(z >/Ї?PࡰN@0,,X~Y#Gͳ9&e83uWĹ\G %3E[z̨ݶ7Vzm8{ru}tش<[oݞ8 wy>!'U+29U-{S6ogR<͒gfÃЇp3q=Z2Nx"U n*f*SJ:3IVt59wj,g;AnB$ˆ׆)wB7&#,9ĹuSV8۷~ӯa2N:Q=/`tDωSNտM0D< Wa5;5x/m0D;w`ȑk<_E|$LS?8Z}=p̢֕,ꜛgmd2!Q\ mX:I@?,u H&r(KXtVg.G*y|Ag)1sդ|dryR\Vmdr0͆ЦsO]yf̣9P0lµ7aDFx巺_w\9Mkh;1+iXʅ[$iXB)tO (X\jWTN$VY JW>"^ @HW! x%pٞݺ{ÏxVOڴȊxuSd$ &dV V+w^RJV&IZb³+&Bem$ x& 5J;-LH)[We,8s6OЈ?mu=Okh4oB`'糫6LQq`/=7'<1CL1A#@gnJUC:(= VZ}Z:ׅ#11:̔NCnKG @8,QϔY!Wa;ZHG$+ T) O?喼FZT8T2wUDE0"ƈV`qq#KJTyd )}Śy.7ꇝz?i%?CHe|q>쪰o#)>gs!ٕ(p@`ב—S <b$ jBXcv$3եԘuStSC}L&h4ղBQ:{q~u|pY'[ϸ?/ێE.?'>xW1V0oa+M9`;-T>Fm}tJG{ra,!7ŞƷϼA؝[ޘ G1m,xE< &W_dI[kɬԤ6~Z e{k4& ,:jUXE0VlnSC/f.QCy :/5mW33th !xc `ИW \ Y_kM&UC;A9qǞxz-qNҭlO?㭈Mr\u]{"u=&԰W pJ!4P0G؊1ֺ$lůB԰;S ‰F@ K=,B;(+VR½=r_%3HsAaznǮ ss2:IA16T>q0٣إ A"dmżDX7Ӏ4JT,[.Ov3ZJeARZư.Mka!+MUFTUZ_ :Y⎿0qHlEQw_-2>}3^M0M h.'*1eS{xF){̯v뜫;8SSF`q&zSxK';n%``Aoyh>PC0g[%8fkN&H+ BM,Vc0ĞϚq5")H|ԡ,Qޱ{$oK2 Q P k}=q5^H eкXv9R\ 3O+( {XW;fÞ|g"qå;[ KOƭ". 
Qj+>gj _̿-hfzT]͌ N;JldC![$u;!ˑ =K8X w+<yapȌ  w._V ok)qdoC"` haYybOb藴= |oģgɉ)Բd-T}~vu?W+dKyXYQLQi:nxOJ9G>pҌ1‡-n晩h| =o8%OlF@,mAI6;{~CZo"ϹUPmgƛ5FAd7:Y>w2/g+S 11 8AM'ш6ٜY]^7 ?'4rܟG+3 ^)J%aEDC9lJxl)xϏϗQ  ݀I}@ eov 6:EU{Hȧ?$l=˹[V tml5y4%BlOndžT;fJ/G6Ohcݬ,cIb#"5k:/.GEQhF$,K{D5A˼oJ!"+ysBOjGHJ E.GY^6Ͳ|BCE6e"h5(~ [ɉ*$pYmet1|3Q\g ^ EMhF ^9ZFI*{,jCV"Iڝ'V[.R^&!TZSc巷vV ׼"*8{iolM~&5%( 1&nqv-%ŀKC^*K%Q Ngޘ-kbXzQ.g*UƉ;IwihK7>/J Y̮S/:}pih~PWSԴh{~=}1zƴxE$C#yS2H/!Cf_ ⃰gm;Q']"jOTf̈H}eEMrb1wmDJUԀ́ t hurMT#Uʺy+q)M:c[^-Y:'唋"#=)aYz72e^bV^G.4`"ihթJq:@y_sm88]Cbe[t)\@ɥauO_ӑ.9];hC{w+vј L;ٜM"g7떚|Ul[4hEjY&sgm\??4Vz>dk"RUˏiSNi"RTStr[pAND,KW;Y H[' DNTTA^(9i#c h tI8OJi〜TG0VsRb~m_'C~ꪆA;_<X$eռuh/,lkѧ[hZwnhq!jqcҦ-9 Ϝ;˶ vO@C&аqҴ4s&k}FމFQbB9}d @eOmJ;4s)mTXTXhE 4&XhV#OjKFQI[Z~lOI`oUωO2<;_[~HVLͻRZ7q϶#_؇ɤƲ?}k>=[mp'yvt!y Ȣ>"wR~*eWŃ ^`XE@SJ_eFX)&V͋1\v5rҗO9 n%1-l)!t\p-&5@SC6Oḿr3xCGlQPs dQj_r)?O~kl%`V:)qf0\y\=#+׮]يA㪊s˔藆bkV< ?:E^. Ӗ` )<.*xmPg^Bh/+](/;2Eг$٥:U$>spGrsX8Cx+A顆A r{>q1E*2=833dRŭW/UJ0 j dTeUOO~dSbU'&n+`qBY[n 0Q2\Y kcrv}fH'F֦/!l/=ϘS߭Q|m I4&4EVJ+coTp>PbٟJ^P :ZO5.gŇFvD>EG4@Fe)}M3 uky};&IQO(cXT_8f`v dּTP?ϜdGlKsЁq# ӯD4 8Y{Mb:`"/\VzOXZcJ?7I$9.3x! IYn V'؈Mo%ԾGf^sИbr4+L%!}VE<P[4Jmeo^"j9qmZn5dz:Dx[ιՐؒ*D)Mq߷`E'}uنr.@: wtga;F땑 D r!D8EwN\#)W1[`N ,Gb6qR?;}3֛qT"sXMK[V9Xl@zOWd|?Ɔ;&檨6buo3쓥0K#t*>]wZqt1qxVM&B{yeŌzkQ.t!1HctJc)*[z#8t򙒴k6Yľ>+$1Q5~\hBw6_IqGv1j,wa .KGR skv@Zh!&lr݂4.'<Q :Nwu4kl2ȉ6)V,ZcRS\S}?#iTKs 7EIᮺm}_Dlkgl- gh5$( q?lTJ8N5U#;;5}'D%K͒>b/Y,Txc2lLl rFLR@X+{ħ)].CM+jvab@0|&TS8a]\<,QrOV`f57Řۏ:oG4^PJrM ؑyD,<>CԢ2Wfnh`#"i݈*B)HQ$jF _?HOL5 D(4'.?~ <_p{f5O~,VEcKSy|ўdH\5fgj5@{˽z/ͩ[PqFk5%/atfk;=ϲ! \_ךDWv'ڒD hAy!wa!c$EAѡ^kDڑ)ZhN<-(%ftɃO1]'>m^x [5h#;75zmS5ߝoтP9/ZH ];WO?( Q; ,WUƉqhݲ0a7H$gc2mɄ]72=]'mo VԜvĞ- gDخ u}z|9bQ+ /StCm endstream endobj 16 0 obj << /Length1 1814 /Length2 13996 /Length3 0 /Length 15138 /Filter /FlateDecode >> stream xڍP\-Ӹww@NAkp !kwϹWW]{ik̵\ULEI (rebcfH(XY9YY4l\.6 H8M]mL]@97;?6Ef@%lcehl||<6 5}EsS; J Z:xxx0ڻ0;8[ 1KMX9hl܀y7!cXYYy8@'ܚ^?l;qtpX7!n@?;!,l]f@+?@˿;xYߵ`7wyY8 |YԴe?>qqO'+x_4+ t}{8]K]@?"7`b5?Kϔ?Q&MHO7wѺw6UZظo 6.R6@ Ws])U\lVLG|epy. s?F`l~ >@?E `a9X:8#qܼD<E?ݧhXLA5A| {?fdߗ|d/^_ b/ `|g_ݿ;OY9{/;K:k_m3u|g//_@wR71 4GX^p0~_#F7!4KF:3xVpڎ$oz؈$ն'g6)~bx"& }'_-?rxQT1==VTfb5c Qeø2c\zș|#K`@;(b0^҅OG clG0Ewѧhm`ɳE41u =F-[>ɶJC8T[ڄeL]fZH|i)~miJ7SCֵ> lywZ` Omn&/BdF/iʟA ]:ѥ Bc\q m&|%H]ʪ >FtΕ{ٯRsk}0-[Ge!5If8m^d8U2˭J^ \KWEҽ{UQ^|Zm\dL AC!a}|۷FGZßj}T rQN(ۙ78w'oCz*~`,@ES/$l&P@`Hh럩2W!);^ 5)zgwFw(}* q5b!wl DZF5ohzկf 7OƠ)`;8X'Ne4 E Xo IٶYqBU.7~ tSn~<3) ,'kSMM8[NQYeN0!z4؎< +5L<0ܵAZSS]' 2I7,#(RVR\qWꑔoA&Y$lpomGMK֗Gp 55"&"7<)'8c_:ox=U9o\'eVP[؎8jJA|ũq>=wGoNBethںϩNa|'ۚ_sQW=oMI\~(Z1ý`p pqJ΁/t1+ϼ&wcdS@>ǜotUk%Hɾ_O v\ %‰ۅI"mvOC^'2B]6-!q!_gdݺygSwT}?=~nd7t(TZ۟K7-XyqwlKB@\ԥL'5%|xBC4c/J >46Hwy{lh.!rc>&JJ53*4hG'}yII,{:T$IXpι0$ 6|Fi)ZtAR0p`i`>jK}ڗfbƘ 蛯D ^o܁幵Fw٤pl'y,Y`RWɍ^$;樊Sz/oޓ1R0ve=rU@%-8mq"-x5U|Ƽoj88EmRe*Ẏ PӁPx:Z踃7Ts5T~> G_WBpCcL|w Ƣ2*59Xzl$eq f>ecA8ZZKy㑳(U[6 $!aV9BB[R[RyfOi ZAhVf*cɚSG ~X|x'tV/)d{~F,sF*/ qW(G)`J$dEd!'p-(w2^YL o=<)`c.^wofyL{ej/+5Hfy ,Gw[Lh>Z?׳e{ݕM̭, QlY@}#ZZx$&0}<\ѠCw[hKj|0 (x%EQ -]92㍰%cp4C[f'kK0>6a&>!Sw t!?^7Gn,ۥfgqOؠf]J >2 Q^Js5๳]sIvAR0K1J7Q[?,^+z+ў7ۣcbAt 6H!KHi[ou Qa&MH9tmOPwOspߩ-źEӘT :h?+%Ln" A4"^f(Fܧc׾6pgVG\92K/aRlUZ 9lPlF*H pka\hRNQ*W Ǡp\<L ^DT} aB \S25rV>kU&{0+U[Jf~ Ous( HfnմQF9`n3d )G" wt+BeT б?ΙBu!ܪifxKs$Z~>g/4Lg{s/X=7 MH6һߗXf> ^ pd>pz$ -30+4 Jiꉹ jAPͨvH-}) ^%!?{WDbqk8iѫnZ@Ƌ?_hgLwϳ'+n^` pTONp%x^*ily({ \/*MI!kyl>=[D9=_0;\M yTROlUFWam$i ~R5J>Cn,G{*:(#۞[h>5q#b5qS'uƹY߹UCmb]Db~nDm (yb- _#0I>?@M8TB #uU>`hyYJt ]fܩ*ͤ JN612[Jl5!:+ظ5vV'4#'PuL'( ozh(fUǧA2jͿ %Wk+zY^"I*="0ъt0Me6@`!hi-NA-I5N4S$=&ETOPgG(KQ\ bG{5@ G VcwT~F.mFɤTnVsj;U &Mrz @6. 
7W2/Ouh^$Y|J!$<&kHvmEFW W.e Ww~2(JO ABNDr&oN g&#f4 Gqv\KAg^ZΛZb'@RM!xOg9+a4$#k&.C"ݞxtbeIhՖqߘVyy_2`ex^BM*.|bDZq2ZH&y,7<1Z0u@¶\{ 5 e2U k tPuⶳњ+ѤJ,LXbQGvq0`M(VM@>&vU&4v/[d{ygUzFN}a$.&MO?Bd5g8?"UyQ:_|qC3we'{Y7z$:Edtͥ۷ᗹT[2*Kiu:MWn1IGː3~ji7.c e'TW9׭رK-OlH~0 eV|֔ď%&KF3-\0'mPNb%b6n=뻽3hG~ŮXA\ёN[M ['羹'⇢iȗPzX '5*oCxF: xAҦۢq)|+2o&L9ѦR?qq7Y؟.`E8nv/ A UqJyXuh%k2x9Jz" dsL2ǃԔ){Z4v`X jϯ?uB6}9DRm'W}e#ZԑIaz BD`R_BccإL}m}"‹%30΍lro7VV㦻r{1u֛* !n8Et {9Kdg,^5rR:|L(IgCT|Wdaw}QWOVWeAi4r!d]Sr<(]+$pD3brH ?Z0V0~(QrW'EDW9@$l!@ BKZaBpˈԩj:A;C(kFΡ⒨-ɮb{bO֙yVToQCCbCx/i*~Q M(Rw/wFsI]j}2nso/Si4YCf͟I9nHlV]`YG36z= kZ`څ/r  :Kh0mJ jTeeMvh.I](!ªt VеYoi/upZH}h[e^Ӏ3Rf" y0"F)5KbOICEU6x^r 4.2Õ7 IlE\@E4 M=X~2~/棗eÙm0 Y†\vەʴ[. Y2B'ɋYAWui54ǿeaKQRi 4X" )C^:_лv +SWnj*,o#qp'_gѴ,iVCRItc~ 3i6=!v]5]?&(O ]6Ek#>>!9C5^mCP[*xeUĜjLmM\/><,w/P}}ȬTf;=% ;*&i'K̹W׻H(9U@BVЄ%2m0jw:)I@?$τ]=^%V&M҇ӷ[753&y4%Ӎ[Mr=]jCR1aJ 0+v*AՖm}Hk$;Gc5+>Ŏӡ`66Ǥp|I> Q?nu!3O vU G"%5oQw/e?+ DSŌp$\X ?FQklj3 <dk[W],HCe @]լwc4]hU̴M*GKKDl85YddYn;Q]X4+zE6^%+N^UReGgׄo0W@ϚhҰ1Š1BzDw*2CKf<ey`0C=(OA_CʑغYyyi2 hE&4K8J,X%篠-7DƼ³hK07{~?g*Ir#_T=r"3%FqtvxE|A2%Evr(JN8nzCct{s[dԧP}`](H3b)4_ a.g~ȭ6ܫ~W" 5ȏZ]f|jQR<[8M*_ ۨ4U Cʭ9$Y^xufʃs |b#Vn7@;v]\i-aBxe%]԰⑴c4Vþ.yQq!K'~)kb}TeL}'stVhIS'NRdB' &cdج- %'h1`Mr,p)QЃV-FS' 틏e j^^OWw!bA mV+NMk?2x'c#rrjtU<&u}> 03H}KHI{j}C~Xy^ *\W4z/#]OO`j> ͹P ?ҿ $qyw B<zz:,QAr$z~C8]x-Fg`2,y; C`3FYOoۋ|Iܱ^˦qKK;Cݷi$s=Fq1$&¯43#.pd lL4g(! _+q {j6QKTm+6qX,Xe <30'5ĕ-zw'2[*:'g(rF ^h5|P: Zԫk njr47y--%Ì9w}_ܖmfJwAdQyv8VD]F ؚՔ&7MYo;,)lb׵*drREn-C:`[Tѩc5 &&5+ZDXTk712W;8Vtb-UH"ߌxzTl-&Ȇ[:n6|IԮhE_#s[jMokdH닛s,6E1X*F)Ҧh{&{X6}TzjUQ~}X4(:Ȉ{$7攁ɔ_ӄ%lD+W>DM(M~ WQ:ۡ_Ie?[*: @غf>:_DҸi5YLs;TeⵀtΖ[,I:qG W3\cp(rӧӄ nq'Gy [݅.8.}s:Sgp'vQf*w9)[Klkx7!b7GB)ZIm拈& ,7Mfo4z˻֛fYc7\x߭Lǹj _BsG긲Fsz* fEaПKy2+VMWl"%i{#,xZz 澎u.gW*Y fZˠMb#vAl0cyS% q ťS͊H7bTB$bwoW2LoTnI%B cnPpaX}>3ִdsy^s,]-C"Z qlGɻk168\쉡h )t%؂GtP%3]m&uIYUDꣂ S̛1 ٳ oNw-j Rj9 \Og65JJ22* a`_v"@א^N؊NW+|y+mn谧/M&W7i>TDex(9hs(ݔFew6c8OsBht<LnD~2%:0a-_K0偄sɈs2X1[yYZg77H{yG+Hd!ߚ~KF;K; G#C[":Q9+ ,= ?NU6`nLV„ ps`w.(dzȎ^xKw zOl.k5"E{ d҉cmjEcXh'&- ߦ^`0#Oq[*n)ղD8#օ)Ժz\ïE? "S:*j۽PzM@̦Lτs2SN-RS89 &IE0?IY{3/ւ0A78۵|ļ"]T"$`+1X]&WLI~pYںše( I8+ǚ,{yk)t$m]zSh6ƶ$T%6IDԡpH1`!ؐXƋ*iKʙOp9JZ,;!אc}譼g41\t|2>K)՜qеՈ7|wS#vySc@ߴ#{kU8K*v,U^z% bL)yӤnȈ3аQ= CM, D&Q9 nA#MpPi54VsN3RS * 3^6}eeWSܸ͝Gmn͗ =7maO|!$Ẑ~©~MՠͷȡP !}\ q-@EX_ш ;9}΃ưڌ6Ty:;vP6b'2l 5˕rlm~٤-ԁW-ZRLF$Ըei^YAKr"yt~PsM 0 YW*ΤHӘ@#sz "&~l\2O~-ҭ:"+adl9~ո-ZԎw^S7⫐d,XsS= 9hĺixN5"c-m%V.w,<3V0DIʍ`UM5~?v0(]lYi\bq ]v3I :X>N*9PURc^iHb$`|Es~p)l&A vW5J}#R6s!Lu$&K97(5b"ySDjt_Fֲ[zj1FgТ5/(k1JdRT~q ꅸ [ \mf;o( W,ZAӀ.X,ln-Rޔ? ]H^:azO:5c#j~,EqALEuy4菫*ۮ?hiV'dd.+7pH*o(xD?:5 -b~kxYա+oae9Q ż|b߂cRD V|})F=*I^.j$ylI $X7tP~#^ [OW\Zo*wls|VY͊)Ky='iWm, ~ysE܅,Л^C@e)JJq'A93^gmp@UHZ sGtSIyJmK޵O$_xGT\Xf|ƙx_-ØbvnKE! &S "%} x1pƽ‹}>?}I~&B.6c\~Uiak%T,sN{dXv~ӞQT}Љ ;O%-‘\#8G֑\-RH"ێy"%V YWӞ=< q k:3'z˼+Śx ;%kP†'ͨ:i2KPkm/|{T8JG*'mE;|۟71i ¾zЕ~\ g- L"^ e̎!]Z92SGe!pg3B?- q:y]A/@~\o&z`ٶT-OM0%u0"s|*+xӡ٬Ev &}V&<$2&:mKd uɇntwƍGfB#s.'%:}>#UY YȢ-V5T#Y-nse=k 5.G^T#\hԷb)a٨X,VjlpArII֟3dG V ~ HPxt/t=!@/Ie/ϩTDME뺬 -VL8E6}lto>#G_?f5ȬuBPF0?VZٕ4x5K9z5񲐀JEq+Q w; $(ŤyE̪ ieG  OWY+iiOa eۗY+>߇H9 3=Tp> ғ,?-a6 +WЂMbp-2Sı ޕ.휁{?"фj endstream endobj 18 0 obj << /Length1 2203 /Length2 12552 /Length3 0 /Length 13878 /Filter /FlateDecode >> stream xڍveXk. 
̀t]ҍtw0]R JtwwJJ;ǹffu|/59$wafcـ dGV؂~[Ш5AN`=q' T'a*@r67??`8$LF-qt[Z@3/Ό;@63(XLlj30Bнrqqgeuwwg1sf8Y 3.VU3 d6@os,hu+&5U؂@P'W{s&+Pr g<6+/g33'`Y\<\&激&  x ?:9\Yd:hI{sqW}`'t=l7Պ=$+ B@ /`A$l>|  h+ _l8|40LA`{ѡj2N`JB6PCm=:hV9Q ,&x3n677))WY{ zOn=w,E of/o"(|ߚ\mmB!L`vu  sZe]Lk!jo 63 'jd v1&5~- qjl@ؠffNg79v.n'J*v..7t9Abq, Nh*K`5-qB%g30 dj:&o04;Tt21ق,\Ps:_8X ["/f~ Krt7ڥJ,n=~!N8@!#B햿?!в~ TCZ!B]"7#_m9A?bC VP,QZo3dG:L! 8;B\@L c@lpBlklw^bXeCppCN>BxЉxr;-t$.ݗ SMCY׼k%vgָ|r%D`H`SB̙7郄P B !?e ܅F&OcZZX)9{B8x;x|B!TZ#{2͍Oq[RqFu:S.Xh}o4Kwp@LNJoLI|Gv߬MRmak6Hܠ˟7w3=(ۮ Qcc+:'s0-'|B%-j0: C8Gv{:!G*oկwctO6v<}lA@7K)Cc޾G0_1{Odk<<&.N\F͹;;Z-?/CĪg!@DW*O#o bf(%Y̏F4>@~{!&'N6`5I3~tTX-׊mܚ 1w_h/j{G5/2n*d !If_5~LLRO:Ssby(J~0?Zu:n$r׈'v|kvYsRT EZxx*ݝįK~b ㍔R΍(5gy=oyq"=\)9>ziuhKm$a ~^8InXfR 57i(Rzz tlϩM $)DTL3*\L헥BOiUqTN |KzKWߣdigT q#F'tE]w<#f͒1ن0D{kzWZ&'6JaM#{_H$IMoVW0dfH=4ڞoqƫ28bHCE[zÁYHq^6uj/ArV9!E\hj0!`4z <.'qXuТ<+DYmj2*49 cvЈ>#X;q(v'QUWmE(~!i*bY!{=%,M{󁑎xrEy?뻜NU Ƽ2sagF Rɻ2?D Q&HVi0JP}U*ER=5$[܍ml+?p,\gLf qۗ\kxSv,/b).3blpwDŽaqTB)~88ɀ@p7v;҈ G-92ڳcc~cF143VcBtÇ6;{'tc9xJ297pٞհAڴuEVZ?̪Cs.¦ ylۆxleeŊժoֆ\G}:Ʒebx&Obvy7G=}ɤ;? yhd3= ۲࡯ { 3՞u~<@u7']},x#ΡV&YIzVFۓ/c7p!KS`;l* 8PܮUPYW 4[2"c}|!B箝MY PN5DIM|XF:{4+oel1rZާٺCs6JiˬdHXE(3$qX4,s~븣1cMYdpl!5v-xӾ 2SUW K(0CK4׮<+S:?`98@$]!d$VzCxKMˉӧp:f7,kEhHqAzF )y2c7+GRY!<'iJ܃MV1DFg!b./7TB(ɑNA o)ŏ2>,SfPaՒYogS%LP*ʄ`+a|M1;E\tײ0#u"kyo`29ZX"2j_>WpxnO+kŌMzբ=4K*Z"5zd<+oP5v YD{BxsŻC殿Av)xeyjR,zX?uD+WxH&h:/^hdغN6N&5D;޶^<.:!|$R$CB[/h1|c.&ե {nڍ@;;j/~jg18*SOu;܍#yY9yS&H;1.5CB >1+g3{[ V (ȜP1 ٞʑMgWl#(jM}L?4TaF6} Q1W`K_{2 GF sK$ N/v.yqTp؜+0>K aeȾjlDrkoFX$sڔR)cz3n ‘+rETMuDR8o{^a‪ GJ³%o`1`" h䩔̒ bD\E ]dbqyVQm:oD'OdW]/k}Nַt)MHU2ijsΡMi.o?ӰUOplUux-wf%'åʗ?v"S8k?\D9Bd֣&SC'۞Wѯu!W:ӱHb7 {ꥣFO,3o嘷]`/hZU< ǘ^z'\62T J#qGsIݔzg*sEKm8XjQE=#qѕHڝڬmI|޼R9݋1K<l\z5oA*CHOT`߮T0#y? }iO&LiTJJ鎷*j\)[}1I$'ELVQC=Xl Ӡ&]H H)ߞMbZQ٠6iMI^T($~+dR:9U恃IfN~e+c!:a,5ͳJAeW 'ywȥ`Ѷ3&sqvJ̉I xTb"0ۧK?9DN)8nMKR?ۑHVGawYcVփD^amb?irRem Jd=hL~$#,w$j{a`GhJ䌖GI/TU 8$'V5 %|ܸĈ&uZ~cXNq& z[p9|zvWĨWOF\J=vL]_DPTHdܸ؃OT#}wU Ou1O)ΙНɣ$d wwz>b~ =/K0IG>rt ot>>I [/D'_Ui+M1rȇ"ϭՓF(ƇU2?3%Aw\1AI]6O`nsJā:klFx_…0j]#2DhEA/,VW?pn:_E=Z%DՌm d, [)Jݻ/s~"܍Jk2},W oYMp_y]/\mP7}Zw!Z pdR3Y di̭aHWe.g:3o"k a-1|MO.5ie;FJKv6MGkq~i<!pEf&t%ψFkTI{fX:eVnE.MXdi֞Q%sv V-7͈ӄՉ/?!xGP.-l8qHiriyVDU+ҔW|9}GvZ>:n>ySoOW?u0Y_sN(ԟ+^Sh "W=\ yMT5Tf!S}6-+ȔA:Hmx꒠uaHOv#apZzvӗ5|m"VWp@0^A)[~k"n0De8ϝgRe>3o39jai쐭#/?׮pĊON CF:}浡U»A/ *$*L jzث)bǝSV#f:w/ u-( ?}8O3zx&S0x5E37 iuȓaߣ\w!{I+VʞPb^,>hn~XAlA%4td62ٞ*KG8ܢ_ORj8h{蝨I]#/Js-|- }Yͫbyd$AR2ҿ |ux$mgM"Y.ׁ^ӟ\Ԙ"U7ðX2Sm?dͦ(귿)MY-Ac] ߫zRޅ/ZJc*ed4abF! uJw[aU[Ri=l)k6I]IU?`u5v(Ye)tMzi;O~܍vƪ-!ђ#rĬ*iB\uCӒfO E{9<⸭|}b6V'km;My?˱'rƮ >Vn W:.A$s;vD99ϳZ,q6KtLw|z$Ԉ$g'hͼO5 U6D1ƟO8MAܳoA)k^_BC\VĔiʎ9ϘyVr'٨b_5ķŶ4Ǘ/[ŽVKfVŜJ{;x@VѲB~{FImY%Mm,0Z߲ʗoJ{I5"_( jj7e+}T`֑+dK $F1OB3+Fx/}Y j?/6 l`gO|8"ImKcA%y怓)2U6MT$4\Sܸn)U[i:AnZxäIg5gǯxG75O#Ȃ3a{_ZO8 "p<+b q ^llr@Q/ɶuN \ZFuDR?/TԻk~uBr^N7Yi9d_&*Ei V7|-%{K>0c)RUJx5?t|O3GnCGE:1~b%'g-n jA- ~ƴ}>mFk۩[+h*﮿/K< WOa,5\ pţN4^(I1g۲|~#͖s vNxaXǥ='Ces~b@hςvV|zN/pM{ҩ,J3gJ@+7Teԃ'$DXf W@G|WV!8JY 0fkk&$S$bu$FPuԟtgMI$(3J$n5 `[WJթFP߼kt ډ pr*UOz"NV]o_з0uq1f WTmgQ9k!lS8d\ʕMۺ^ iQ[قf'5]?}Hz|@S&BS1'k62ܬrIi жQAGCuʱQ5cOD<;x:ç.gIRB:[B'[/Dz[#F.SĝBżqdKz\ +?ɜd2$Y1(֗y5sveM )4Իo4OӴf-C"4Y|KFbG"7TH5:EYXX %YsXҫ|1 Av. 
KBC^Rg]KvJmz|JrQBK4rt~0.솆#t*klb\gps#hEש8qs.=RȱYo Oe&cFͧGnѾ!hlMXIkm@^#rޚΟڍBBDAK:@DqKlv04 ^uFeA}'^mB@⋧7Nz10VQ&NE4SDH1]DJqB w'Ԯ)lQ/W_")80#ßI T+Kc龈ƏH jA`[PԤ"aw/`, osYEM^xy^EiԠiAc멻?7.^nRAΓG%I; V#AOq{s WÕYr8^ة/mEc3Xh~ .K O V=̅Եnfu!K?4XmF uLisN Oƪ], ~ޭ:kbBw9/?ncwå$C,CPV޷$) Xm{=5q*+0,H'Kt0\kgmy]$!Y9[F4 Š$a*0E0qpnygEcyXj O?U;}}3xA:n|\ԶX͘% H/ *bȟ/M}:iTُ֜VQkĆ)nO2찟AEUiԞW6٬?w@:3<UutqoVg _qk8 A"LW Qܦ!L+8ᮈ"BlF(ZN&,(NXIa&AyQLD%k֡Q)\+Bұxelf_]u"6_2M3;SHre㎄48nƃ3;z"BKI&[CVCK`n ʠ⾞jJknHI8CK)¹4GuؖS'K]/YFQ-Qs+H uo1G] PpdWtvݬz}5ݘpE"S&|GYU`p4J4[n̯Ul]׷]@(YH*l.+rQDZ8'7790\6+1j2mr]):~ؐ&~Cp# ³AC'PbPpVrkX}29ˏxo?<takj9|3UA^ejVOxty}\mU {˓@-FdO{Nou{]v/SQ`IG}aL5>KH+j.dq ĿBq=yF[j5$bg]Dv]J}mS$oG@?[zB7+/;_p:M>*Xj0}׵"[?|@F1`mLZ;ъ > ;{.G*wI/Oxˆj 7P2uEK3@dp folLI˽_a| "1Q_-ntڷO2-xq,V\ɖ1jA\ax"ܶ.<24Ԥ¦T9b C?N{|I.pMS˫2GY .|&ԓv>V`V8uá=9 N#zǢ9HÁyj\@\:-; U96І!RNr(J}?5",k$UaQK aNDu9Yet|̉BU_JYNEgy./.!6\$ TzR׫\ao:C5wth78G6XsXYv\~Y͆fC-na%?X VNĤ#)6\r'B47=D8="i\.$LjءԨԣ5^y'Ot!:OOLBw6&!nq2=˻RE> &Ǧoᣇn̈́%N[ox !W}|[AfvuzkJgv~=9RG&$x\5^#S+?${*b~]W= endstream endobj 20 0 obj << /Length1 1735 /Length2 11037 /Length3 0 /Length 12150 /Filter /FlateDecode >> stream xڍP   `Nwww 4xȽUWT սw՛LEI (01$5X̬TT ?vd*- /3j4v67??++?Dg~  SI88z:!@kN`a3 ftkvus?)h!G~wwwfS{fg+a:F;b P݀?Z(n a rˡ` q7u^ v s 5ltPS(;"03G"`Sss{GS'l #l5dgjJtS*ÿs1w9B\]@vGk[H8?9_ݓڂA mX:hAN@9ɿ9&lV@t=̭Y8@kގ6 K?doS7 2d@`;<c/WY8<9b Y=qmM[S\`bbsx^?|7):+tU=d5@7ërnjYSY_+vOoj\W(:R"jrm[*/;EPA̭R_v?8xa^XYum__Wi+6wcعΦȯ~E\o׭z)f 3x``y,D?b_^ճX r\_b/` n/z?U,J}:>w`;%+ֿt3k_Nz^] rXM~-_3=sW h|7@9œ@MMpm;qvη瑗H[Rn;wQb]3QwkXE/:xF|R-6}~` (owvp-NԾ_N3:ؓn)>Ai8dFB|ˁ .^!^-6=!Gr.:~^z.kc`JQ2c_n74VԻX˶f."r[?eϷԬd`S`%bOވ}00,j\H))-~7Fz0BdKRtj>H:vOI7#QA1 FjAP_m~N@5zeQ#H(}'Ge@ٸ),m4,߱Y {-.Ge{"V);ª'3˱i󅜷zP'Н4CYqYN?5o „z>'ej׬D)v62g -K;%A2TKNyV 9'ւj.;yiwktȸ/V_nMIfn2Uj1E"DKf3拏oeHQ>fϐ˓a-+Iʥ(w잪f-p"``Rkr>P{^Wy g7%CTLڑ@XGO)H {x`<`]^Q xDġaSiEW:=T)oٕĈT 7Oɺ(C ]Ϣ qңM@I5Üvsc2/_11OB0Ή ^g%U7t$bTY1SkS,0_.>\Fš"+(Qcb"2 Mb9Ӣđhks1UҀzl8QG ·Oym`EuÊ\"SO0#&VDViy;|Вg%mXـ">e<) oOLU1KP!\JKZ.T77z3{t ãa7КO Q5yFǷ֣qK緜a\;/:l7?=37;dEy3 b#M{1R?EYù'dÖXpH$4"'}X)/)\w) u ~$"F'yt2)x:ا4"G&e2@B &s<v~sɋaBKDYSKf/TQ$#t:䠾jy?nɿO]hcQ\0÷jq, HҞm wMG@/xe+q="CE`dg!?Bǹ/ƝMJEF5l»`SčOj^4"$Av%YqEcyAY0x_bj x֬Ez4Jó2 n6uI2ܕcU!5}s1Qw= SӶH}Nw4ĶZKXêP` ܋OĨnV,zhri֏ؾsy!jOVI1Eï:" $QSǜe'. 81fl pu АU h&+!B| :d)P0|`N44Wj _I:42νsۀo []8 6Ao2$3U`bơײ. 
ФҲ YLW"I<&ʠav nH&1"&j8S rnXs-nŸ,oaFN&:>4M}`ߟ4dfCڄ|aف2(Rlhk``W>p9cё'rUU ͂8) 3bgz/7nZSR+vnEb'\;dl94IdTu=C"GHgo4 Ĕ[j.ey^㲃IS* rf6'~Cvmf:EX'3Q :9ЭC|)'hoF~l/kZjŽb{?>2j[~pW`|qH!$7Ej[L\E4EЯ1[M.fi}IK p SL4Y#['|i0عr-prPuc'H]dԋ PntEU*yf?;;kkӞ|TTWh28w/nepiY3VH Ϳ)qUjB^@jj1U,2?`9&>33#gMOgwZ-T 6#G(t` qAA*3I4)ph8{\nwN@WG ľ b zMGyz {@c+T!7bz+Gg4:mZUlabn""Q)YZga6l]V'rax|jSyN SS{!f\Uik߃4ŢOrV)Kƣ]\=f,| GtkQFR&׍%{rnX[D<'m .ْ^Y.gE {,s M:/13C+K1A!(73ߎJMM{go`WlͅOLdMfM(90ё'ְdd2Xh H/OÏ;\f6`g*})2鑖xCvX|%u3fFRlh( OGW۪Z6 ^~k@ȳԯtw>2efa(Q[iU2@$c9U&e~B%Mx AQcẕSu`Ŭq./3Ir=hCjw}nHaybdp\9Ѕظa_%gYJv2" Q6qB>qcvHW9OgD9FEW>az~ՄLGpIk u;}\bŴfl l^i`LLh_wë|79Y*sȑ=Qx_h;{Zϝ/[Lz!@rxQN,EUIiܯ*yK5.nE)쌈(vp+ _N T~ח%#i;Y0x Nž#fR(ӿso̥zC)>/H^ǔ)N8H Ǯ&:}( 45~2PBՌgY #}4!0wM#<&G\A05Uz3۶;!1o+E@Sz.ܑ *x${ x:e*_^^ `E%JTwk+pigcP]--(cDju#SPO2ADx+yצ'cYDdfee}B5d7`^9.w]CY7vM/Aku/c5rNЯ ] yejrv`P[z|nu t-ur *bTݨQ됱7̀]*x`R_!6M(z2wpioAv rJmI>ph/4~ 7;|VwL5KRPg͚F2IcҗZ<1>d(WgIm6xDr^l-}RkdӼHiZL$F,^9urCCMuNs̴$l /m3H;Tc{ݝ7r{R{: W)9I-(57O!hKP3cHČJ[yZp L?#S"y`[@h'gcd̘ΙKȑOE3lkJS T#O* z= EQJE3{xۑ¾9܅ht]?xן1݇אR"M|Zk^)0_oqp(*rޤNq)13_%4RE"¬"F 8!D,MAC.ҨHDA4~uțefӣdZ _ВfWHģR 6TGMG='Olѫ;,ю,f>;w^U :sj/;*TQr *:D 2 {Tzֲ Ij&c NKw-*IJH\&O7ՄLTaq'&)Z%K8|ɺ2h%r}1Di0 LxxlYNdf(cH6:$#s 8# M,?vz\%+ /p6Ag]jp—RYhf!hCt&+i"|wN f#>ڶR(, D\R7  '0URl*s"u;zcrU%lK"gԓPƛuR' - ~v3fIkW8ōP#)T&WI1:7<,)=bغ455Y~o o󼕴]عruJ@lO >{/A zê?F~|9J]5OXmT4R}!)ЎgtmOD>&ũnq?8>M[Ռ-8.p>394.e"h6o5LiţFb)52c_z8<:2EQ$jcXcCzStLmn;Nzt X;&Y(`-Qqbb 5S7It>xG{;?K-ܠN|/(oGTFP֩)-WL-n*:%^^FQc[F=3!La( ;GVGwhԈ@ $vic ~]Ai[gCcqە+I9So&<ooٓ5cpKc8J97y'siN3hskz-;A>'~5iDIT:A\C_Em ؉A{ssqY-ȒCh9'ւj3#We 7Z66>p, b :P"o4nh'SzdAOퟶ?GN94q`Cg۳Aޟ#0C]r)oJ}i%}'[&ΓN gH3x:+~k񗹨!ͬ8e`]Wr[97>ETCNwA(қ|XQBۥ;TЯ$܍0ۓdX!}[%3A\RI왞]S0|-M'O,1\sȇ &/poPRݮ%]Y:Y?oo[̂-@~%txnMN]6]#cGvM Ac5˦ gq{YKLTeYE eTZ{d8_Ef µpsMEO~j`9 U9ugS$-wF},aaRwۧdn:a4"Vnqz?zL%{$8I)bw]52ܑ ;XO Vp- t=&^Lvš=i(MofU6Aϡ"@AVt3Q%yY ~QsLc\;1S#W' ,~|%1?vE!;)VcUZ'A73,j%;-8)G Upہn+mb8A꛾DП_)iTozwT1AۊO.gc{I{6-ʻss!Ger8Z=*|T@&#W~N_s) *pHM~*Fun>{SRXmhkLMثOr'M6YhC&ѥ3>i{ 6اO^2/ <(a+UWb\| 8meOJ"q sR^P }PÌre(7.r6p<I#2J`4Nr>Ȧt>St*п k(.F~NՖ~tך)f&+BzEZ&1.y}ׂ4 WQb%Y9TѴE m["K,֊d2r>v|1T%g?3y42ZRDj6":Ja9y:X tnd[_F^$rb@YR!5MĒ!$R>uWG!Iƍ':[]L[KpGtv7Qhȇ9!hG>\qΓ3gg-B ﴭ< ;qr>57`*>ڇ_L (bd5\Ӕ&cj&bsd>,Q=Zן ΰUwf[#@۔{:R>lC4[g\yƘd]çM)c0=duSxr-Yl ; }8߼9Se2{rt@88e01Ex<)[AhMV7NuwI[I e%Aڌt)f7>Dɤu/1_Iw[OA?Sl "f>[N[w/d.*dѽښEZlauFd*w>,(ܱ1 Gj0:5 G}p5[>3lw[`5Ntn+jNl+n]D͖w"9Ϩ3WJ t}.ȅn^Lֳ+gt=GXFmtӣ.lL4ϼˣY^oߣ4qæDE:stB)ESFh *ZeмDw\A-y 9vdBT:o3n,u;=p;T}򜿃XUR^e8ȯO:vt%Gѳ'M81wAj|EI4~B6;dֻ)9~/S2B4'9ǹ-1Q@<þ79<Vީ`j e KrƸӿ/I47!2!=3-0Qp`y!:K!힋 kx8`ň9\=epB+sʇ5eZ-1mP)I&ؤe%X++Wt}~5r&s52>@'90-y5S” R)wHs~qH UfBo!NҮIRdDE%(0`A9P=D!p֥5*$Ӻ!=Ag 3ZFEn\8)Hzf`5N3u@j7ͨW ׏;أn/ʙ0~N:&YR^XN7+i#ɻ;neeqqq AF+QjnkNDY^-1BfF*DSo%:L׾(} a*[&W! -3)<2R0e"1yAؾh?uP#7Y˅ *SMzo["XLVI8IW[ {*??s]Ä^Og@&Q4ٌ8"q |Vc~g2(y9j>Kxatu|qMZdF_kT/1&0b%WHolD&gwO J7bs rV3s4$0H_>xf6pq5lb">aLϿgTl4,<k㟗Lqu g8My/!a=1yH8Wؙ߹_Fv*~̒ʸ,;[?97̃<ܡ N44vbn&|}be^$Rnצk}W8K`]dwآPq,߁*L{)x#<4 r'z9U蒟a8_99>yUbײC϶_C-HߝэO>oShơBtC2w1M:AtIt)?zERG(6? +!RI! 
numDeriv/inst/doc/Guide.R0000644000175100001440000000323513475450114014772 0ustar hornikusers### R code from vignette source 'Guide.Stex'

###################################################
### code chunk number 1: Guide.Stex:6-7
###################################################
options(continue="  ")


###################################################
### code chunk number 2: Guide.Stex:13-15
###################################################
library("numDeriv")


###################################################
### code chunk number 3: Guide.Stex:24-41
###################################################
grad(sin, pi)
grad(sin, (0:10)*2*pi/10)
func0 <- function(x){ sum(sin(x)) }
grad(func0 , (0:10)*2*pi/10)
func1 <- function(x){ sin(10*x) - exp(-x) }
curve(func1,from=0,to=5)
x <- 2.04
numd1 <- grad(func1, x)
exact <- 10*cos(10*x) + exp(-x)
c(numd1, exact, (numd1 - exact)/exact)
x <- c(1:10)
numd1 <- grad(func1, x)
exact <- 10*cos(10*x) + exp(-x)
cbind(numd1, exact, (numd1 - exact)/exact)


###################################################
### code chunk number 4: Guide.Stex:46-49
###################################################
func2 <- function(x) c(sin(x), cos(x))
x <- (0:1)*2*pi
jacobian(func2, x)


###################################################
### code chunk number 5: Guide.Stex:54-60
###################################################
x <- 0.25 * pi
hessian(sin, x)
fun1e <- function(x) sum(exp(2*x))
x <- c(1, 3, 5)
hessian(fun1e, x, method.args=list(d=0.01))


###################################################
### code chunk number 6: Guide.Stex:65-68
###################################################
func <- function(x){c(x[1], x[1], x[2]^2)}
z <- genD(func, c(2,2,5))
z
numDeriv/tests/0000755000175100001440000000000012267353530013231 5ustar hornikusersnumDeriv/tests/trig01.R0000644000175100001440000000330012267353530014460 0ustar hornikusersif(!require("numDeriv"))stop("this test requires numDeriv.")

###################################################################
#  3 test functions to test the accuracy of numerical derivatives
#  in "numDeriv" package written by Paul Gilbert
#  Author: Ravi Varadhan
#  March 27, 2006
###################################################################
options(digits=12)

###################################################################
# asin test
###################################################################
func1 <- function(x){asin(x)}
x <- c(0.9,0.99,0.999)
exact <- 1/sqrt(1 - x^2)

#  With d = 0.0001
print(g.calcS <- grad(func1, x,method.args=list(d=0.0001)))
rel.err <- g.calcS/exact - 1
cbind(x, g.calcS, exact, rel.err)
if(any(rel.err > 1e-10)) stop("trig01 test 1 FAILED")
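## Illustrative aside (an addition, not one of the original checks): asin()
## also accepts complex arguments in R, so the complex-step method provides
## an independent cross-check of the same gradient, typically accurate to
## near machine precision. This reuses func1, x, and exact from above.
g.cs <- grad(func1, x, method="complex")
cbind(x, g.cs, exact, rel.err.cs = g.cs/exact - 1)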
################################################################### # sin test ################################################################### func2 <- function(x){sin(1/x)} x <- c(0.1,0.01,0.001,0.0001) exact <- cos(1/x) * (-1/x^2) # With d = 0.0001 print(g.calcS <- grad(func2, x,method.args=list(d=0.0001))) rel.err <- g.calcS/exact - 1 cbind(x, g.calcS, exact, rel.err) if(any(rel.err > 1e-10)) stop("trig02 test 1 FAILED") ################################################################### # power test ################################################################### func3 <- function(x){(x-100)^2 + 1.e-06 * (x - 300)^3} x <- c(100.001,300.001) exact <- 2*(x-100) + 3.e-06*(x-300)^2 # With d = 0.0001 print(g.calcS <- grad(func3, x,method.args=list(d=0.0001))) rel.err <- g.calcS/exact - 1 cbind(x, g.calcS, exact, rel.err) if(any(rel.err > 1e-10)) stop("trig03 test 1 FAILED") numDeriv/tests/grad01.R0000644000175100001440000000327712267353530014443 0ustar hornikusersif(!require("numDeriv"))stop("this test requires numDeriv.") ################################################################### # sin test. scalar valued function with scalar arg ################################################################### print(g.anal <- cos(pi)) print(g.calcR <- grad(sin, pi, method="Richardson")) cat("error: ", err <- max(abs(g.calcR - g.anal)),"\n") if(err > 1e-11) stop("grad01 test 1 FAILED") # 1e-13 with d=0.01 print(g.calcS <- grad(sin, pi, method="simple")) cat("error: ", err <- max(abs(g.calcS - g.anal)),"\n") if(err > 1e-8) stop("grad01 test 2 FAILED") ################################################################### # sin test. vector argument, scalar result ################################################################### func2a <- function(x) sum(sin(x)) x <- (0:10)*2*pi/10 print(g.anal <- cos(x)) print(g.calcR <- grad(func2a, x, method="Richardson")) cat("error: ", err <- max(abs(g.calcR - g.anal)),"\n") if(err > 1e-10) stop("grad01 test 3 FAILED") print(g.calcS <- grad(func2a, x, method="simple")) cat("error: ", err <- max(abs(g.calcS - g.anal)),"\n") if(err > 1e-4) stop("grad01 test 4 FAILED") ################################################################### # sin test. 
vector argument, vector result ################################################################### x <- (0:10)*2*pi/10 print(g.anal <- cos(x)) print(g.calcR <- grad(sin, x, method="Richardson")) cat("error: ", err <- max(abs(g.calcR - g.anal)),"\n") if(err > 1e-10) stop("grad01 test 5 FAILED")# 1e-12 with d=0.01 print(g.calcS <- grad(sin, x, method="simple")) cat("error: ", err <- max(abs(g.calcS - g.anal)),"\n") if(err > 1e-4) stop("grad01 test 6 FAILED") numDeriv/tests/CSD.R0000644000175100001440000001216112267353530013766 0ustar hornikusersrequire("numDeriv") ##### Example 0 set.seed(123) f <- function(x) { n <- length(x) f <- rep(NA, n) vec <- 1:(n-1) f[vec] <- x[vec]^2 + (-1)^vec * x[vec]*exp(x[vec+1]) f[n] <- x[n]*exp(x[n]) f } x0 <- runif(5) ans1 <- jacobian(func=f, x=x0, method="complex") print(ans1, digits=18) #max.diff1: 3.571277e-11 ans2 <- jacobian(func=f, x=x0) err <- max(abs(ans1 - ans2)) cat("max.diff1: ", err, "\n") if (1e-10 < err ) stop("Example 0 jacobian test failed.") ###### Example 1 broydt <- function(x, h=0.5) { n <- length(x) f <- numeric(n) f[1] <- ((3 - h*x[1]) * x[1]) - 2*x[2] + 1 tnm1 <- 2:(n-1) f[tnm1] <- ((3 - h*x[tnm1])*x[tnm1]) - x[tnm1-1] - 2*x[tnm1+1] + 1 f[n] <- ((3 - h*x[n]) * x[n]) - x[n-1] + 1 sum(f*f) } set.seed(123) p0 <- runif(10) ans1 <- grad(func=broydt, x=p0, method="complex") #print(ans1, digits=18) ans2 <- grad(func=broydt, x=p0) err <- max(abs(ans1 - ans2)) cat("max.diff1: ", err, "\n") #max.diff1: 4.977583e-10 ##max.diff1: 9.386859e-09 if (1e-8 < err ) stop("broydt gradient test failed.") h1 <- hessian(func=broydt, x=p0, method="complex") #print(h1, digits=18) h2 <- hessian(func=broydt, x=p0) #print(h2, digits=18) err <- max(abs(h1 - h2)) #print(err, digits=18) cat("max.diff1: ", err , "\n") #max.diff1: 9.386859e-09 ##max.diff1: 8.897979e-08 if (1e-7 < err ) stop("broydt hessian test failed.") ###### Example 2 sc2.f <- function(x){ n <- length(x) vec <- 1:n sum(vec * (exp(x) - x)) / n } sc2.g <- function(x){ n <- length(x) vec <- 1:n vec * (exp(x) - 1) / n } sc2.h <- function(x){ n <- length(x) hess <- matrix(0, n, n) vec <- 1:n diag(hess) <- vec*exp(x)/n hess } set.seed(123) #x0 <- rexp(10, rate=0.1) x0 <- rnorm(100) exact <- sc2.g(x0) ans1 <- grad(func=sc2.f, x=x0, method="complex") #print(ans1, digits=18) err <- max(abs(exact - ans1)/(1 + abs(exact))) err #[1] 0 if (1e-14 < err ) stop("sc2 grad complex test failed.") ans2 <- grad(func=sc2.f, x=x0) err <- max(abs(exact - ans2)/(1 + abs(exact))) err # [1] 9.968372e-08 ##[1] 9.968372e-08 if (1e-7 < err ) stop("sc2 grad Richardson test failed.") exact <- sc2.h(x0) system.time(ah1 <- hessian(func=sc2.f, x=x0, method="complex")) #elapsed 4.14 err <- max(abs(exact - ah1)/(1 + abs(exact))) err # [1] 1.13183e-13 ## [1] 1.13183e-13 if (1e-12 < err ) stop("sc2 hessian complex test failed.") system.time(ah2 <- hessian(func=sc2.f, x=x0)) #elapsed 2.537 err <- max(abs(exact - ah2)/(1 + abs(exact))) err # [1] 3.415308e-06 ##[1] 6.969096e-08 if (1e-5 < err ) stop("sc2 hessian Richardson test failed.") ###### Example 3 rosbkext.f <- function(p, cons=10){ n <- length(p) j <- 1: (n/2) tjm1 <- 2*j - 1 tj <- 2*j sum (cons^2*(p[tjm1]^2 - p[tj])^2 + (p[tj] - 1)^2) } rosbkext.g <- function(p, cons=10){ n <- length(p) g <- rep(NA, n) j <- 1: (n/2) tjm1 <- 2*j - 1 tj <- 2*j g[tjm1] <- 4*cons^2 * p[tjm1] * (p[tjm1]^2 - p[tj]) g[tj] <- -2*cons^2 * (p[tjm1]^2 - p[tj]) + 2 * (p[tj] - 1) g } set.seed(123) p0 <- runif(10) exact <- rosbkext.g(p0, cons=10) numd1 <- grad(func=rosbkext.f, x=p0, cons=10, method="complex") # 
not as good #print(numd1, digits=18) err <- max(abs(exact - numd1)/(1 + abs(exact))) err # [1] 1.203382e-16 ##[1] 1.691132e-16 if (1e-15 < err ) stop("rosbkext grad complex test failed.") numd2 <- grad(func=rosbkext.f, x=p0, cons=10) err <- max(abs(exact - numd2)/(1 + abs(exact))) err # [1] 5.825746e-11 ##[1] 4.020598e-10 if (1e-9 < err ) stop("rosbkext grad Richardson test failed.") ###### Example 4 genrose.f <- function(x, gs=100){ # objective function ## One generalization of the Rosenbrock banana valley function (n parameters) n <- length(x) 1.0 + sum (gs*(x[1:(n-1)]^2 - x[2:n])^2 + (x[2:n] - 1)^2) } genrose.g <- function(x, gs=100){ # vectorized gradient for genrose.f # Ravi Varadhan 2009-04-03 n <- length(x) gg <- as.vector(rep(0, n)) tn <- 2:n tn1 <- tn - 1 z1 <- x[tn] - x[tn1]^2 z2 <- 1 - x[tn] gg[tn] <- 2 * (gs * z1 - z2) gg[tn1] <- gg[tn1] - 4 * gs * x[tn1] * z1 return(gg) } #set.seed(123) #p0 <- runif(10) p0 <- rep(pi, 1000) exact <- genrose.g(p0, gs=100) numd1 <- grad(func=genrose.f, x=p0, gs=100, method="complex") err <- max(abs(exact - numd1)/(1 + abs(exact))) err # [1] 2.556789e-16 ##[1] 2.556789e-16 if (1e-15 < err ) stop("genrose grad complex test failed.") numd2 <- grad(func=genrose.f, x=p0, gs=100) err <- max(abs(exact - numd2)/(1 + abs(exact))) err # [1] 1.847244e-09 ##[1] 1.847244e-09 if (1e-8 < err ) stop("genrose grad Richardson test failed.") ##### Example 5 # function of single variable fchirp <- function(x, b, k) exp(-b*x) * sin(k*x^4) dchirp <- function(x, b, k) exp(-b*x) * (4 * k * x^3 * cos(k*x^4) - b * sin(k*x^4)) x <- seq(-3, 3, length=500) y <- dchirp(x, b=1, k=4) #plot(x, y, type="l") y1 <- grad(func=fchirp, x=x, b=1, k=4, method="complex") #lines(x, y1, col=2, lty=2) err <- max(abs(y-y1)) err # [1] 4.048388e-10 ##[1] 4.048388e-10 if (1e-9 < err ) stop("chirp grad complex test failed.") y2 <- grad(func=fchirp, x=x, b=1, k=4) #lines(x, y2, col=3, lty=2) err <- max(abs(y-y2)) err # [1] 5.219681e-08 ##[1] 5.219681e-08 if (1e-7 < err ) stop("chirp grad Richardson test failed.") numDeriv/tests/BWeg.R0000644000175100001440000000617112267353530014205 0ustar hornikusersif(!require("numDeriv"))stop("this test requires numDeriv.") Sys.info() ####################################################################### # Test gradient and hessian calculation in genD using data for calculating # curvatures in Bates and Watts. #model A p329,data set 3 (table A1.3, p269) Bates & Watts (Puromycin example) ####################################################################### puromycin <- function(th){ x <- c(0.02,0.02,0.06,0.06,0.11,0.11,0.22,0.22,0.56,0.56,1.10,1.10) y <- c(76,47,97,107,123,139,159,152,191,201,207,200) ( (th[1] * x)/(th[2] + x) ) - y } D.anal <- function(th){ # analytic derivatives. Note numerical approximation gives a very good # estimate of these, but neither give D below exactly. The results are very # sensitive to th, so rounding error in the reported value of th could explain # the difference. But more likely th is correct and D has been rounded for # publication - and the analytic D with published th seems to work best. # th = c(212.70188549 , 0.06410027) is the nls est of th for BW published D. x <- c(0.02,0.02,0.06,0.06,0.11,0.11,0.22,0.22,0.56,0.56,1.10,1.10) y <- c(76,47,97,107,123,139,159,152,191,201,207,200) cbind(x/(th[2]+x), -th[1]*x/(th[2]+x)^2, 0, -x/(th[2]+x)^2, 2*th[1]*x/(th[2]+x)^3) } # D matrix from p235. This may be useful for rough comparisons, but rounding # used for publication introduces substantial errors. 
check D.anal1 - D.BW D.BW <- t(matrix(c( 0.237812, -601.458, 0, -2.82773, 14303.4, 0.237812, -601.458, 0, -2.82773, 14303.4, 0.483481, -828.658, 0, -3.89590, 13354.7, 0.483481, -828.658, 0, -3.89590, 13354.7, 0.631821, -771.903, 0, -3.62907, 8867.4, 0.631821, -771.903, 0, -3.62907, 8867.4, 0.774375, -579.759, 0, -2.72571, 4081.4, 0.774375, -579.759, 0, -2.72571, 4081.4, 0.897292, -305.807, 0, -1.43774, 980.0, 0.897292, -305.807, 0, -1.43774, 980.0, 0.944936, -172.655, 0, -0.81173, 296.6, 0.944936, -172.655, 0, -0.81173, 296.6), 5,12)) cat("\nanalytic D:\n") print( D.anal <- D.anal(c(212.7000, 0.0641)), digits=16) cat("\n********** note the results here are better with d=0.01 ********\n") cat("\n********** in both relative and absolute terms. ********\n") cat("\nnumerical D:\n") print( D.calc <- genD(puromycin,c(212.7000, 0.0641), method.args=list(d=0.01)), digits=16) # increasing r does not always help #D.calc <- genD(puromycin,c(212.7000, 0.0641), r=10)#compares to 0.01 below #D.calc <- genD(puromycin,c(212.7000, 0.0641), d=0.001) cat("\ndiff. between analytic and numerical D:\n") print( D.calc$D - D.anal, digits=16) cat("\nmax. abs. diff. between analtic and numerical D:\n") print( max(abs(D.calc$D - D.anal)), digits=16) # These are better tests except for 0 column, so add an epsilon cat("\nrelative diff. between numerical D and analytic D (plus epsilon):\n") print(z <- (D.calc$D - D.anal) / (D.anal + 1e-4), digits=16) # d=0.0001 [12,] 1.184044172787111e-04 7.451545953037876e-03 # d=0.01 [12,] 1.593395089728741e-08 2.814629092064831e-07 cat("\nmax. abs. relative diff. between analtic and numerical D:") print( max(abs(z)), digits=16) if(max(abs(z)) > 1e-6) stop("BW test FAILED") numDeriv/tests/hessian01.R0000644000175100001440000000425012267353530015150 0ustar hornikusers# check hessian if(!require("numDeriv"))stop("this test requires numDeriv.") #################################################################### # sin tests #################################################################### x <- 0.25 * pi print(calc.h <- hessian(sin, x) ) print(anal.h <- sin(x+pi)) cat("error: ", err <- max(abs(calc.h - anal.h)),"\n") if( err > 1e-4) stop("hessian test 1 FAILED") # 1e-8 with d=0.01 func1 <- function(x) sum(sin(x)) x <- (0:2)*2*pi/2 #x <- (0:10)*2*pi/10 print(anal.h <- matrix(0, length(x), length(x))) print(calc.h <- hessian(func1, x) ) cat("error: ", err <- max(abs(anal.h - calc.h)),"\n") if( err > 1e-10) stop("hessian test 2 FAILED") funcD1 <- function(x) grad(sin,x) print(calc.j <- jacobian(funcD1, x) ) cat("error: ", err <- max(abs(calc.h - calc.j)),"\n") if( err > 1e-5) stop("hessian test 3 FAILED") # 1e-8 with d=0.01 #################################################################### # exp tests #################################################################### fun1e <- function(x) exp(2*x) funD1e <- function(x) 2*exp(2*x) x <- 1 print(anal.h <- 4*exp(2*x) ) print(calc.h <- hessian(fun1e, x) ) cat("\nerror: ", err <- max(abs(calc.h - anal.h)),"\n") if( err > 1e-3) stop("hessian test 5 FAILED") # 1e-7 with d=0.01 print(calc.j <- jacobian(funD1e, x) ) cat("\nerror: ", err <- max(abs(calc.j - anal.h)),"\n") if( err > 1e-9) stop("hessian test 6 FAILED") # 1e-10 with d=0.01 fun1e <- function(x) sum(exp(2*x)) funD1e <- function(x) 2*exp(2*x) x <- c(1,3,5) print(anal.h <- diag(4*exp(2*x)) ) cat("\n************ d=0.01 works better here.*********\n") print(calc.h <- hessian(fun1e, x, method.args=list(d=0.01)) ) cat("\n relative error: \n") print( err <- (calc.h - anal.h) /(anal.h+1e-4)) 
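# (anal.h has zero off-diagonal entries here, so 1e-4 is added to the
#  denominator above to keep the relative error finite where the exact
#  value is zero)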
cat("\n max relative error: ", err <- max(abs(err)),"\n") # above is 901.4512 with d=0.0001 cat("\n error: \n") print( err <- calc.h - anal.h) cat("\n max error: ", err <- max(abs(err)),"\n") # above is 0.1670381 with d=0.0001 if( err > 1e-5) stop("hessian test 7 FAILED") print(calc.j <- jacobian(funD1e, x) ) cat("error: ", err <- max(abs(calc.j - anal.h)),"\n") if( err > 1e-5) stop("hessian test 8 FAILED") # 1e-6 with d=0.01 numDeriv/tests/jacobian01.R0000644000175100001440000000250112267353530015261 0ustar hornikusers# check jacobian if(!require("numDeriv"))stop("this test requires numDeriv.") x <- pi print(j.calc <- jacobian(sin, x)) cat("error: ", err <- max(abs(j.calc - cos(x))),"\n") if( err > 1e-11) stop("jacobian matrix test 1 FAILED") # 1e-13 with d=0.01 x <- (1:2)*2*pi/2 print(j.calc <- jacobian(sin, x)) cat("error: ", err <- max(abs(j.calc - diag(cos(x)))),"\n") if( err > 1e-11) stop("jacobian matrix test 2 FAILED") # 1e-13 with d=0.01 func2 <- function(x) c(sin(x), cos(x)) x <- (1:2)*2*pi/2 print(j.calc <- jacobian(func2, x)) cat("error: ", err <- max(abs(j.calc - rbind(diag(cos(x)), diag(-sin(x))))),"\n") if( err > 1e-11) stop("jacobian matrix test 3 FAILED") # 1e-13 with d=0.01 x <- (0:1)*2*pi print(j.calc <- jacobian(func2, x)) cat("error: ", err <- max(abs(j.calc - rbind(diag(cos(x)), diag(-sin(x))))),"\n") if( err > 1e-11) stop("jacobian matrix test 4 FAILED") # 1e-13 with d=0.01 x <- (0:10)*2*pi/10 print(j.calc <- jacobian(func2, x)) cat("error: ", err <- max(abs(j.calc - rbind(diag(cos(x)), diag(-sin(x))))),"\n") if( err > 1e-10) stop("jacobian matrix test 5 FAILED")# 1e-12 with d=0.01 func3 <- function(x) sum(sin(x)) # R^n -> R x <- (1:2)*2*pi/2 print(j.calc <- jacobian(func3, x)) cat("error: ", err <- max(abs(j.calc - cos(x))),"\n") if( err > 1e-11) stop("jacobian matrix test 6 FAILED")# 1e-13 with d=0.01 numDeriv/tests/oneSided.R0000644000175100001440000001155612267353530015116 0ustar hornikusers# test one-sided derivatives library(numDeriv) fuzz <- 1e-8 ##### scalar argument, scalar result (case 1)##### f <- function(x) if(x<=0) sin(x) else NA ################################################## ## grad err <- 1.0 - grad(f, x=0, method="simple", side=-1) if( fuzz < err ) stop("grad case 1 method simple one-sided test 1 failed.") if( ! is.na(grad(f, x=0, method="simple", side=1))) stop("grad case 1 method simple one-sided test 2 failed.") err <- 1.0 - grad(f, x=0, method="Richardson", side=-1) if( fuzz < err ) stop("grad case 1 method Richardson one-sided test 1 failed.") # print(grad(sin, x=-0.5, method="Richardson") , digits=16) # 0.8775825618862814 # print(grad(sin, x=-0.5, method="Richardson", side=-1), digits=16) # 0.8775807270501326 err <- 0.8775807270501326 - grad(sin, x=-0.5, method="Richardson", side=-1) if( fuzz < err ) stop("grad case 1 method Richardson one-sided test 2 failed.") ## jacobian err <- 1.0 - jacobian(f, x=0, method="simple", side= -1) if( fuzz < err ) stop("jacobian case 1 method simple one-sided test failed.") err <- 1.0 - jacobian(f, x=0, method="Richardson", side= -1) if( fuzz < err ) stop("jacobian case 1 method Richardson one-sided test 1 failed.") if( ! 
is.na(jacobian(f, x=0, method="Richardson", side= 1))) stop("jacobian case 1 method Richardson one-sided test 2 failed.") ##### vector argument, vector result (case 3)##### f <- function(x) if(x[1]<=0) sin(x) else c(NA, sin(x[-1])) ################################################## ## grad err <- 1.0 - grad(f, x=c(0,0), method="simple", side=c(-1, -1)) # 1 1 if( fuzz < max(err) ) stop("grad case 3 method simple one-sided test 1 failed.") err <- 1.0 - grad(f, x=c(0,0), method="simple", side=c(-1, 1)) # 1 1 if( fuzz < max(err) ) stop("grad case 3 method simple one-sided test 2 failed.") err <- 1.0 - grad(f, x=c(0,0), method="simple", side=c(-1, NA)) # 1 1 if( fuzz < max(err) ) stop("grad case 3 method simple one-sided test 3 failed.") err <- 1.0 - grad(f, x=c(0,0), method="simple", side=c( 1, 1)) # NA 1 if( fuzz < err[2] ) stop("grad case 3 method simple one-sided test 4 failed.") if(!is.na( err[1]) ) stop("grad case 3 method simple one-sided test 4b failed.") err <- 1.0 - grad(f, x=c(0,0), method="Richardson", side=c(-1, -1)) # 1 1 if( fuzz < max(err) ) stop("grad case 3 method Richardson one-sided test 1 failed.") err <- 1.0 - grad(f, x=c(0,0), method="Richardson", side=c(-1, 1)) # 1 1 if( fuzz < max(err) ) stop("grad case 3 method Richardson one-sided test 2 failed.") err <- 1.0 - grad(f, x=c(0,0), method="Richardson", side=c(-1, NA)) # 1 1 if( fuzz < max(err) ) stop("grad case 3 method Richardson one-sided test 3 failed.") ## jacobian err <- 1.0 - jacobian(f, x=0, method="simple", side= -1) if( fuzz < err ) stop("jacobian case 3 method simple one-sided test failed.") err <- 1.0 - jacobian(f, x=0, method="Richardson", side= -1) if( fuzz < err ) stop("jacobian case 3 method Richardson one-sided test 1 failed.") if( ! is.na(jacobian(f, x=0, method="Richardson", side= 1))) stop("jacobian case 3 method Richardson one-sided test 2 failed.") ##### vector argument, scalar result (case 2)##### f <- function(x) if(x[1]<=0) sum(sin(x)) else NA ################################################## ## grad err <- 1.0 - grad(f, x=c(0,0), method="simple", side=c(-1, -1)) # 1 1 if( fuzz < max(err) ) stop("grad case 2 method simple one-sided test 1 failed.") err <- 1.0 - grad(f, x=c(0,0), method="simple", side=c(-1, 1)) # 1 1 if( fuzz < max(err) ) stop("grad case 2 method simple one-sided test 2 failed.") err <- 1.0 - grad(f, x=c(0,0), method="simple", side=c(-1, NA)) # 1 1 if( fuzz < max(err) ) stop("grad case 2 method simple one-sided test 3 failed.") err <- 1.0 - grad(f, x=c(0,0), method="simple", side=c( 1, 1)) # NA 1 if( fuzz < err[2] ) stop("grad case 2 method simple one-sided test 4 failed.") if(!is.na( err[1]) ) stop("grad case 2 method simple one-sided test 4b failed.") err <- 1.0 - grad(f, x=c(0,0), method="Richardson", side=c(-1, -1)) # 1 1 if( fuzz < max(err) ) stop("grad case 2 method Richardson one-sided test 1 failed.") err <- 1.0 - grad(f, x=c(0,0), method="Richardson", side=c(-1, 1)) # 1 1 if( fuzz < max(err) ) stop("grad case 2 method Richardson one-sided test 2 failed.") err <- 1.0 - grad(f, x=c(0,0), method="Richardson", side=c(-1, NA)) # 1 1 if( fuzz < max(err) ) stop("grad case 2 method Richardson one-sided test 3 failed.") ## jacobian err <- 1.0 - jacobian(f, x=0, method="simple", side= -1) if( fuzz < err ) stop("jacobian case 2 method simple one-sided test failed.") err <- 1.0 - jacobian(f, x=0, method="Richardson", side= -1) if( fuzz < err ) stop("jacobian case 2 method Richardson one-sided test 1 failed.") if( ! 
 is.na(jacobian(f, x=0, method="Richardson", side= 1)))
      stop("jacobian case 2 method Richardson one-sided test 2 failed.")
numDeriv/NAMESPACE0000644000175100001440000000027512267353530013312 0ustar hornikusersexport("grad")
S3method("grad", "default")

export("jacobian")
S3method("jacobian", "default")

export("hessian")
S3method("hessian", "default")

export("genD")
S3method("genD", "default")
numDeriv/NEWS0000644000175100001440000000555412756431406012575 0ustar hornikusersKnown BUGS
	o the hessian function in numDeriv does not accept method="simple".
	o When method="Richardson", it does not work when r=1, because of
	  subscripting issues. Should fix this such that it does a central
	  difference approximation, without any extrapolation.

Changes in numDeriv version 2016.8-1
	o simplification of hessian.default() call to jacobian() and grad()
	  in the case of method 'complex' (pointed out by Andreas Rappold).
	o added argument 'side=NULL' in the hessian.default() call to
	  jacobian() and grad() in the case of method 'complex' to ensure
	  proper passing of ... arguments to the function for which the
	  hessian is being calculated (pointed out by Andreas Rappold).

Changes in numDeriv version 2014.2-1
	o added argument 'side' to allow one-sided first derivatives (grad
	  and jacobian) for simple and Richardson methods.
	o minor documentation improvements.

Changes in numDeriv version 2013.2-1
	o updated R dependency from 1.8.1 to 2.11.1 because of complex step
	  derivative dependency on a fix to exponentiation with integers
	  (pointed out by Hans W. Borchers).
	o added flag in DESCRIPTION to ByteCompile.

Changes in numDeriv version 2012.9-1
	o added complex step derivatives (from Ravi Varadhan) and related
	  tests.
	o changed method.args to an empty list in the default methods, as the
	  real defaults depend on the approximation, and are documented in
	  details.

Changes in numDeriv version 2012.3-1
	o no real changes, but bumping version for new CRAN suitability check.

Changes in numDeriv version 2011.11-2
	o fixed genD documentation error for denominator in f" (d^2 rather
	  than 2*d, noticed by Yilun Wang)

Changes in numDeriv version 2011.11-1
	o updated maintainer email address.

Changes in numDeriv version 2010.11-1
	o Added warning in the documentation regarding trying to pass
	  arguments in ... with the same names as numDeriv function arguments.

Changes in numDeriv version 2010.2-1
	o Added more graceful failure in the case of NA returned by a
	  function (thanks to Adam Kramer).

Changes in numDeriv version 2009.2-2
	o Standardized NEWS format for new function news().

Changes in numDeriv version 2009.2-1
	o argument zero.tol was added to grad, jacobian and genD, and is used
	  to test if parameters are zero in order to determine if eps should
	  be used in place of d. Previous tests using == did not work for
	  very small values.
	o default argument d to grad was 0.0001, but the specification made it
	  appear to be 0.1. The specification was changed to make the default
	  clear.
	o unnecessary hessian.default argument settings were removed (they are
	  just passed to genD, which duplicated the settings).
	o Some documentation links to [stats]numericDeriv mistakenly called
	  numericalDeriv were fixed.

Changes in numDeriv version 2006.4-1
	o First released version.
numDeriv/R/0000755000175100001440000000000012647267113012273 5ustar hornikusersnumDeriv/R/numDeriv.R0000644000175100001440000002047612267353530014205 0ustar hornikusers
#  grad case 1 and 2 are special cases of jacobian, with a scalar rather than
#  vector valued function.
#  Case 3 differs only because of the interpretation that the vector result is
#  a scalar function applied to each argument, and thus the result has the
#  same length as the argument. The code of grad could be consolidated to use
#  jacobian. There is also some duplication in genD.

############################################################################
#   functions for gradient calculation
############################################################################

grad <- function (func, x, method="Richardson", side=NULL,
      method.args=list(), ...) UseMethod("grad")

grad.default <- function(func, x, method="Richardson", side=NULL,
      method.args=list(), ...){
   # modified by Paul Gilbert from code by Xingqiao Liu.
   # case 1/ scalar arg, scalar result (case 2/ or 3/ code should work)
   # case 2/ vector arg, scalar result (same as special case jacobian)
   # case 3/ vector arg, vector result (of same length, really 1/ applied
   #         multiple times)
   f <- func(x, ...)
   n <- length(x)  # number of variables in argument
   if (is.null(side)) side <- rep(NA, n)
   else {
       if(n != length(side)) stop("Non-NULL argument 'side' should have the same length as x")
       if(any(1 != abs(side[!is.na(side)]))) stop("Non-NULL argument 'side' should have values NA, +1, or -1.")
       }
   case1or3 <- n == length(f)
   if((1 != length(f)) & !case1or3)
      stop("grad assumes a scalar valued function.")
   if(method=="simple"){
      # very simple numerical approximation
      args <- list(eps=1e-4) # default
      args[names(method.args)] <- method.args
      side[is.na(side)] <- 1
      eps <- rep(args$eps, n) * side
      if(case1or3) return((func(x+eps, ...)-f)/eps)
      # now case 2
      df <- rep(NA,n)
      for (i in 1:n) {
         dx <- x
         dx[i] <- dx[i] + eps[i]
         df[i] <- (func(dx, ...) - f)/eps[i]
         }
      return(df)
      }
   else if(method=="complex"){ # Complex step gradient
      if (any(!is.na(side))) stop("method 'complex' does not support non-NULL argument 'side'.")
      eps <- .Machine$double.eps
      v <- try(func(x + eps * 1i, ...))
      if(inherits(v, "try-error"))
         stop("function does not accept complex argument as required by method 'complex'.")
      if(!is.complex(v))
         stop("function does not return a complex value as required by method 'complex'.")
      if(case1or3) return(Im(v)/eps)
      # now case 2
      h0 <- rep(0, n)
      g  <- rep(NA, n)
      for (i in 1:n) {
         h0[i] <- eps * 1i
         g[i] <- Im(func(x+h0, ...))/eps
         h0[i] <- 0
         }
      return(g)
      }
   else if(method=="Richardson"){
      args <- list(eps=1e-4, d=0.0001, zero.tol=sqrt(.Machine$double.eps/7e-7),
                   r=4, v=2, show.details=FALSE) # default
      args[names(method.args)] <- method.args
      d <- args$d
      r <- args$r
      v <- args$v
      show.details <- args$show.details
      a <- matrix(NA, r, n)
      #b <- matrix(NA, (r - 1), n)

      # first order derivatives are stored in the matrix a[k,i],
      # where the indexing variables k for rows(1 to r), i for columns (1 to n),
      # r is the number of iterations, and n is the number of variables.

      h <- abs(d*x) + args$eps * (abs(x) < args$zero.tol)
      pna <- (side ==  1) & !is.na(side) # double these on plus side
      mna <- (side == -1) & !is.na(side) # double these on minus side
      for(k in 1:r) { # successively reduce h
         ph <- mh <- h
         ph[pna] <- 2 * ph[pna]
         ph[mna] <- 0
         mh[mna] <- 2 * mh[mna]
         mh[pna] <- 0
         if(case1or3) a[k,] <- (func(x + ph, ...) - func(x - mh, ...))/(2*h)
         else for(i in 1:n) {
            if((k != 1) && (abs(a[(k-1),i]) < 1e-20)) a[k,i] <- 0 #some func are unstable near zero
            else a[k,i] <- (func(x + ph*(i==seq(n)), ...) -
                            func(x - mh*(i==seq(n)), ...))/(2*h[i])
            }
         if (any(is.na(a[k,]))) stop("function returns NA at ", h," distance from x.")
         h <- h/v # Reduced h by 1/v.
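         # each pass shrinks h by the factor v (2 by default), so the rows of
         # 'a' hold central-difference estimates at steps h, h/2, ...,
         # h/2^(r-1), which are combined below by Richardson extrapolation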
         }
      if(show.details) {
         cat("\n","first order approximations", "\n")
         print(a, 12)
         }

      #------------------------------------------------------------------------
      # 1 Applying Richardson Extrapolation to improve the accuracy of
      #   the first and second order derivatives. The algorithm as follows:
      #
      #   -- For each column of the derivative matrix a,
      #      say, A1, A2, ..., Ar, by Richardson Extrapolation, to calculate a
      #      new sequence of approximations B1, B2, ..., Br used the formula
      #
      #         B(i) =( A(i+1)*4^m - A(i) ) / (4^m - 1) ,  i=1,2,...,r-m
      #
      #            N.B. This formula assumes v=2.
      #
      #   -- Initially m is taken as 1 and then the process is repeated
      #      restarting with the latest improved values and increasing the
      #      value of m by one each until m equals r-1
      #
      # 2 Display the improved derivatives for each
      #   m from 1 to r-1 if the argument show.details=T.
      #
      # 3 Return the final improved derivative vector.
      #-------------------------------------------------------------------------

      for(m in 1:(r - 1)) {
         a <- (a[2:(r+1-m),,drop=FALSE]*(4^m)-a[1:(r-m),,drop=FALSE])/(4^m-1)
         if(show.details & m!=(r-1) ) {
            cat("\n","Richardson improvement group No. ", m, "\n")
            print(a[1:(r-m),,drop=FALSE], 12)
            }
         }
      return(c(a))
      }
   else stop("indicated method ", method, "not supported.")
}

jacobian <- function (func, x, method="Richardson", side=NULL,
    method.args=list(), ...) UseMethod("jacobian")

jacobian.default <- function(func, x, method="Richardson", side=NULL,
    method.args=list(), ...){
   f <- func(x, ...)
   n <- length(x)  #number of variables.
   if (is.null(side)) side <- rep(NA, n)
   else {
       if(n != length(side)) stop("Non-NULL argument 'side' should have the same length as x")
       if(any(1 != abs(side[!is.na(side)]))) stop("Non-NULL argument 'side' should have values NA, +1, or -1.")
       }
   if(method=="simple"){
      # very simple numerical approximation
      args <- list(eps=1e-4) # default
      args[names(method.args)] <- method.args
      side[is.na(side)] <- 1
      eps <- rep(args$eps, n) * side
      df <-matrix(NA, length(f), n)
      for (i in 1:n) {
         dx <- x
         dx[i] <- dx[i] + eps[i]
         df[,i] <- (func(dx, ...) - f)/eps[i]
         }
      return(df)
      }
   else if(method=="complex"){ # Complex step gradient
      if (any(!is.na(side))) stop("method 'complex' does not support non-NULL argument 'side'.")
      # Complex step Jacobian
      eps <- .Machine$double.eps
      h0 <- rep(0, n)
      h0[1] <- eps * 1i
      v <- try(func(x+h0, ...))
      if(inherits(v, "try-error"))
         stop("function does not accept complex argument as required by method 'complex'.")
      if(!is.complex(v))
         stop("function does not return a complex value as required by method 'complex'.")
      h0[1] <- 0
      jac <- matrix(NA, length(v), n)
      jac[, 1] <- Im(v)/eps
      if (n == 1) return(jac)
      for (i in 2:n) {
         h0[i] <- eps * 1i
         jac[, i] <- Im(func(x+h0, ...))/eps
         h0[i] <- 0
         }
      return(jac)
      }
   else if(method=="Richardson"){
      args <- list(eps=1e-4, d=0.0001,
         zero.tol=sqrt(.Machine$double.eps/7e-7), r=4, v=2,
         show.details=FALSE) # default
      args[names(method.args)] <- method.args
      d <- args$d
      r <- args$r
      v <- args$v
      a <- array(NA, c(length(f),r, n) )
      h <- abs(d*x) + args$eps * (abs(x) < args$zero.tol)
      pna <- (side ==  1) & !is.na(side) # double these on plus side
      mna <- (side == -1) & !is.na(side) # double these on minus side
      for(k in 1:r) { # successively reduce h
         ph <- mh <- h
         ph[pna] <- 2 * ph[pna]
         ph[mna] <- 0
         mh[mna] <- 2 * mh[mna]
         mh[pna] <- 0
         for(i in 1:n) {
            a[,k,i] <- (func(x + ph*(i==seq(n)), ...) -
                        func(x - mh*(i==seq(n)), ...))/(2*h[i])
            #if((k != 1)) a[,(abs(a[,(k-1),i]) < 1e-20)] <- 0 #some func are unstable near zero
            }
         h <- h/v # Reduced h by 1/v.
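         # as in grad(), h shrinks by 1/v each pass; the r Jacobian estimates
         # collected in 'a' are extrapolated below (the 4^m weights assume v=2)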
} for(m in 1:(r - 1)) { a <- (a[,2:(r+1-m),,drop=FALSE]*(4^m)-a[,1:(r-m),,drop=FALSE])/(4^m-1) } # drop second dim of a, which is now 1 (but not other dim's even if they are 1 return(array(a, dim(a)[c(1,3)])) } else stop("indicated method ", method, "not supported.") } numDeriv/R/num2Deriv.R0000644000175100001440000001154712647267113014301 0ustar hornikusers hessian <- function (func, x, method="Richardson", method.args=list(), ...) UseMethod("hessian") hessian.default <- function(func, x, method="Richardson", method.args=list(), ...){ if(1!=length(func(x, ...))) stop("Richardson method for hessian assumes a scalar valued function.") if(method=="complex"){ # Complex step hessian args <- list(eps=1e-4, d=0.1, zero.tol=sqrt(.Machine$double.eps/7e-7), r=4, v=2) args[names(method.args)] <- method.args # the CSD part of this uses eps=.Machine$double.eps # but the jacobian is Richardson and uses method.args fn <- function(x, ...){ grad(func=func, x=x, method="complex", side=NULL, method.args=list(eps=.Machine$double.eps), ...) } return(jacobian(func=fn, x=x, method="Richardson", side=NULL, method.args=args, ...)) } else if(method != "Richardson") stop("method not implemented.") args <- list(eps=1e-4, d=0.1, zero.tol=sqrt(.Machine$double.eps/7e-7), r=4, v=2, show.details=FALSE) # default args[names(method.args)] <- method.args D <- genD(func, x, method=method, method.args=args, ...)$D if(1!=nrow(D)) stop("BUG! should not get here.") H <- diag(NA,length(x)) u <- length(x) for(i in 1:length(x)) for(j in 1:i){ u <- u + 1 H[i,j] <- D[,u] } H <- H + t(H) diag(H) <- diag(H)/2 H } ####################################################################### # Bates & Watts D matrix calculation ####################################################################### genD <- function(func, x, method="Richardson", method.args=list(), ...)UseMethod("genD") genD.default <- function(func, x, method="Richardson", method.args=list(), ...){ # additional cleanup by Paul Gilbert (March, 2006) # modified substantially by Paul Gilbert (May, 1992) # from original code by Xingqiao Liu, May, 1991. # This function is not optimized for S speed, but is organized in # the same way it could be (was) implemented in C, to facilitate checking. # v reduction factor for Richardson iterations. This could # be a parameter but the way the formula is coded it is assumed to be 2. if(method != "Richardson") stop("method not implemented.") args <- list(eps=1e-4, d=0.0001, zero.tol=sqrt(.Machine$double.eps/7e-7), r=4, v=2) # default args[names(method.args)] <- method.args d <- args$d r <- args$r v <- args$v if (v!=2) stop("The current code assumes v is 2 (the default).") f0 <- func(x, ...) #f0 is the value of the function at x. n <- length(x) # number of parameters (theta) h0 <- abs(d*x) + args$eps * (abs(x) < args$zero.tol) D <- matrix(0, length(f0),(n*(n + 3))/2) #length(f0) is the dim of the sample space #(n*(n + 3))/2 is the number of columns of matrix D.( first # der. & lower triangle of Hessian) Daprox <- matrix(0,length(f0),r) Hdiag <- matrix(0,length(f0),n) Haprox <- matrix(0,length(f0),r) for(i in 1:n){ # each parameter - first deriv. & hessian diagonal h <-h0 for(k in 1:r){ # successively reduce h f1 <- func(x+(i==(1:n))*h, ...) f2 <- func(x-(i==(1:n))*h, ...) #f1 <- do.call("func",append(list(x+(i==(1:n))*h), func.args)) #f2 <- do.call("func",append(list(x-(i==(1:n))*h), func.args)) Daprox[,k] <- (f1 - f2) / (2*h[i]) # F'(i) Haprox[,k] <- (f1-2*f0+f2)/ h[i]^2 # F''(i,i) hessian diagonal h <- h/v # Reduced h by 1/v. 
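            # h again shrinks by 1/v each pass; Daprox and Haprox collect the
            # r estimates of F'(i) and F''(i,i) that are extrapolated just
            # below, exactly as in grad() and jacobian()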
      }
      for(m in 1:(r - 1))
         for(k in 1:(r-m)){
            Daprox[,k] <- (Daprox[,k+1]*(4^m)-Daprox[,k])/(4^m-1)
            Haprox[,k] <- (Haprox[,k+1]*(4^m)-Haprox[,k])/(4^m-1)
         }
      D[,i] <- Daprox[,1]
      Hdiag[,i] <- Haprox[,1]
   }
   u <- n
   for(i in 1:n){ # 2nd derivative - do lower half of hessian only
      for(j in 1:i){
         u <- u + 1
         if (i==j) D[,u] <- Hdiag[,i]
         else {
            h <- h0
            for(k in 1:r){ # successively reduce h
               f1 <- func(x+(i==(1:n))*h + (j==(1:n))*h, ...)
               f2 <- func(x-(i==(1:n))*h - (j==(1:n))*h, ...)
               Daprox[,k] <- (f1 - 2*f0 + f2 -
                              Hdiag[,i]*h[i]^2 -
                              Hdiag[,j]*h[j]^2) / (2*h[i]*h[j])   # F''(i,j)
               h <- h/v # reduce h by 1/v
            }
            for(m in 1:(r - 1))
               for(k in 1:(r-m))
                  Daprox[,k] <- (Daprox[,k+1]*(4^m)-Daprox[,k])/(4^m-1)
            D[,u] <- Daprox[,1]
         }
      }
   }
   D <- list(D=D, p=length(x), f0=f0, func=func, x=x, d=d,
             method=method, method.args=args)  # Darray constructor (genD.default)
   class(D) <- "Darray"
   invisible(D)
}
numDeriv/vignettes/0000755000175100001440000000000013475450114014075 5ustar hornikusersnumDeriv/vignettes/Guide.Stex0000644000175100001440000000307712267353530016004 0ustar hornikusers
\documentclass[english]{article}
\begin{document}

%\VignetteIndexEntry{numDeriv Guide}
\SweaveOpts{eval=TRUE,echo=TRUE,results=hide,fig=FALSE}

\begin{Scode}{echo=FALSE,results=hide}
 options(continue="  ")
\end{Scode}

\section{Functions to calculate Numerical Derivatives and Hessian Matrix}

In R, the functions in this package are made available with

\begin{Scode}
library("numDeriv")
\end{Scode}

The code from the vignette that generates this guide can be loaded into an
editor with \emph{edit(vignette("Guide", package="numDeriv"))}. This uses the
default editor, which can be changed using \emph{options()}.

Here are some examples of grad.
\begin{Scode}
grad(sin, pi)
grad(sin, (0:10)*2*pi/10)
func0 <- function(x){ sum(sin(x)) }
grad(func0, (0:10)*2*pi/10)

func1 <- function(x){ sin(10*x) - exp(-x) }
curve(func1, from=0, to=5)

x <- 2.04
numd1 <- grad(func1, x)
exact <- 10*cos(10*x) + exp(-x)
c(numd1, exact, (numd1 - exact)/exact)

x <- c(1:10)
numd1 <- grad(func1, x)
exact <- 10*cos(10*x) + exp(-x)
cbind(numd1, exact, (numd1 - exact)/exact)
\end{Scode}

Here are some examples of jacobian.
\begin{Scode}
func2 <- function(x) c(sin(x), cos(x))
x <- (0:1)*2*pi
jacobian(func2, x)
\end{Scode}

Here are some examples of hessian.
\begin{Scode}
x <- 0.25 * pi
hessian(sin, x)

fun1e <- function(x) sum(exp(2*x))
x <- c(1, 3, 5)
hessian(fun1e, x, method.args=list(d=0.01))
\end{Scode}

Here are some examples of genD.
\begin{Scode}
func <- function(x){c(x[1], x[1], x[2]^2)}
z <- genD(func, c(2,2,5))
z
\end{Scode}

\end{document}
numDeriv/MD50000644000175100001440000000233213476161015012375 0ustar hornikusers
022eadf593c93a5174c7d2f0fd9a592a *DESCRIPTION
68a5918eb427271dd79f07d62fce33a7 *NAMESPACE
57f0f954f57be55d5ebca2eaa03d6894 *NEWS
30bc245893abd0960083733df3998cd1 *R/num2Deriv.R
7671b0b77de2d0d960b5382c8aa561fe *R/numDeriv.R
af67430eeb55f14116484cf8f14078a0 *build/vignette.rds
c216cb58cd7989e4db1a31e42f9976e9 *inst/doc/Guide.R
65ba7a8040ac6c9475a6ec23e9c5e3cd *inst/doc/Guide.Stex
5edf223d46876fa732fd47671492ecd8 *inst/doc/Guide.pdf
6f429cff9fd52e47bf6ff11bb78de727 *man/00.numDeriv.Intro.Rd
4a081c87cbf11c80b9f60ef0becd4056 *man/genD.Rd
b3d62c76fb5efccb959f0687b7ad378c *man/grad.Rd
7674625ef48ddcd2f6d09970a5ce8b33 *man/hessian.Rd
914475976afb72baec0c4c2c737a9d05 *man/jacobian.Rd
d8a40f6fcb06290212e91b33d3187719 *man/numDeriv-package.Rd
9fcb6106c63385edde30838d7c2e81c6 *po/R-ko.po
c3cd86f8240c28ae85d704f4fe0f5067 *po/R-numDeriv.pot
338ece2354dd67caa573d43821ffe28c *tests/BWeg.R
d9b261b7989677a5ba6491a3fcbf7945 *tests/CSD.R
1c14632bb7692efc750c890044afe3b7 *tests/grad01.R
6932a6ef2283f55bd98fd58d44440bf2 *tests/hessian01.R
6280a2be34665543907c6e84d4490fd7 *tests/jacobian01.R
0e55f5a3fc320980bb941f30507e1816 *tests/oneSided.R
0d4b05477704df019b1e3f5a0810fde0 *tests/trig01.R
65ba7a8040ac6c9475a6ec23e9c5e3cd *vignettes/Guide.Stex
numDeriv/build/0000755000175100001440000000000013475450114013164 5ustar hornikusersnumDeriv/build/vignette.rds0000644000175100001440000000031113475450114015516 0ustar hornikusers
numDeriv/DESCRIPTION0000644000175100001440000000166613476161015013601 0ustar hornikusers
Package: numDeriv
Version: 2016.8-1.1
Title: Accurate Numerical Derivatives
Description: Methods for calculating (usually) accurate numerical first and
   second order derivatives. Accurate calculations are done using
   'Richardson''s' extrapolation or, when applicable, a complex step
   derivative is available. A simple difference method is also provided.
   Simple difference is (usually) less accurate but is much quicker than
   'Richardson''s' extrapolation and provides a useful cross-check.
   Methods are provided for real scalar and vector valued functions.
Depends: R (>= 2.11.1)
LazyLoad: yes
ByteCompile: yes
License: GPL-2
Copyright: 2006-2011, Bank of Canada. 2012-2016, Paul Gilbert
Author: Paul Gilbert and Ravi Varadhan
Maintainer: Paul Gilbert
URL: http://optimizer.r-forge.r-project.org/
NeedsCompilation: no
Packaged: 2019-06-04 11:04:44 UTC; hornik
Repository: CRAN
Date/Publication: 2019-06-06 09:51:09 UTC
numDeriv/man/0000755000175100001440000000000012756430565012645 5ustar hornikusersnumDeriv/man/genD.Rd0000644000175100001440000001106712276527601014015 0ustar hornikusers
\name{genD}
\alias{genD}
\alias{genD.default}
\title{Generate Bates and Watts D Matrix}
\description{Generate a matrix of function derivative information.}
\usage{
    genD(func, x, method="Richardson",
         method.args=list(), ...)
    \method{genD}{default}(func, x, method="Richardson",
        method.args=list(), ...)
}
\arguments{
    \item{func}{a function for which the first (vector) argument
        is used as a parameter vector.}
    \item{x}{the parameter vector first argument to \code{func}.}
    \item{method}{one of \code{"Richardson"} or \code{"simple"} indicating
       the method to use for the approximation.}
    \item{method.args}{arguments passed to method.  See \code{\link{grad}}.
       (Arguments not specified remain with their default values.)}
    \item{...}{any additional arguments passed to \code{func}.
        WARNING: None of these should have names matching other arguments of this function.}
}
\value{
A list with elements as follows:
   \code{D} is a matrix of first and second order partial
   derivatives organized in the same manner as Bates and
   Watts, the number of rows is equal to the length of the result of
   \code{func}, the first p columns are the Jacobian, and the
   next p(p+1)/2 columns are the lower triangle of the second derivative
   (which is the Hessian for a scalar valued \code{func}).
   \code{p} is the length of \code{x} (dimension of the parameter space).
   \code{f0} is the function value at the point where the matrix \code{D}
   was calculated.
   The \code{genD} arguments \code{func}, \code{x}, \code{d}, \code{method},
   and \code{method.args} are also returned in the list.
}
\details{
   The derivatives are calculated numerically using Richardson improvement.
   Methods "simple" and "complex" are not supported in this function.
   The "Richardson" method calculates a numerical approximation of the first
   and second derivatives of \code{func} at the point \code{x}.
   For a scalar valued function these are the gradient vector and
   Hessian matrix. (See \code{\link{grad}} and \code{\link{hessian}}.)
   For a vector valued function the first derivative is the Jacobian matrix
   (see \code{\link{jacobian}}).
   For the Richardson method
   \code{method.args=list(eps=1e-4, d=0.0001, zero.tol=sqrt(.Machine$double.eps/7e-7), r=4, v=2)}
   is set as the default. See \code{\link{grad}} for more details on the
   Richardson's extrapolation parameters.

   A simple approximation to the first order derivative with respect
   to \eqn{x_i}{x_i} is

\deqn{f'_{i}(x) = <f(x_{1},\dots,x_{i}+d,\dots,x_{n}) -
             f(x_{1},\dots,x_{i}-d,\dots,x_{n})>/(2*d)}{%
      f'_{i}(x) = <f(x_{1},\dots,x_{i}+d,\dots,x_{n}) -
             f(x_{1},\dots,x_{i}-d,\dots,x_{n})>/(2*d)}

   A simple approximation to the second order derivative with respect
   to \eqn{x_i}{x_i} is

\deqn{f''_{i}(x) = <f(x_{1},\dots,x_{i}+d,\dots,x_{n}) -
             2*f(x_{1},\dots,x_{n}) +
             f(x_{1},\dots,x_{i}-d,\dots,x_{n})>/(d^2)}{%
      f''_{i}(x) = <f(x_{1},\dots,x_{i}+d,\dots,x_{n}) -
             2*f(x_{1},\dots,x_{n}) +
             f(x_{1},\dots,x_{i}-d,\dots,x_{n})>/(d^2)}

   The second order derivative with respect to \eqn{x_i, x_j}{x_i, x_j} is

\deqn{f''_{i,j}(x) = <f(x_{1},\dots,x_{i}+d,\dots,x_{j}+d,\dots,x_{n}) -
             2*f(x_{1},\dots,x_{n}) +
             f(x_{1},\dots,x_{i}-d,\dots,x_{j}-d,\dots,x_{n})>/(2*d^2) -
             (f''_{i}(x) + f''_{j}(x))/2}{%
      f''_{i,j}(x) = <f(x_{1},\dots,x_{i}+d,\dots,x_{j}+d,\dots,x_{n}) -
             2*f(x_{1},\dots,x_{n}) +
             f(x_{1},\dots,x_{i}-d,\dots,x_{j}-d,\dots,x_{n})>/(2*d^2) -
             (f''_{i}(x) + f''_{j}(x))/2}

   Richardson's extrapolation is based on these formulas, with the \code{d}
   being reduced in the extrapolation iterations. In the code, \code{d} is
   scaled to accommodate parameters of different magnitudes.

   \code{genD} does \code{1 + r (N^2 + N)} evaluations of the function
   \code{func}, where \code{N} is the length of \code{x}.
}
\references{
   Linfield, G.R. and Penny, J.E.T. (1989) \emph{Microcomputers in Numerical
   Analysis}. Halsted Press.

   Bates, D.M. and Watts, D. (1980) ``Relative Curvature Measures of
   Nonlinearity.'' \emph{J. Royal Statistics Soc. series B}, 42:1-25.

   Bates, D.M. and Watts, D. (1988) \emph{Non-linear Regression Analysis and
   Its Applications}. Wiley.
}
\seealso{
   \code{\link{hessian}},
   \code{\link{grad}}
}
\examples{
   func <- function(x){c(x[1], x[1], x[2]^2)}
   z <- genD(func, c(2,2,5))
}
\keyword{multivariate}
numDeriv/man/numDeriv-package.Rd0000644000175100001440000000252312267353530016315 0ustar hornikusers
\name{numDeriv-package}
\alias{numDeriv-package}
\alias{numDeriv.Intro}
\docType{package}

\title{Accurate Numerical Derivatives}

\description{Calculate (accurate) numerical approximations to derivatives.}

\details{
The main functions are
\preformatted{
grad      to calculate the gradient (first derivative) of a scalar
          real valued function (possibly applied to all elements
          of a vector argument).

jacobian  to calculate the gradient of a real m-vector valued
          function with real n-vector argument.
hessian   to calculate the Hessian (second derivative) of a scalar
          real valued function with real n-vector argument.

genD      to calculate the gradient and second derivative of a real
          m-vector valued function with real n-vector argument.
}
}
\author{Paul Gilbert, based on work by Xingqiao Liu, and Ravi Varadhan
   (who wrote complex-step derivative codes)}
\references{
   Linfield, G. R. and Penny, J. E. T. (1989) \emph{Microcomputers in
   Numerical Analysis}. New York: Halsted Press.

   Fornberg, B. and Sloan, D. M. (1994) ``A review of pseudospectral methods
   for solving partial differential equations.'' \emph{Acta Numerica}, 3,
   203-267.

   Lyness, J. N. and Moler, C. B. (1967) ``Numerical Differentiation of
   Analytic Functions.'' \emph{SIAM Journal for Numerical Analysis}, 4(2),
   202-210.
}
\keyword{package}
numDeriv/man/jacobian.Rd0000644000175100001440000000527212276553006014705 0ustar hornikusers
\name{jacobian}
\alias{jacobian}
\alias{jacobian.default}
\title{Gradient of a Vector Valued Function}
\description{
   Calculate the m by n numerical approximation of the gradient of a real
   m-vector valued function with n-vector argument.
}
\usage{
   jacobian(func, x, method="Richardson", side=NULL,
            method.args=list(), ...)
   \method{jacobian}{default}(func, x, method="Richardson", side=NULL,
            method.args=list(), ...)
}
\arguments{
   \item{func}{a function with a real (vector) result.}
   \item{x}{a real or real vector argument to func, indicating the point
      at which the gradient is to be calculated.}
   \item{method}{one of \code{"Richardson"}, \code{"simple"}, or
      \code{"complex"} indicating the method to use for the approximation.}
   \item{method.args}{arguments passed to method.  See \code{\link{grad}}.
      (Arguments not specified remain with their default values.)}
   \item{...}{any additional arguments passed to \code{func}.
      WARNING: None of these should have names matching other arguments of this function.}
   \item{side}{an indication of whether one-sided derivatives should be
      attempted (see details in function \code{\link{grad}}).}
}
\value{A real m by n matrix.}
\details{
   For \eqn{f:R^n -> R^m}{f:R^n -> R^m} calculate the \eqn{m x n}{m x n}
   Jacobian \eqn{dy/dx}{dy/dx}.
   The function \code{jacobian} calculates a numerical approximation of the
   first derivative of \code{func} at the point \code{x}. Any additional
   arguments in \dots are also passed to \code{func}, but the gradient is not
   calculated with respect to these additional arguments.

   If method is "Richardson", the calculation is done by
   Richardson's extrapolation. See \code{\link{grad}} for more details.
   For this method
   \code{method.args=list(eps=1e-4, d=0.0001, zero.tol=sqrt(.Machine$double.eps/7e-7), r=4, v=2, show.details=FALSE)}
   is set as the default.

   If method is "simple", the calculation is done using a simple epsilon
   difference. For method "simple" \code{method.args=list(eps=1e-4)} is the
   default. Only \code{eps} is used by this method.

   If method is "complex", the calculation is done using the complex step
   derivative approach. See additional comments in \code{\link{grad}}
   before choosing this method.
   For method "complex", \code{method.args} is ignored.
   The algorithm uses an \code{eps} of \code{.Machine$double.eps} which cannot
   (and should not) be modified.
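
   As a sketch of the idea behind method "complex" (equivalent to, though not
   literally, the internal loop of the package code; \code{j} is an assumed
   index in \code{1:length(x)}): column \code{j} of the Jacobian comes from a
   single evaluation with coordinate \code{j} perturbed along the imaginary
   axis,
\preformatted{
   eps <- .Machine$double.eps
   h <- rep(0, length(x))
   h[j] <- eps * 1i          # perturb coordinate j only
   Im(func(x + h)) / eps     # column j of the Jacobian
}
   Because no difference of nearly equal function values is taken, there is
   no subtractive cancellation, which is why the result is accurate to
   machine precision.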
}
\seealso{
   \code{\link{grad}},
   \code{\link{hessian}},
   \code{\link[stats]{numericDeriv}}
}
\examples{
   func2 <- function(x) c(sin(x), cos(x))
   x <- (0:1)*2*pi
   jacobian(func2, x)
   jacobian(func2, x, "complex")
}
\keyword{multivariate}
numDeriv/man/hessian.Rd0000644000175100001440000000666212756430565014574 0ustar hornikusers
\name{hessian}
\alias{hessian}
\alias{hessian.default}
\title{Calculate Hessian Matrix}
\description{Calculate a numerical approximation to the Hessian matrix of a
   function at a parameter value.}
\usage{
   hessian(func, x, method="Richardson", method.args=list(), ...)
   \method{hessian}{default}(func, x, method="Richardson",
       method.args=list(), ...)
}
\arguments{
   \item{func}{a function for which the first (vector) argument
       is used as a parameter vector.}
   \item{x}{the parameter vector first argument to func.}
   \item{method}{one of \code{"Richardson"} or \code{"complex"} indicating
       the method to use for the approximation.}
   \item{method.args}{arguments passed to method.  See \code{\link{grad}}.
       (Arguments not specified remain with their default values.)}
   \item{...}{any additional arguments passed to \code{func}.
       WARNING: None of these should have names matching other arguments of this function.}
}
\value{An n by n matrix of the Hessian of the function calculated at the
   point \code{x}.}
\details{
   The function \code{hessian} calculates a numerical approximation to
   the n x n second derivative of a scalar real valued function with
   n-vector argument.

   The argument \code{method} can be \code{"Richardson"} or \code{"complex"}.
   Method \code{"simple"} is not supported.

   For method \code{"complex"} the Hessian matrix is calculated as the
   Jacobian of the gradient. The function \code{grad} with method "complex"
   is used, and \code{method.args} is ignored for this (an \code{eps} of
   \code{.Machine$double.eps} is used).
   However, \code{jacobian} is used in the second step, with method
   \code{"Richardson"}, and argument \code{method.args} is used for this.
   The default is
   \code{method.args=list(eps=1e-4, d=0.1, zero.tol=sqrt(.Machine$double.eps/7e-7), r=4, v=2, show.details=FALSE)}.
   (These are the defaults for \code{hessian} with method \code{"Richardson"},
   which are slightly different from the defaults for \code{jacobian} with
   method \code{"Richardson"}.)
   See additional comments in \code{\link{grad}} before choosing
   method \code{"complex"}.

   Method \code{"Richardson"} uses \code{\link{genD}} and extracts the
   second derivative. For this method
   \code{method.args=list(eps=1e-4, d=0.1, zero.tol=sqrt(.Machine$double.eps/7e-7), r=4, v=2, show.details=FALSE)}
   is set as the default. \code{hessian} does one evaluation of \code{func} in
   order to do some error checking before calling \code{genD}, so the number
   of function evaluations will be one more than indicated for
   \code{\link{genD}}.

   The argument \code{side} is not supported for second derivatives and since
   \dots are passed to \code{func} there may be no error message if it is
   specified.
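
   For intuition, method \code{"complex"} amounts to composing the two
   exported functions, roughly as in the sketch below (the package wraps this
   internally with the appropriate defaults):
\preformatted{
   g <- function(x, ...) grad(func, x, method="complex", ...)
   H <- jacobian(g, x, method="Richardson")
}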
}
\seealso{
  \code{\link{jacobian}},
  \code{\link{grad}},
  \code{\link{genD}}
}
\examples{
  sc2.f <- function(x){
    n <- length(x)
    sum((1:n) * (exp(x) - x)) / n
  }

  sc2.g <- function(x){
    n <- length(x)
    (1:n) * (exp(x) - 1) / n
  }

  x0 <- rnorm(5)
  hess <- hessian(func=sc2.f, x=x0)
  hessc <- hessian(func=sc2.f, x=x0, "complex")
  all.equal(hess, hessc, tolerance = .Machine$double.eps)

  #  Hessian = Jacobian of the gradient
  jac  <- jacobian(func=sc2.g, x=x0)
  jacc <- jacobian(func=sc2.g, x=x0, "complex")
  all.equal(hess, jac, tolerance = .Machine$double.eps)
  all.equal(hessc, jacc, tolerance = .Machine$double.eps)
}
\keyword{multivariate}
numDeriv/man/00.numDeriv.Intro.Rd0000644000175100001440000000052312267353530016232 0ustar hornikusers
\name{00.numDeriv.Intro}
\alias{00.numDeriv.Intro}
\docType{package}
\title{Accurate Numerical Derivatives}
\description{Calculate (accurate) numerical approximations to derivatives.}
\details{
See \code{\link{numDeriv-package}} (in the help system use
package?numDeriv or ?"numDeriv-package") for an overview.
}
\keyword{package}
numDeriv/man/grad.Rd0000644000175100001440000002177412276553001014054 0ustar hornikusers
\name{grad}
\alias{grad}
\alias{grad.default}
\title{Numerical Gradient of a Function}
\description{Calculate the gradient of a function by numerical approximation.}
\usage{
    grad(func, x, method="Richardson", side=NULL,
         method.args=list(), ...)
    \method{grad}{default}(func, x, method="Richardson", side=NULL,
         method.args=list(), ...)
}
\arguments{
    \item{func}{a function with a scalar real result (see details).}
    \item{x}{a real scalar or vector argument to func, indicating the
      point(s) at which the gradient is to be calculated.}
    \item{method}{one of \code{"Richardson"}, \code{"simple"}, or
      \code{"complex"} indicating the method to use for the approximation.}
    \item{method.args}{arguments passed to method. Arguments not specified
      remain with their default values as specified in the details.}
    \item{side}{an indication of whether one-sided derivatives should be
      attempted (see details).}
    \item{...}{any additional arguments passed to \code{func}.
      WARNING: None of these should have names matching other arguments of this function.}
}
\value{A real scalar or vector of the approximated gradient(s).}
\details{
   The function \code{grad} calculates a numerical approximation of the
   first derivative of \code{func} at the point \code{x}. Any additional
   arguments in \dots are also passed to \code{func}, but the gradient is not
   calculated with respect to these additional arguments.
   It is assumed \code{func} is a scalar valued function. If a vector \code{x}
   produces a scalar result then \code{grad} returns the numerical
   approximation of the gradient at the point \code{x} (which has the same
   length as \code{x}).
   If a vector \code{x} produces a vector result then the result must have
   the same length as \code{x}, and it is assumed that this corresponds to
   applying the function to each of its arguments (for example,
   \code{sin(x)}). In this case \code{grad} returns the gradient at each of
   the points in \code{x} (which also has the same length as \code{x} -- so
   be careful). An alternative for vector valued functions is provided by
   \code{\link{jacobian}}.

   If method is "simple", the calculation is done using a simple epsilon
   difference.
   For method "simple" \code{method.args=list(eps=1e-4)} is the default.
   Only \code{eps} is used by this method.

   If method is "complex", the calculation is done using the complex step
   derivative approach of Lyness and Moler, described in Squire and Trapp.
   This method requires that the function be able to handle complex valued
   arguments and return the appropriate complex valued result, even though
   the user may only be interested in the real-valued derivatives. It also
   requires that the complex function be analytic. (This might be thought of
   as the complex equivalent of the requirement for continuity and smoothness
   of a real valued function.)
   So, while this method is extremely powerful, it is applicable to a very
   restricted class of functions. \emph{Avoid this method if you do not know
   that your function is suitable. Your mistake may not be caught and the
   results will be spurious.}
   For cases where it can be used, it is faster than Richardson's
   extrapolation, and it also provides gradients that are correct to machine
   precision (16 digits).
   For method "complex", \code{method.args} is ignored.
   The algorithm uses an \code{eps} of \code{.Machine$double.eps} which cannot
   (and should not) be modified.

   If method is "Richardson", the calculation is done by Richardson's
   extrapolation (see e.g. Linfield and Penny, 1989, or Fornberg and Sloan,
   1994). This method should be used if accuracy, as opposed to speed, is
   important (but see method "complex" above).
   For this method
   \code{method.args=list(eps=1e-4, d=0.0001, zero.tol=sqrt(.Machine$double.eps/7e-7), r=4, v=2, show.details=FALSE)}
   is set as the default.

   \code{d} gives the fraction of \code{x} to use for the initial numerical
      approximation. The default means the initial approximation uses
      \code{0.0001 * x}.

   \code{eps} is used instead of \code{d} for elements of \code{x} which are
      zero (absolute value less than zero.tol).

   \code{zero.tol} is the tolerance used for deciding which elements of
      \code{x} are zero.

   \code{r} gives the number of Richardson improvement iterations
      (repetitions with successively smaller \code{d}). The default \code{4}
      generally provides good results, but this can be increased to \code{6}
      for improved accuracy at the cost of more evaluations.

   \code{v} gives the reduction factor.

   \code{show.details} is a logical indicating if detailed calculations
      should be shown.

   The general approach in the Richardson method is to iterate for \code{r}
   iterations from an initial interval value \code{d}, reducing the interval
   by the factor \code{v} at each iteration.
   The first order approximation to the derivative with respect to
   \eqn{x_{i}}{x_{i}} is

\deqn{f'_{i}(x) = <f(x_{1},\dots,x_{i}+d,\dots,x_{n}) -
             f(x_{1},\dots,x_{i}-d,\dots,x_{n})>/(2*d)}{%
      f'_{i}(x) = <f(x_{1},\dots,x_{i}+d,\dots,x_{n}) -
             f(x_{1},\dots,x_{i}-d,\dots,x_{n})>/(2*d)}

   This is repeated \code{r} times with successively smaller \code{d}, and
   then Richardson extrapolation is applied.

   If elements of \code{x} are near zero the multiplicative interval
   calculation using \code{d} does not work, and for these elements an
   additive calculation using \code{eps} is done instead. The argument
   \code{zero.tol} is used to determine if an element should be considered
   too close to zero.
   In the iterations, the interval is successively reduced, eventually
   becoming \code{d/v^r}, and the square of this value is used in second
   derivative calculations (see \code{\link{genD}}), so the default
   \code{zero.tol=sqrt(.Machine$double.eps/7e-7)} is set to ensure the
   interval is bigger than \code{.Machine$double.eps} with the default
   \code{d}, \code{r}, and \code{v}.

   If \code{side} is \code{NULL} then it is assumed that the point at which
   the calculation is being done is interior to the domain of the function.
   If the point is on the boundary of the domain then \code{side} can be used
   to indicate which side of the point \code{x} should be used for the
   calculation.
   If not \code{NULL} then \code{side} should be a vector of the same length
   as \code{x} and have values \code{NA}, \code{+1}, or \code{-1}. \code{NA}
   indicates that the usual calculation will be done, while \code{+1} or
   \code{-1} indicate adding or subtracting from the parameter point \code{x}.
   The argument \code{side} is not supported for all methods.

   Since the usual calculation with method "simple" uses only a small
   \code{eps} step to one side, the only effect of argument \code{side} is
   to determine the direction of the step.
   The usual calculation with method "Richardson" is symmetric, using steps
   to both sides. The effect of argument \code{side} is to take a double
   sized step to one side, and no step to the other side.
   This means that the center of the Richardson extrapolation steps is moving
   slightly in the reduction, and is not exactly on the boundary.
   (Warning: I am not aware of theory or published experimental evidence to
   support this, but the results in my limited testing seem good.)
}
\references{
   Linfield, G. R. and Penny, J. E. T. (1989) \emph{Microcomputers in
   Numerical Analysis}. New York: Halsted Press.

   Fornberg, B. and Sloan, D. M. (1994) ``A review of pseudospectral methods
   for solving partial differential equations.'' \emph{Acta Numerica}, 3,
   203-267.

   Lyness, J. N. and Moler, C. B. (1967) ``Numerical Differentiation of
   Analytic Functions.'' \emph{SIAM Journal for Numerical Analysis}, 4(2),
   202-210.

   Squire, William and Trapp, George (1998) ``Using Complex Variables to
   Estimate Derivatives of Real Functions.'' \emph{SIAM Rev}, 40(1), 110-112.
}
\seealso{
  \code{\link{jacobian}},
  \code{\link{hessian}},
  \code{\link{genD}},
  \code{\link[stats]{numericDeriv}}
}
\examples{
  grad(sin, pi)
  grad(sin, (0:10)*2*pi/10)
  func0 <- function(x){ sum(sin(x)) }
  grad(func0, (0:10)*2*pi/10)

  func1 <- function(x){ sin(10*x) - exp(-x) }

  curve(func1, from=0, to=5)

  x <- 2.04
  numd1 <- grad(func1, x)
  exact <- 10*cos(10*x) + exp(-x)
  c(numd1, exact, (numd1 - exact)/exact)

  x <- c(1:10)
  numd1 <- grad(func1, x)
  numd2 <- grad(func1, x, "complex")
  exact <- 10*cos(10*x) + exp(-x)
  cbind(numd1, numd2, exact, (numd1 - exact)/exact, (numd2 - exact)/exact)

  sc2.f <- function(x){
    n <- length(x)
    sum((1:n) * (exp(x) - x)) / n
  }

  sc2.g <- function(x){
    n <- length(x)
    (1:n) * (exp(x) - 1) / n
  }

  x0 <- rnorm(100)
  exact <- sc2.g(x0)

  g <- grad(func=sc2.f, x=x0)
  max(abs(exact - g)/(1 + abs(exact)))

  gc <- grad(func=sc2.f, x=x0, method="complex")
  max(abs(exact - gc)/(1 + abs(exact)))

  f <- function(x) if(x[1]<=0) sum(sin(x)) else NA
  grad(f, x=c(0,0), method="Richardson", side=c(-1, 1))
}
\keyword{multivariate}