Mediana/inst/figures/hexMediana.png

[Binary PNG file: the Mediana package hex sticker image.]

Mediana/inst/figures/logo_MEDIANAINC.png

[Binary PNG file: the Mediana Inc. logo image.]

Mediana/inst/figures/makeSticker.R

font_family = "Ekibastuz"
sysfonts::font_add(family = font_family, regular = "ekibastuz_heavy.otf")

hexSticker::sticker("inst/figures/logo_MEDIANAINC.png",
                    package = "Mediana",
                    s_x = 1, s_y = 0.95, s_width = 0.4, s_height = 0.4,
                    p_color = "white",
                    p_family = font_family,
                    p_size = 40,
                    p_x = 1,
                    p_y = 1.55,
                    h_fill = "#EE3223",
                    h_color = "#AE3927",
                    url = "http://gpaux.github.io/Mediana",
                    u_family = font_family,
                    u_size = 6,
                    u_color = "white",
                    asp = 1,
                    filename = "inst/figures/hexMediana.png",
                    dpi = 600)

Mediana/inst/doc/mediana.html

Mediana: an R package for clinical trial simulations

Gautier Paux and Alex Dmitrienko

2019-05-08

Introduction

About

Mediana is an R package which provides a general framework for clinical trial simulations based on the Clinical Scenario Evaluation approach. The package supports a broad class of data models (including clinical trials with continuous, binary, survival-type and count-type endpoints as well as multivariate outcomes that are based on combinations of different endpoints), analysis strategies and commonly used evaluation criteria.

Expert and development teams

Package design: Alex Dmitrienko (Mediana Inc.).

Core development team: Gautier Paux (Servier), Alex Dmitrienko (Mediana Inc.).

Extended development team: Thomas Brechenmacher (Novartis), Fei Chen (Johnson and Johnson), Ilya Lipkovich (Quintiles), Ming-Dauh Wang (Lilly), Jay Zhang (MedImmune), Haiyan Zheng (Osaka University).

Expert team: Keaven Anderson (Merck), Frank Harrell (Vanderbilt University), Mani Lakshminarayanan (Pfizer), Brian Millen (Lilly), Jose Pinheiro (Johnson and Johnson), Thomas Schmelter (Bayer).

Installation

Latest release

Install the latest version of the Mediana package from CRAN using the install.packages command in R:
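
install.packages("Mediana")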

Alternatively, you can download the package from the CRAN website.

Development version

The up-to-date development version can be found and installed directly from the GitHub web site. You need to install the devtools package and then call the install_github function in R:
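
# The repository name is assumed to be "gpaux/Mediana" (the package
# maintainer's GitHub account, see http://gpaux.github.io/Mediana)
devtools::install_github("gpaux/Mediana")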

Clinical Scenario Evaluation Framework

The Mediana R package was developed to provide a general software implementation of the Clinical Scenario Evaluation (CSE) framework. This framework, introduced by Benda et al. (2010) and Friede et al. (2010), recognizes that sample size calculation and power evaluation in clinical trials are high-dimensional statistical problems. This approach helps decompose the complex problem by identifying key elements of the evaluation process. These components are termed models:

  • Data models define the process of generating trial data (e.g., sample sizes, outcome distributions and parameters).
  • Analysis models define the statistical methods applied to the trial data (e.g., statistical tests, multiplicity adjustments).
  • Evaluation models specify the measures for evaluating the performance of the analysis strategies (e.g., traditional success criteria such as marginal power or composite criteria such as disjunctive power).

Find out more about the role of each model and how to specify the three models to perform Clinical Scenario Evaluation by reviewing the dedicated sections below.

Case studies

Multiple case studies are provided on the package’s web site to facilitate the implementation of Clinical Scenario Evaluation in different clinical trial settings using the Mediana package. These case studies will be updated on a regular basis. The case studies are also presented in a separate vignette, which can be accessed with the following command:
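
vignette("case-studies", package = "Mediana")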

The Mediana package has been successfully used in multiple clinical trials to perform power calculations as well as optimally select trial designs and analysis strategies (clinical trial optimization). For more information on applications of the Mediana package, see the publications listed on the package’s web site.

Data model

Data models define the process of generating patient data in clinical trials.

Initialization

A data model can be initialized using the following command:
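
data.model = DataModel()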

It is highly recommended to use this command as it will simplify the process of specifying components of the data model, e.g., OutcomeDist, Sample, SampleSize, Event and Design objects.

Components of a data model

Once the DataModel object has been initialized, components of the data model can be specified by adding objects to the model using the ‘+’ operator as shown below.
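
For example, a simple data model with a normally distributed endpoint could be set up as follows (the outcome parameters and sample sizes are purely illustrative):

data.model = DataModel() +
  OutcomeDist(outcome.dist = "NormalDist") +
  SampleSize(c(50, 55, 60, 65, 70)) +
  Sample(id = "Placebo",
         outcome.par = parameters(parameters(mean = 0, sd = 1))) +
  Sample(id = "Treatment",
         outcome.par = parameters(parameters(mean = 0.3, sd = 1)))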

OutcomeDist object

Description

This object specifies the distribution of patient outcomes in a data model. An OutcomeDist object is defined by two arguments:

  • outcome.dist defines the outcome distribution.

  • outcome.type defines the outcome type (optional). There are two acceptable values of this argument: standard (fixed-design setting) and event (event-driven design setting).

Several distributions that can be specified using the outcome.dist argument are already implemented in the Mediana package. These distributions are listed below along with the required parameters to be included in the outcome.par argument of the Sample object:

  • UniformDist: generate data following a uniform distribution. Required parameter: max.

  • NormalDist: generate data following a normal distribution. Required parameters: mean and sd.

  • BinomDist: generate data following a binomial distribution. Required parameter: prop.

  • BetaDist: generate data following a beta distribution. Required parameters: a and b.

  • ExpoDist: generate data following an exponential distribution. Required parameter: rate.

  • WeibullDist: generate data following a Weibull distribution. Required parameters: shape and scale.

  • TruncatedExpoDist: generate data following a truncated exponential distribution. Required parameters: rate and trunc.

  • PoissonDist: generate data following a Poisson distribution. Required parameter: lambda.

  • NegBinomDist: generate data following a negative binomial distribution. Required parameters: dispersion and mean.

  • MultinomialDist: generate data following a multinomial distribution. Required parameter: prob.

  • MVNormalDist: generate data following a multivariate normal distribution. Required parameters: par and corr. For each generated endpoint, the par parameter must contain the required parameters mean and sd. The corr parameter specifies the correlation matrix for the endpoints.

  • MVBinomDist: generate data following a multivariate binomial distribution. Required parameters: par and corr. For each generated endpoint, the par parameter must contain the required parameter prop. The corr parameter specifies the correlation matrix for the endpoints.

  • MVExpoDist: generate data following a multivariate exponential distribution. Required parameters: par and corr. For each generated endpoint, the par parameter must contain the required parameter rate. The corr parameter specifies the correlation matrix for the endpoints.

  • MVExpoPFSOSDist: generate data following a multivariate exponential distribution to generate PFS and OS endpoints. The PFS value is imputed to the OS value if the latter occurs earlier. Required parameters: par and corr. For each generated endpoint, the par parameter must contain the required parameter rate. The corr parameter specifies the correlation matrix for the endpoints.

  • MVMixedDist: generate data following a multivariate mixed distribution. Required parameters: type, par and corr. The type parameter assumes the following values: NormalDist, BinomDist and ExpoDist. For each generated endpoint, the par parameter must contain the required parameters according to the distribution type. The corr parameter specifies the correlation matrix for the endpoints.

The outcome.type argument defines the outcome’s type. This argument accepts only two values:

  • standard: for fixed design setting.

  • event: for event-driven design setting.

The outcome’s type must be defined for each endpoint in the case of a multivariate distribution, e.g., c("event", "event") in the case of a multivariate exponential distribution. The outcome.type argument is essential to get censored events for time-to-event endpoints if the SampleSize object is used to specify the number of patients to generate.

A single OutcomeDist object can be added to a DataModel object.

For more information about the OutcomeDist object, see the documentation for OutcomeDist on the CRAN web site.

If a certain outcome distribution is not implemented in the Mediana package, the user can create a custom function and use it within the package (see the dedicated vignette vignette("custom-functions", package = "Mediana")).
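
Example

Specify a normally distributed outcome in a fixed-design setting:

OutcomeDist(outcome.dist = "NormalDist", outcome.type = "standard")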

Sample object

Description

This object specifies parameters of a sample (e.g., treatment arm in a trial) in a data model. Samples are defined as mutually exclusive groups of patients, for example, treatment arms. A Sample object is defined by three arguments:

  • id defines the sample’s unique ID (label).

  • outcome.par defines the parameters of the outcome distribution for the sample.

  • sample.size defines the sample’s size (optional).

The sample.size argument is optional but must be used to define the sample size only if an unbalanced design is considered (i.e., the sample size varies across the samples). The sample size must be either defined in the Sample object or in the SampleSize object, but not in both.

Several Sample objects can be added to a DataModel object.

For more information about the Sample object, see the documentation Sample on the CRAN web site.

Example

Examples of Sample objects:

Specify two samples with a continuous endpoint following a normal distribution:
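
# Outcome parameter values are illustrative
outcome.placebo = parameters(mean = 0, sd = 70)
outcome.treatment = parameters(mean = 40, sd = 70)

Sample(id = "Placebo",
       outcome.par = parameters(outcome.placebo))

Sample(id = "Treatment",
       outcome.par = parameters(outcome.treatment))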

Specify two samples with a binary endpoint following a binomial distribution:
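
# Response rates are illustrative
outcome.placebo = parameters(prop = 0.30)
outcome.treatment = parameters(prop = 0.50)

Sample(id = "Placebo",
       outcome.par = parameters(outcome.placebo))

Sample(id = "Treatment",
       outcome.par = parameters(outcome.treatment))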

Specify two samples with a time-to-event (survival) endpoint following an exponential distribution:
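
# Hazard rates are derived from illustrative median survival times
# (in months)
median.time.placebo = 6
outcome.placebo = parameters(rate = log(2)/median.time.placebo)
median.time.treatment = 9
outcome.treatment = parameters(rate = log(2)/median.time.treatment)

Sample(id = "Placebo",
       outcome.par = parameters(outcome.placebo))

Sample(id = "Treatment",
       outcome.par = parameters(outcome.treatment))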

Specify three samples with two primary endpoints that follow a binomial and a normal distribution, respectively:
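
# Sketch for one of the three samples; all parameter values and sample
# IDs are illustrative. With a multivariate outcome, the id argument
# lists one label per endpoint.
var.type = list("BinomDist", "NormalDist")
placebo.par = parameters(parameters(prop = 0.30),
                         parameters(mean = -0.10, sd = 0.5))
corr.matrix = matrix(c(1.0, 0.5,
                       0.5, 1.0), 2, 2)

Sample(id = list("Placebo End1", "Placebo End2"),
       outcome.par = parameters(parameters(type = var.type,
                                           par = placebo.par,
                                           corr = corr.matrix)))

# The two treatment samples are specified in the same way with their
# own parameter sets.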

SampleSize object

Description

This object specifies the sample size in a balanced trial design (all samples will have the same sample size). A SampleSize object is defined by one argument:

  • sample.size specifies a list or vector of sample size(s).

A single SampleSize object can be added to a DataModel object.

For more information about the SampleSize object, see the package’s documentation SampleSize.

Example

Examples of SampleSize objects:

Several equivalent specifications of the SampleSize object:
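
SampleSize(c(50, 55, 60, 65, 70))
SampleSize(list(50, 55, 60, 65, 70))
SampleSize(seq(50, 70, 5))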

Event object

Description

This object specifies the total number of events (total event count) among all samples in an event-driven clinical trial. An Event object is defined by two arguments:

  • n.events defines a vector of the required event counts.

  • rando.ratio defines a vector of randomization ratios for each Sample object defined in the DataModel object.

A single Event object can be added to a DataModel object.

For more information about the Event object, see the package’s documentation Event.

Example

Examples of Event objects:

Specify the required number of events in a trial with a 2:1 randomization ratio (Treatment:Placebo):
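
# The event count is illustrative; rando.ratio follows the order in
# which the Sample objects are defined, here (Placebo, Treatment)
Event(n.events = c(390), rando.ratio = c(1, 2))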

Design object

Description

This object specifies the design parameters used in event-driven designs if the user is interested in modeling the enrollment (or accrual) and dropout (or loss to follow up) processes. A Design object is defined by seven arguments:

  • enroll.period defines the length of the enrollment period.

  • enroll.dist defines the enrollment distribution.

  • enroll.dist.par defines the parameters of the enrollment distribution (optional).

  • followup.period defines the length of the follow-up period for each patient in study designs with a fixed follow-up period, i.e., the length of time from the enrollment to planned discontinuation is constant across patients. The user must specify either followup.period or study.duration.

  • study.duration defines the total study duration in study designs with a variable follow-up period. The total study duration is defined as the length of time from the enrollment of the first patient to the discontinuation of the last patient.

  • dropout.dist defines the dropout distribution.

  • dropout.dist.par defines the parameters of the dropout distribution.

Several Design objects can be added to a DataModel object.

For more information about the Design object, see the package’s documentation Design.

A convenient way to model non-uniform enrollment is to use a beta distribution (BetaDist). If enroll.dist = "BetaDist", the enroll.dist.par should contain the parameters of the beta distribution (a and b). These parameters must be derived according to the expected enrollment at a specific timepoint. For example, if half of the patients are expected to be enrolled at 75% of the enrollment period, the beta distribution is Beta(log(0.5)/log(0.75), 1). Generally, let q be the proportion of patients enrolled at 100p% of the enrollment period; the beta distribution can then be derived as follows:

  • If q < p, the Beta distribution is Beta(a,1) with a = log(q) / log(p)

  • If q > p, the Beta distribution is Beta(1,b) with b = log(1-q) / log(1-p)

  • Otherwise the Beta distribution is Beta(1,1)

Example

Examples of Design objects:

Specify parameters of the enrollment and dropout processes with a uniform enrollment distribution and exponential dropout distribution:
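
# Durations and dropout rate are illustrative (time unit: months)
Design(enroll.period = 9,
       study.duration = 21,
       enroll.dist = "UniformDist",
       dropout.dist = "ExpoDist",
       dropout.dist.par = parameters(rate = 0.0115))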

Analysis model

Analysis models define statistical methods (e.g., significance tests or descriptive statistics) that are applied to the study data in a clinical trial.

Initialization

An analysis model can be initialized using the following command:
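
analysis.model = AnalysisModel()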

It is highly recommended to use this command to initialize an analysis model as it will simplify the process of specifying components of the analysis model, including the MultAdj, MultAdjProc, MultAdjStrategy, Test and Statistic objects.

Components of an analysis model

After an AnalysisModel object has been initialized, components of the analysis model can be specified by adding objects to the model using the ‘+’ operator as shown below.

Test object

Description

This object specifies a significance test that will be applied to one or more samples defined in a data model. A Test object is defined by the following four arguments:

  • id defines the test’s unique ID (label).

  • method defines the significance test.

  • samples defines the IDs of the samples (defined in the data model) that the significance test is applied to.

  • par defines the parameter(s) of the statistical test.

Several commonly used significance tests are already implemented in the Mediana package. In addition, the user can easily define custom significance tests (see the dedicated vignette vignette("custom-functions", package = "Mediana")). The built-in tests are listed below along with the required parameters that need to be included in the par argument:

  • TTest: perform the two-sample t-test between the two samples defined in the samples argument. Optional parameter: larger (Larger value is expected in the second sample (TRUE or FALSE)).

  • TTestNI: perform the non-inferiority two-sample t-test between the two samples defined in the samples argument. Required parameter: margin (positive non-inferiority margin). Optional parameter: larger (Larger value is expected in the second sample (TRUE or FALSE)).

  • WilcoxTest: perform the Wilcoxon-Mann-Whitney test between the two samples defined in the samples argument. Optional parameter: larger (Larger value is expected in the second sample (TRUE or FALSE)).

  • PropTest: perform the two-sample test for proportions between the two samples defined in the samples argument. Optional parameters: yates (Yates’ continuity correction flag that is set to TRUE or FALSE) and larger (Larger value is expected in the second sample (TRUE or FALSE)).

  • PropTestNI: perform the non-inferiority two-sample test for proportions between the two samples defined in the samples argument. Required parameter: margin (positive non-inferiority margin). Optional parameters: yates (Yates’ continuity correction flag that is set to TRUE or FALSE) and larger (Larger value is expected in the second sample (TRUE or FALSE)).

  • FisherTest: perform the Fisher exact test between the two samples defined in the samples argument. Optional parameter: larger (Larger value is expected in the second sample (TRUE or FALSE)).

  • GLMPoissonTest: perform the Poisson regression test between the two samples defined in the samples argument. Optional parameter: larger (Larger value is expected in the second sample (TRUE or FALSE)).

  • GLMNegBinomTest: perform the Negative-binomial regression test between the two samples defined in the samples argument. Optional parameter: larger (Larger value is expected in the second sample (TRUE or FALSE)).

  • LogrankTest: perform the Log-rank test between the two samples defined in the samples argument. Optional parameter: larger (Larger value is expected in the second sample (TRUE or FALSE)).

  • OrdinalLogisticRegTest: perform an ordinal logistic regression test between the two samples defined in the samples argument. Optional parameter: larger (Larger value is expected in the second sample (TRUE or FALSE)).

It needs to be noted that the significance tests listed above are implemented as one-sided tests and thus the sample order in the samples argument is important. In particular, the Mediana package assumes by default that a numerically larger value of the endpoint is expected in Sample 2 compared to Sample 1. Suppose, for example, that a higher treatment response indicates a beneficial effect (e.g., a higher improvement rate). In this case Sample 1 should include control patients whereas Sample 2 should include patients allocated to the experimental treatment arm. The sample order needs to be reversed if a beneficial treatment effect is associated with a lower value of the endpoint (e.g., lower blood pressure). Alternatively (from version 1.0.6), the optional parameter larger can be set to FALSE to indicate that a larger value is expected in the first sample.

Several Test objects can be added to an AnalysisModel object.

For more information about the Test object, see the package’s documentation Test on the CRAN web site.
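
Example

Compare a treatment sample to placebo using the two-sample t-test (the sample IDs refer to Sample objects defined in the data model):

Test(id = "Placebo vs treatment",
     method = "TTest",
     samples = samples("Placebo", "Treatment"))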

Statistic object

Description

This object specifies a descriptive statistic that will be computed based on one or more samples defined in a data model. A Statistic object is defined by four arguments:

  • id defines the descriptive statistic’s unique ID (label).

  • method defines the type of statistic/method for computing the statistic.

  • samples defines the samples (pre-defined in the data model) to be used for computing the statistic.

  • par defines the parameter(s) of the statistic.

Several methods for computing descriptive statistics are already implemented in the Mediana package and the user can also define custom functions for computing descriptive statistics (see the dedicated vignette vignette("custom-functions", package = "Mediana")). These methods are shown below along with the required parameters that need to be defined in the par argument:

  • MedianStat: compute the median of the sample defined in the samples argument.

  • MeanStat: compute the mean of the sample defined in the samples argument.

  • SdStat: compute the standard deviation of the sample defined in the samples argument.

  • MinStat: compute the minimum value in the sample defined in the samples argument.

  • MaxStat: compute the maximum value in the sample defined in the samples argument.

  • DiffMeanStat: compute the difference of means between the two samples defined in the samples argument. Two samples must be defined.

  • EffectSizeContStat: compute the effect size for a continuous endpoint. Two samples must be defined.

  • RatioEffectSizeContStat: compute the ratio of two effect sizes for a continuous endpoint. Four samples must be defined.

  • PropStat: compute the proportion of the sample defined in the samples argument.

  • DiffPropStat: compute the difference of the proportions between the two samples defined in the samples argument. Two samples must be defined.

  • EffectSizePropStat: compute the effect size for a binary endpoint. Two samples must be defined.

  • RatioEffectSizePropStat: compute the ratio of two effect sizes for a binary endpoint. Four samples must be defined.

  • HazardRatioStat: compute the hazard ratio of the two samples defined in the samples argument. Two samples must be defined. By default the Log-Rank method is used. Optional parameter: method ("Log-Rank" or "Cox").

  • EffectSizeEventStat: compute the effect size for a survival endpoint (log of the hazard ratio). Two samples must be defined. By default the Log-Rank method is used. Optional parameter: method ("Log-Rank" or "Cox").

  • RatioEffectSizeEventStat: compute the ratio of two effect sizes for a survival endpoint. Four samples must be defined. By default the Log-Rank method is used. Optional parameter: method ("Log-Rank" or "Cox").

  • EventCountStat: compute the number of events observed in the sample(s) defined in the samples argument.

  • PatientCountStat: compute the number of patients observed in the sample(s) defined in the samples argument.

Several Statistic objects can be added to an AnalysisModel object.

For more information about the Statistic object, see the R documentation Statistic.
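
Example

Compute the mean of a single sample (the sample ID is illustrative):

Statistic(id = "Mean Treatment",
          method = "MeanStat",
          samples = samples("Treatment"))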

MultAdjProc object

Description

This object specifies a multiplicity adjustment procedure that will be applied to the significance tests in order to protect the overall Type I error rate. A MultAdjProc object is defined by three arguments:

  • proc defines a multiplicity adjustment procedure.

  • par defines the parameter(s) of the multiplicity adjustment procedure (optional).

  • tests defines the specific tests (defined in the analysis model) to which the multiplicity adjustment procedure will be applied.

If no tests are defined, the multiplicity adjustment procedure will be applied to all tests defined in the AnalysisModel object.

Several commonly used multiplicity adjustment procedures are included in the Mediana package. In addition, the user can easily define custom multiplicity adjustments. The built-in multiplicity adjustments are defined below along with the required parameters that need to be included in the par argument:

  • BonferroniAdj: Bonferroni procedure. Optional parameter: weight (vector of hypothesis weights).

  • HolmAdj: Holm procedure. Optional parameter: weight (vector of hypothesis weights).

  • HochbergAdj: Hochberg procedure. Optional parameter: weight (vector of hypothesis weights).

  • HommelAdj: Hommel procedure. Optional parameter: weight (vector of hypothesis weights).

  • FixedSeqAdj: Fixed-sequence procedure.

  • ChainAdj: Family of chain procedures. Required parameters: weight (vector of hypothesis weights) and transition (matrix of transition parameters).

  • FallbackAdj: Fallback procedure. Required parameters: weight (vector of hypothesis weights).

  • NormalParamAdj: Parametric multiple testing procedure derived from a multivariate normal distribution. Required parameter: corr (correlation matrix of the multivariate normal distribution). Optional parameter: weight (vector of hypothesis weights).

  • ParallelGatekeepingAdj: Family of parallel gatekeeping procedures. Required parameters: family (vectors of hypotheses included in each family), proc (vector of procedure names applied to each family), gamma (vector of truncation parameters).

  • MultipleSequenceGatekeepingAdj: Family of multiple-sequence gatekeeping procedures. Required parameters: family (vectors of hypotheses included in each family), proc (vector of procedure names applied to each family), gamma (vector of truncation parameters).

  • MixtureGatekeepingAdj: Family of mixture-based gatekeeping procedures. Required parameters: family (vectors of hypotheses included in each family), proc (vector of procedure names applied to each family), gamma (vector of truncation parameters), serial (matrix of indicators), parallel (matrix of indicators).

Several MultAdjProc objects can be added to an AnalysisModel object using the ‘+’ operator or by grouping them into a MultAdj object.

For more information about the MultAdjProc object, see the package’s documentation MultAdjProc.
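
Example

Apply the Hochberg procedure with the default hypothesis weights to all tests defined in the analysis model:

MultAdjProc(proc = "HochbergAdj")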

MultAdjStrategy object

Description

This object specifies a multiplicity adjustment strategy that can include several multiplicity adjustment procedures. A multiplicity adjustment strategy may be defined when the same Clinical Scenario Evaluation approach is applied to several clinical trials.

A MultAdjStrategy object serves as a wrapper for several MultAdjProc objects.

For more information about the MultAdjStrategy object, see the package’s documentation MultAdjStrategy.

Example

Example of a MultAdjStrategy object:

Perform complex multiplicity adjustments based on gatekeeping procedures in two clinical trials with three endpoints:

# Parallel gatekeeping procedure parameters
family = families(family1 = c(1), 
                  family2 = c(2, 3))

component.procedure = families(family1 = "HolmAdj",
                               family2 = "HolmAdj")

gamma = families(family1 = 0.8, 
                 family2 = 1)

# Parallel gatekeeping procedure parameters for Trial A
mult.adj.trialA = MultAdjProc(proc = "ParallelGatekeepingAdj",
                              par = parameters(family = family,
                                               proc = component.procedure,
                                               gamma = gamma),
                              tests = tests("Trial A Pla vs Trt End1",
                                            "Trial A Pla vs Trt End2",
                                            "Trial A Pla vs Trt End3"))

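# Parallel gatekeeping procedure parameters for Trial B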
mult.adj.trialB = MultAdjProc(proc = "ParallelGatekeepingAdj",
                              par = parameters(family = family,
                                               proc = component.procedure,
                                               gamma = gamma),
                              tests = tests("Trial B Pla vs Trt End1",
                                            "Trial B Pla vs Trt End2",
                                            "Trial B Pla vs Trt End3"))

# Analysis model
analysis.model = AnalysisModel() +
  MultAdjStrategy(mult.adj.trialA, mult.adj.trialB) +
  # Tests for study A
  Test(id = "Trial A Pla vs Trt End1",
       method = "PropTest",
       samples = samples("Trial A Plac End1", "Trial A Trt End1")) +
  Test(id = "Trial A Pla vs Trt End2",
       method = "TTest",
       samples = samples("Trial A Plac End2", "Trial A Trt End2")) +
  Test(id = "Trial A Pla vs Trt End3",
       method = "TTest",
       samples = samples("Trial A Plac End3", "Trial A Trt End3")) +
  # Tests for study B
  Test(id = "Trial B Pla vs Trt End1",
       method = "PropTest",
       samples = samples("Trial B Plac End1", "Trial B Trt End1")) +
  Test(id = "Trial B Pla vs Trt End2",
       method = "TTest",
       samples = samples("Trial B Plac End2", "Trial B Trt End2")) +
  Test(id = "Trial B Pla vs Trt End3",
       method = "TTest",
       samples = samples("Trial B Plac End3", "Trial B Trt End3"))

MultAdj object

Description

This object can be used to combine several MultAdjProc or MultAdjStrategy objects and add them as a single object to an AnalysisModel object. This object is provided mainly for convenience and its use is optional. Alternatively, MultAdjProc or MultAdjStrategy objects can be added to an AnalysisModel object incrementally using the ‘+’ operator.

For more information about the MultAdj object, see the package’s documentation MultAdj.
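
Example

Group two candidate multiplicity adjustment procedures into a single object (the procedures are illustrative):

mult.adj1 = MultAdjProc(proc = "BonferroniAdj")
mult.adj2 = MultAdjProc(proc = "HochbergAdj")

MultAdj(mult.adj1, mult.adj2)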

Evaluation model

Evaluation models are used within the Mediana package to specify the success criteria or metrics for evaluating the performance of the selected clinical scenario (combination of data and analysis models).

Initialization

An evaluation model can be initialized using the following command:
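
evaluation.model = EvaluationModel()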

It is highly recommended to use this command to initialize an evaluation model because it simplifies the process of specifying components of the evaluation model such as Criterion objects.

Components of an evaluation model

After an EvaluationModel object has been initialized, components of the evaluation model can be specified by adding objects to the model using the ‘+’ operator as shown below.

Criterion object

Description

This object specifies the success criteria that will be applied to a clinical scenario to evaluate the performance of selected analysis methods. A Criterion object is defined by six arguments:

  • id defines the criterion’s unique ID (label).

  • method defines the criterion.

  • tests defines the IDs of the significance tests (defined in the analysis model) that the criterion is applied to.

  • statistics defines the IDs of the descriptive statistics (defined in the analysis model) that the criterion is applied to.

  • par defines the parameter(s) of the criterion.

  • labels defines the label(s) of the criterion values (the label(s) will be used in the simulation report).

Several commonly used success criteria are implemented in the Mediana package. The user can also define custom significance criteria. The built-in success criteria are listed below along with the required parameters that need to be included in the par argument:

  • MarginalPower: compute the marginal power of all tests included in the tests argument. Required parameter: alpha (significance level used in each test).

  • WeightedPower: compute the weighted power of all tests included in the tests argument. Required parameters: alpha (significance level used in each test) and weight (vector of weights assigned to the significance tests).

  • DisjunctivePower: compute the disjunctive power (probability of achieving statistical significance in at least one test included in the tests argument). Required parameter: alpha (significance level used in each test).

  • ConjunctivePower: compute the conjunctive power (probability of achieving statistical significance in all tests included in the tests argument). Required parameter: alpha (significance level used in each test).

  • ExpectedRejPower: compute the expected number of statistically significant tests. Required parameter: alpha (significance level used in each test).

Several Criterion objects can be added to an EvaluationModel object.

For more information about the Criterion object, see the package’s documentation Criterion.

If a certain success criterion is not implemented in the Mediana package, the user can create a custom function and use it within the package (see the dedicated vignette vignette("custom-functions", package = "Mediana")).
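
Example

Compute the marginal power of a single test at a one-sided 2.5% significance level (the test ID is illustrative):

evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs treatment"),
            labels = c("Placebo vs treatment"),
            par = parameters(alpha = 0.025))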

Clinical Scenario Evaluation

Clinical Scenario Evaluation (CSE) is performed based on the data, analysis and evaluation models as well as simulation parameters specified by the user. The simulation parameters are defined using the SimParameters object.

Clinical Scenario Evaluation objects

SimParameters object

Description

The SimParameters object is a required argument of the CSE function and has the following arguments:

  • n.sims defines the number of simulations.
  • seed defines the seed to be used in the simulations.
  • proc.load defines the processor load in parallel computations.

The proc.load argument is used to define the number of processor cores dedicated to the simulations. A numeric value can be defined, or one of the following character values, which automatically set the number of cores:

  • low: 1 processor core.

  • med: Number of available processor cores / 2.

  • high: Number of available processor cores - 1.

  • full: All available processor cores.

Examples

Examples of SimParameters object specification:

Perform 10000 simulations using all available processor cores:
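
# The seed value is arbitrary
sim.parameters = SimParameters(n.sims = 10000,
                               proc.load = "full",
                               seed = 42938001)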

Perform 10000 simulations using 2 processor cores:
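
sim.parameters = SimParameters(n.sims = 10000,
                               proc.load = 2,
                               seed = 42938001)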

CSE function

Description

The CSE function is invoked to run simulations under the Clinical Scenario Evaluation approach. This function uses four arguments:

  • data defines a DataModel object.

  • analysis defines an AnalysisModel object.

  • evaluation defines an EvaluationModel object.

  • simulation defines a SimParameters object.
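
For example, assuming the data, analysis and evaluation models and the simulation parameters defined above:

results = CSE(data.model,
              analysis.model,
              evaluation.model,
              sim.parameters)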

Summary of results

Once Clinical Scenario Evaluation-based simulations have been run, the CSE object returned by the CSE function contains a list with the following components:

  • simulation.results: a data frame containing the results of the simulations for each scenario.

  • analysis.scenario.grid: a data frame containing the grid of the combination of data and analysis scenarios.

  • data.structure: a list containing the data structure according to the DataModel object.

  • analysis.structure: a list containing the analysis structure according to the AnalysisModel object.

  • evaluation.structure: a list containing the evaluation structure according to the EvaluationModel object.

  • sim.parameters: a list containing the simulation parameters according to the SimParameters object.

  • timestamp: a list containing information about the start time, end time and duration of the simulation runs.

The simulation results can be summarized in the R console using the summary function:
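
# results is the CSE object returned by the CSE function
summary(results)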

A Microsoft Word-based simulation report can be generated from the simulation results produced by the CSE function using the GenerateReport function, see Simulation report.

Simulation report

The Mediana R package uses the officer R package to generate a Microsoft Word-based report that summarizes the results of Clinical Scenario Evaluation-based simulations.

The user can easily customize this simulation report by adding a description of the project as well as labels to each scenario, including data scenarios (sample size, outcome distribution parameters, design parameters) and analysis scenarios (multiplicity adjustment). The user can also customize the report’s structure, e.g., create sections and subsections within the report and specify how the rows will be sorted within each table.

In order to customize the report, the user has to use a PresentationModel object described below.

Once a PresentationModel object has been defined, the GenerateReport function can be called to generate a Clinical Scenario Evaluation report.

Initialization

A presentation model can be initialized using the following command:
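
presentation.model = PresentationModel()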

Initialization with this command is highly recommended as it will simplify the process of adding related objects, e.g., the Project, Section, Subsection, Table, CustomLabel objects.

Specific objects

Once the PresentationModel object has been initialized, specific objects can be added by simply using the ‘+’ operator as in data, analysis and evaluation models.

Project object

Description

This object specifies a description of the project. The Project object is defined by three optional arguments:

  • username defines the username to be included in the report (by default, the username is “[Unknown User]”).

  • title defines the project’s title in the report (the default value is “[Unknown title]”).

  • description defines the project’s description (the default value is “[No description]”).

This information will be added in the report generated using the GenerateReport function.

A single object of the Project class can be added to an object of the PresentationModel class.

Section object

Description

This object specifies the sections that will be created within the simulation report. A Section object is defined by a single argument:

  • by defines the rules for setting up sections.

The by argument can contain several parameters from the following list:

  • sample.size: a separate section will be created for each sample size.

  • event: a separate section will be created for each event count.

  • outcome.parameter: a separate section will be created for each outcome parameter scenario.

  • design.parameter: a separate section will be created for each design parameter scenario.

  • multiplicity.adjustment: a separate section will be created for each multiplicity adjustment scenario.

Note that, if a parameter is defined in the by argument, it must be defined only in this object (i.e., neither in the Subsection object nor in the Table object).

A single object of the Section class can be added to an object of the PresentationModel class.

Examples

A Section object can be defined as follows:

Create a separate section within the report for each outcome parameter scenario:
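
Section(by = "outcome.parameter")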

Create a separate section for each unique combination of the sample size and outcome parameter scenarios:
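
Section(by = c("sample.size", "outcome.parameter"))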

Subsection object

Description

This object specifies the rules for creating subsections within the simulation report. A Subsection object is defined by a single argument:

  • by defines the rules for creating subsections.

The by argument can contain several parameters from the following list:

  • sample.size: a separate subsection will be created for each sample size.

  • event: a separate subsection will be created for each number of events.

  • outcome.parameter: a separate subsection will be created for each outcome parameter scenario.

  • design.parameter: a separate subsection will be created for each design parameter scenario.

  • multiplicity.adjustment: a separate subsection will be created for each multiplicity adjustment scenario.

As before, if a parameter is defined in the by argument, it must be defined only in this object (i.e., neither in the Section object nor in the Table object).

A single object of the Subsection class can be added to an object of the PresentationModel class.

Examples

Subsection objects can be set up as follows:

Create a separate subsection for each sample size scenario:
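
Subsection(by = "sample.size")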

Create a separate subsection for each unique combination of the sample size and outcome parameter scenarios:
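
Subsection(by = c("sample.size", "outcome.parameter"))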

Table object

Description

This object specifies how the summary tables will be sorted within the report. A Table object is defined by a single argument:

  • by defines how the tables of the report will be sorted.

The by argument can contain several parameters from the following list:

  • sample.size: the tables will be sorted by the sample size.

  • event: the tables will be sorted by the number of events.

  • outcome.parameter: the tables will be sorted by the outcome parameter scenario.

  • design.parameter: the tables will be sorted by the design parameter scenario.

  • multiplicity.adjustment: the tables will be sorted by the multiplicity adjustment scenario.

If a parameter is defined in the by argument it must be defined only in this object (i.e., neither in the Section object nor in the Subsection object).

A single object of class Table can be added to an object of class PresentationModel.

Examples

Examples of Table objects:

Create a summary table sorted by sample size scenarios:
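
Table(by = "sample.size")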

Create a summary table sorted by sample size and outcome parameter scenarios:
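
Table(by = c("sample.size", "outcome.parameter"))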

CustomLabel object

Description

This object specifies the labels that will be assigned to sets of parameter values or simulation scenarios. These labels will be used in the section and subsection titles of the Clinical Scenario Evaluation Report as well as in the summary tables. A CustomLabel object is defined by two arguments:

  • param defines a parameter (scenario) to which the current set of labels will be assigned.

  • label defines the label(s) to assign to each value of the parameter.

The param argument can contain several parameters from the following list:

  • sample.size: labels will be applied to sample size values.

  • event: labels will be applied to number of events values.

  • outcome.parameter: labels will be applied to outcome parameter scenarios.

  • design.parameter: labels will be applied to design parameter scenarios.

  • multiplicity.adjustment: labels will be applied to multiplicity adjustment scenarios.

Several objects of the CustomLabel class can be added to an object of the PresentationModel class.

Examples

Examples of CustomLabel objects:

Assign a custom label to the sample size values:
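
# Sample size values are illustrative
CustomLabel(param = "sample.size",
            label = paste0("N = ", c(50, 55, 60, 65, 70)))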

Assign a custom label to the outcome parameter scenarios:
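
# Scenario labels are illustrative
CustomLabel(param = "outcome.parameter",
            label = c("Standard", "Optimistic"))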

GenerateReport function

Description

The Clinical Scenario Evaluation Report is generated using the GenerateReport function. This function has four arguments:

  • presentation.model defines a PresentationModel object.

  • cse.result defines a CSE object returned by the CSE function.

  • report.filename defines the filename of the Word-based report generated by this function.

  • report.template defines a Word-based template (it is an optional argument).

The GenerateReport function requires the officer R package to generate a Word-based simulation report. Optionally, a custom template can be selected by defining report.template; this argument specifies the name of a Word document located in the working directory.
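
A typical call, following the argument names listed above (the report file name is illustrative):

GenerateReport(presentation.model = presentation.model,
               cse.result = results,
               report.filename = "Simulation report.docx")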

The Word-based simulation report is structured as follows:

  1. GENERAL INFORMATION
    1. PROJECT INFORMATION
    2. SIMULATION PARAMETERS
  2. DATA MODEL
    1. DESIGN (if a Design object has been defined)
    2. SAMPLE SIZE (or EVENT if an Event object has been defined)
    3. OUTCOME DISTRIBUTION
  3. ANALYSIS MODEL
    1. TESTS
    2. MULTIPLICITY ADJUSTMENT
  4. EVALUATION MODEL
    1. CRITERIA
  5. RESULTS
    1. SECTION (if a Section object has been defined)
      1. SUBSECTION (if a Subsection object has been defined)
---
title: "Case studies"
author: "Gautier Paux and Alex Dmitrienko"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Case studies}
  %\VignetteEngine{knitr::rmarkdown}
  \usepackage[utf8]{inputenc}
---

# Introduction

Several case studies have been created to facilitate the implementation of simulation-based Clinical Scenario Evaluation (CSE) approaches in multiple settings and help the user understand individual features of the Mediana package. Case studies are arranged in terms of increasing complexity of the underlying clinical trial setting (i.e., trial design and analysis methodology). For example, [Case study 1](#case-study-1-1) deals with a number of basic settings and increasingly more complex settings are considered in the subsequent case studies.

## Case study 1

This case study serves as a good starting point for users who are new to the Mediana package. It focuses on clinical trials with simple designs and analysis strategies where power and sample size calculations can be performed using analytical methods.

1. [Trial with two treatment arms and single endpoint (normally distributed endpoint).](#normally-distributed-endpoint)
2. [Trial with two treatment arms and single endpoint (binary endpoint).](#binary-endpoint)
3. [Trial with two treatment arms and single endpoint (survival-type endpoint).](#survival-type-endpoint)
4. [Trial with two treatment arms and single endpoint (survival-type endpoint with censoring).](#survival-type-endpoint-with-censoring)
5. [Trial with two treatment arms and single endpoint (count-type endpoint).](#count-type-endpoint)

## Case study 2

This case study is based on a **clinical trial with three or more treatment arms**. A multiplicity adjustment is required in this setting and no analytical methods are available to support power calculations. This example also illustrates a key feature of the Mediana package, namely, a useful option to define custom functions; for example, it shows how the user can define a new criterion in the Evaluation Model.

[Clinical trial in patients with schizophrenia](#case-study-2-1)

## Case study 3

This case study introduces a **clinical trial with several patient populations** (marker-positive and marker-negative patients). It demonstrates how the user can define independent samples in a data model and then specify statistical tests in an analysis model based on merging several samples, i.e., merging samples of marker-positive and marker-negative patients to carry out a test that evaluates the treatment effect in the overall population.

[Clinical trial in patients with asthma](#case-study-3-1)

## Case study 4

This case study illustrates CSE simulations in a **clinical trial with several endpoints** and helps showcase the package's ability to model multivariate outcomes in clinical trials.

[Clinical trial in patients with metastatic colorectal cancer](#case-study-4-1)

## Case study 5

This case study is based on a **clinical trial with several endpoints and multiple treatment arms** and illustrates the process of performing complex multiplicity adjustments in trials with several clinical objectives.

[Clinical trial in patients with rheumatoid arthritis](#case-study-5-1)

## Case study 6

This case study is an extension of [Case study 2](#case-study-2-1) and illustrates how the package can be used to assess the performance of several multiplicity adjustments.
The case study also walks the reader through the process of defining customized simulation reports.

[Clinical trial in patients with schizophrenia](#case-study-6-1)

# Case study 1

Case study 1 deals with a simple setting, namely, a clinical trial with two treatment arms (experimental treatment versus placebo) and a single endpoint. Power calculations can be performed analytically in this setting. Specifically, closed-form expressions for the power function can be derived using the central limit theorem or other approximations.

Several distributions will be illustrated in this case study:

- [Normally distributed endpoint](#normally-distributed-endpoint)
- [Binary endpoint](#binary-endpoint)
- [Survival-type endpoint](#survival-type-endpoint)
- [Survival-type endpoint (with censoring)](#survival-type-endpoint-with-censoring)
- [Count-type endpoint](#count-type-endpoint)

## Normally distributed endpoint

Suppose that a sponsor is designing a Phase III clinical trial in patients with pulmonary arterial hypertension (PAH). The efficacy of experimental treatments for PAH is commonly evaluated using a six-minute walk test and the primary endpoint is defined as the change from baseline to the end of the 16-week treatment period in the six-minute walk distance.

### Define a Data Model

The first step is to initialize the data model:

```r
case.study1.data.model = DataModel()
```

After the initialization, components of the data model can be added to the `DataModel` object incrementally using the `+` operator.

The change from baseline in the six-minute walk distance is assumed to follow a normal distribution. The distribution of the primary endpoint is defined in the `OutcomeDist` object:

```r
case.study1.data.model = case.study1.data.model +
  OutcomeDist(outcome.dist = "NormalDist")
```

The sponsor would like to perform power evaluation over a broad range of sample sizes in each treatment arm:

```r
case.study1.data.model = case.study1.data.model +
  SampleSize(c(50, 55, 60, 65, 70))
```

As a side note, the `seq` function can be used to compactly define sample sizes in a data model:

```r
case.study1.data.model = case.study1.data.model +
  SampleSize(seq(50, 70, 5))
```

The sponsor is interested in performing power calculations under two treatment effect scenarios (standard and optimistic scenarios). Under these scenarios, the experimental treatment is expected to improve the six-minute walk distance by 40 or 50 meters compared to placebo, respectively, with the common standard deviation of 70 meters.

Therefore, the mean change in the placebo arm is set to μ = 0 and the mean changes in the six-minute walk distance in the experimental arm are set to μ = 40 (standard scenario) or μ = 50 (optimistic scenario). The common standard deviation is σ = 70.

```r
# Outcome parameter set 1 (standard scenario)
outcome1.placebo = parameters(mean = 0, sd = 70)
outcome1.treatment = parameters(mean = 40, sd = 70)

# Outcome parameter set 2 (optimistic scenario)
outcome2.placebo = parameters(mean = 0, sd = 70)
outcome2.treatment = parameters(mean = 50, sd = 70)
```

Note that the mean and standard deviation are explicitly identified in each list. This is done mainly for the user's convenience.
After having defined the outcome parameters for each sample, two `Sample` objects that define the two treatment arms in this trial can be created and added to the `DataModel` object:

```r
case.study1.data.model = case.study1.data.model +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome1.placebo, outcome2.placebo)) +
  Sample(id = "Treatment",
         outcome.par = parameters(outcome1.treatment, outcome2.treatment))
```

### Define an Analysis Model

Just like the data model, the analysis model needs to be initialized as follows:

```r
case.study1.analysis.model = AnalysisModel()
```

Only one significance test is planned to be carried out in the PAH clinical trial (treatment versus placebo). The treatment effect will be assessed using the one-sided two-sample *t*-test:

```r
case.study1.analysis.model = case.study1.analysis.model +
  Test(id = "Placebo vs treatment",
       samples = samples("Placebo", "Treatment"),
       method = "TTest")
```

According to the specifications, the two-sample t-test will be applied to Sample 1 (Placebo) and Sample 2 (Treatment). These sample IDs come from the data model defined earlier.

As explained in the manual, see [Analysis Model](http://gpaux.github.io/Mediana/AnalysisModel.html), the sample order is determined by the expected direction of the treatment effect. In this case, an increase in the six-minute walk distance indicates a beneficial effect and a numerically larger value of the primary endpoint is expected in Sample 2 (Treatment) compared to Sample 1 (Placebo). This implies that the list of samples to be passed to the t-test should include Sample 1 followed by Sample 2. It is of note that, from version 1.0.6, it is possible to specify an option to indicate whether a larger numeric value is expected in Sample 2 (`larger = TRUE`) or in Sample 1 (`larger = FALSE`). By default, this argument is set to `TRUE`.

To illustrate the use of the `Statistic` object, the mean change in the six-minute walk distance in the treatment arm can be computed using the `MeanStat` statistic:

```r
case.study1.analysis.model = case.study1.analysis.model +
  Statistic(id = "Mean Treatment",
            method = "MeanStat",
            samples = samples("Treatment"))
```

### Define an Evaluation Model

The data and analysis models specified above collectively define the Clinical Scenarios to be examined in the PAH clinical trial. The scenarios are evaluated using success criteria or metrics that are aligned with the clinical objectives of the trial. In this case it is most appropriate to use regular power or, more formally, *marginal power*. This success criterion is specified in the evaluation model.

First of all, the evaluation model must be initialized:

```r
case.study1.evaluation.model = EvaluationModel()
```

Secondly, the success criterion of interest (marginal power) is defined using the `Criterion` object:

```r
case.study1.evaluation.model = case.study1.evaluation.model +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs treatment"),
            labels = c("Placebo vs treatment"),
            par = parameters(alpha = 0.025))
```

The `tests` argument lists the IDs of the tests (defined in the analysis model) to which the criterion is applied (note that more than one test can be specified). The test IDs link the evaluation model with the corresponding analysis model. In this particular case, marginal power will be computed for the t-test that compares the mean change in the six-minute walk distance in the placebo and treatment arms (Placebo vs treatment).
In order to compute the average value of the mean statistic specified in the analysis model (i.e., the mean change in the six-minute walk distance in the treatment arm) over the simulation runs, another `Criterion` object needs to be added:

```r
case.study1.evaluation.model = case.study1.evaluation.model +
  Criterion(id = "Average Mean",
            method = "MeanSumm",
            statistics = statistics("Mean Treatment"),
            labels = c("Average Mean Treatment"))
```

The `statistics` argument of this `Criterion` object lists the ID of the statistic (defined in the analysis model) to which this metric is applied (e.g., `Mean Treatment`).

### Perform Clinical Scenario Evaluation

After the clinical scenarios (data and analysis models) and evaluation model have been defined, the user is ready to evaluate the success criteria specified in the evaluation model by calling the `CSE` function. To accomplish this, the simulation parameters need to be defined in a `SimParameters` object:

```r
# Simulation parameters
case.study1.sim.parameters = SimParameters(n.sims = 1000,
                                           proc.load = "full",
                                           seed = 42938001)
```

The function call for `CSE` specifies the individual components of Clinical Scenario Evaluation in this case study as well as the simulation parameters:

```r
# Perform clinical scenario evaluation
case.study1.results = CSE(case.study1.data.model,
                          case.study1.analysis.model,
                          case.study1.evaluation.model,
                          case.study1.sim.parameters)
```

The simulation results are saved in a `CSE` object (`case.study1.results`). This object contains complete information about this particular evaluation, including the data, analysis and evaluation models specified by the user. The most important component of this object is the data frame contained in the list named *simulation.results* (`case.study1.results$simulation.results`). This data frame includes the values of the success criteria and metrics defined in the evaluation model.

### Summarize the Simulation Results

#### Summary of simulation results in R console

To facilitate the review of the simulation results produced by the `CSE` function, the user can invoke the `summary` function. This function displays the data frame containing the simulation results in the R console:

```r
# Print the simulation results in the R console
summary(case.study1.results)
```

If the user is interested in generating graphical summaries of the simulation results (using the [ggplot2](https://ggplot2.tidyverse.org/) package or other packages), this data frame can also be saved to an object:

```r
# Store the simulation results in a data frame
case.study1.simulation.results = summary(case.study1.results)
```

#### Generate a Simulation Report

##### Presentation Model

A very useful feature of the Mediana package is the generation of a Microsoft Word-based report that provides a summary of the Clinical Scenario Evaluation.

To generate a simulation report, the user needs to define a presentation model by creating a `PresentationModel` object. This object must be initialized as follows:

```r
case.study1.presentation.model = PresentationModel()
```

Project information can be added to the presentation model using the `Project` object:

```r
case.study1.presentation.model = case.study1.presentation.model +
  Project(username = "[Mediana's User]",
          title = "Case study 1",
          description = "Clinical trial in patients with pulmonary arterial hypertension")
```

The user can easily customize the simulation report by defining report sections and specifying properties of summary tables in the report.
The code shown below creates a separate section within the report for each set of outcome parameters (using the `Section` object) and sets the sorting option for the summary tables (using the `Table` object). The tables will be sorted by the sample size. Further, in order to define descriptive labels for the outcome parameter scenarios and sample size scenarios, the `CustomLabel` object needs to be used:

```r
case.study1.presentation.model = case.study1.presentation.model +
  Section(by = "outcome.parameter") +
  Table(by = "sample.size") +
  CustomLabel(param = "sample.size",
              label = paste0("N = ", c(50, 55, 60, 65, 70))) +
  CustomLabel(param = "outcome.parameter",
              label = c("Standard", "Optimistic"))
```

##### Report generation

Once the presentation model has been defined, the simulation report is ready to be generated using the `GenerateReport` function:

```r
# Report Generation
GenerateReport(presentation.model = case.study1.presentation.model,
               cse.results = case.study1.results,
               report.filename = "Case study 1 (normally distributed endpoint).docx")
```

### Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%201%20(normally%20distributed%20endpoint).R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%201%20(normally%20distributed%20endpoint).docx)

## Binary endpoint

Consider a Phase III clinical trial for the treatment of rheumatoid arthritis (RA). The primary endpoint is the response rate based on the American College of Rheumatology (ACR) definition of improvement. The trial's sponsor is interested in performing power calculations using several treatment effect assumptions (Placebo 30% - Treatment 50%, Placebo 30% - Treatment 55% and Placebo 30% - Treatment 60%).

### Define a Data Model

The three outcome parameter sets listed above are combined with four sample size sets (`SampleSize(c(80, 90, 100, 110))`) and the distribution of the primary endpoint (`OutcomeDist(outcome.dist = "BinomDist")`) is specified in the `DataModel` object `case.study1.data.model`:

```r
# Outcome parameter set 1
outcome1.placebo = parameters(prop = 0.30)
outcome1.treatment = parameters(prop = 0.50)

# Outcome parameter set 2
outcome2.placebo = parameters(prop = 0.30)
outcome2.treatment = parameters(prop = 0.55)

# Outcome parameter set 3
outcome3.placebo = parameters(prop = 0.30)
outcome3.treatment = parameters(prop = 0.60)

# Data model
case.study1.data.model = DataModel() +
  OutcomeDist(outcome.dist = "BinomDist") +
  SampleSize(c(80, 90, 100, 110)) +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome1.placebo, outcome2.placebo, outcome3.placebo)) +
  Sample(id = "Treatment",
         outcome.par = parameters(outcome1.treatment, outcome2.treatment, outcome3.treatment))
```

### Define an Analysis Model

The analysis model uses a standard two-sample test for comparing proportions (`method = "PropTest"`) to assess the treatment effect in this clinical trial example:

```r
# Analysis model
case.study1.analysis.model = AnalysisModel() +
  Test(id = "Placebo vs treatment",
       samples = samples("Placebo", "Treatment"),
       method = "PropTest")
```

### Define an Evaluation Model

Power evaluations are easily performed in this clinical trial example using the same evaluation model utilized in the case of a normally distributed endpoint, i.e., evaluations rely on marginal power:

```r
# Evaluation model
case.study1.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs treatment"),
            labels = c("Placebo vs treatment"),
            par = parameters(alpha = 0.025))
```

An extension of this clinical trial example is provided in [Case study 5](#case-study-5-1). The extension deals with a more complex setting involving several trial endpoints and multiple treatment arms.

### Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%201%20(binary%20endpoint).R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%201%20(binary%20endpoint).docx)

## Survival-type endpoint

If the trial's primary objective is formulated in terms of analyzing the time to a clinically important event (progression or death in an oncology setting), data and analysis models can be set up based on an exponential distribution and the log-rank test.

As an illustration, consider a Phase III trial which will be conducted to evaluate the efficacy of a new treatment for metastatic colorectal cancer (MCC). Patients will be randomized in a 2:1 ratio to an experimental treatment or placebo (in addition to best supportive care). The trial's primary objective is to assess the effect of the experimental treatment on progression-free survival (PFS).

### Define a Data Model

A single treatment effect scenario is considered in this clinical trial example. Specifically, the median time to progression is assumed to be:

- Placebo: t0 = 6 months,
- Treatment: t1 = 9 months.

Under an exponential distribution assumption (which is specified using the `ExpoDist` distribution), the median times correspond to the following hazard rates:

- λ0 = log(2)/t0 = 0.116,
- λ1 = log(2)/t1 = 0.077,

and the resulting hazard ratio (HR) is 0.077/0.116 = 0.67.

```r
# Outcome parameters
median.time.placebo = 6
rate.placebo = log(2)/median.time.placebo
outcome.placebo = parameters(rate = rate.placebo)

median.time.treatment = 9
rate.treatment = log(2)/median.time.treatment
outcome.treatment = parameters(rate = rate.treatment)
```

It is important to note that, if no censoring mechanisms are specified in a data model with a time-to-event endpoint, all patients will reach the endpoint of interest (e.g., progression) and thus the number of patients will be equal to the number of events. Using this property, power calculations can be performed using either the `Event` object or `SampleSize` object. For the purpose of illustration, the `Event` object will be used in this example.

To define a data model in the MCC clinical trial, the total event count in the trial is assumed to range between 270 and 300. Since the trial's design is not balanced, the randomization ratio needs to be specified in the `Event` object:

```r
# Number of events parameters
event.count.total = c(270, 300)
randomization.ratio = c(1, 2)

# Data model
case.study1.data.model = DataModel() +
  OutcomeDist(outcome.dist = "ExpoDist") +
  Event(n.events = event.count.total, rando.ratio = randomization.ratio) +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome.placebo)) +
  Sample(id = "Treatment",
         outcome.par = parameters(outcome.treatment))
```

It is worth noting that the primary endpoint's type (i.e., the `outcome.type` argument in the `OutcomeDist` object) is not specified.
By default, the outcome type is set to `fixed`, which means that a design with a fixed follow-up is assumed even though the primary endpoint in this clinical trial is clearly a time-to-event endpoint. This is due to the fact that, as was explained earlier in this case study, there is no censoring in this design and all patients are followed until the event of interest is observed. It is easy to verify that the same results are obtained if the outcome type is set to `event`.

### Define an Analysis Model

The analysis model in this clinical trial is very similar to the analysis models defined in the case studies with normal and binomial outcome variables. The only difference is the choice of the statistical method utilized in the primary analysis (`method = "LogrankTest"`):

```r
# Analysis model
case.study1.analysis.model = AnalysisModel() +
  Test(id = "Placebo vs treatment",
       samples = samples("Placebo", "Treatment"),
       method = "LogrankTest")
```

To illustrate the specification of a `Statistic` object, the hazard ratio will be computed using the Cox method. This can be accomplished by adding a `Statistic` object to the `AnalysisModel` as presented below.

```r
# Analysis model
case.study1.analysis.model = case.study1.analysis.model +
  Statistic(id = "Hazard Ratio",
            samples = samples("Placebo", "Treatment"),
            method = "HazardRatioStat",
            par = parameters(method = "Cox"))
```

### Define an Evaluation Model

An evaluation model identical to that used earlier in the case studies with normal and binomial distributions can be applied to compute the power function at the selected event counts. Moreover, the average hazard ratio across the simulations will be computed.

```r
# Evaluation model
case.study1.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs treatment"),
            labels = c("Placebo vs treatment"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Hazard Ratio",
            method = "MeanSumm",
            statistics = statistics("Hazard Ratio"),
            labels = c("Average Hazard Ratio"))
```

### Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%201%20(survival-type%20endpoint).R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%201%20(survival-type%20endpoint).docx)

## Survival-type endpoint (with censoring)

The power calculations presented in the previous case study assume an idealized setting where each patient is followed until the event of interest (e.g., progression) is observed. In this case, the sample size (number of patients) in each treatment arm is equal to the number of events. In reality, events are often censored and a sponsor is generally interested in determining the number of patients to be recruited in order to ensure a target number of events, which translates into desirable power.

The Mediana package can be used to perform power calculations in event-driven trials in the presence of censoring. This is accomplished by setting up design parameters such as the length of the enrollment and follow-up periods in a data model using a `Design` object.

In general, even though closed-form solutions have been derived for sample size calculations in event-driven designs, the available approaches force clinical trial researchers to make a variety of simplifying assumptions, e.g., assumptions on the enrollment distribution are commonly made, see, for example, Julious (2009, Chapter 15).
A general simulation-based approach to power and sample size calculations implemented in the Mediana package enables clinical trial sponsors to remove these artificial restrictions and examine a very broad set of plausible design parameters.

### Define a Data Model

Suppose, for example, that a standard design with a variable follow-up will be used in the MCC trial introduced in the previous case study. The total study duration will be 21 months, which includes a 9-month enrollment (accrual) period and a minimum follow-up of 12 months. The patients are assumed to be recruited at a uniform rate. The set of design parameters also includes the dropout distribution and its parameters. In this clinical trial, the dropout distribution is exponential with a rate determined from historical data. These design parameters are specified in a `Design` object:

```r
# Dropout parameters
dropout.par = parameters(rate = 0.0115)

# Design parameters
case.study1.design = Design(enroll.period = 9,
                            study.duration = 21,
                            enroll.dist = "UniformDist",
                            dropout.dist = "ExpoDist",
                            dropout.dist.par = dropout.par)
```

Finally, the primary endpoint's type is set to `event` in the `OutcomeDist` object to indicate that a variable follow-up will be utilized in this clinical trial.

The complete data model in this case study is defined as follows:

```r
# Number of events parameters
event.count.total = c(390, 420)
randomization.ratio = c(1, 2)

# Outcome parameters
median.time.placebo = 6
rate.placebo = log(2)/median.time.placebo
outcome.placebo = parameters(rate = rate.placebo)

median.time.treatment = 9
rate.treatment = log(2)/median.time.treatment
outcome.treatment = parameters(rate = rate.treatment)

# Dropout parameters
dropout.par = parameters(rate = 0.0115)

# Data model
case.study1.data.model = DataModel() +
  OutcomeDist(outcome.dist = "ExpoDist", outcome.type = "event") +
  Event(n.events = event.count.total, rando.ratio = randomization.ratio) +
  Design(enroll.period = 9,
         study.duration = 21,
         enroll.dist = "UniformDist",
         dropout.dist = "ExpoDist",
         dropout.dist.par = dropout.par) +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome.placebo)) +
  Sample(id = "Treatment",
         outcome.par = parameters(outcome.treatment))
```

### Define an Analysis Model

Since the number of events has been fixed in this clinical trial example and some patients will not reach the event of interest, it will be important to estimate the number of patients needed to accrue the target number of events. In the Mediana package, this can be accomplished by specifying a descriptive statistic named `PatientCountStat` (this statistic needs to be specified in a `Statistic` object). Another descriptive statistic that would be of interest is the event count in each sample. To compute this statistic, `EventCountStat` needs to be included in a `Statistic` object.
```r
# Analysis model
case.study1.analysis.model = AnalysisModel() +
  Test(id = "Placebo vs treatment",
       samples = samples("Placebo", "Treatment"),
       method = "LogrankTest") +
  Statistic(id = "Events Placebo",
            samples = samples("Placebo"),
            method = "EventCountStat") +
  Statistic(id = "Events Treatment",
            samples = samples("Treatment"),
            method = "EventCountStat") +
  Statistic(id = "Patients Placebo",
            samples = samples("Placebo"),
            method = "PatientCountStat") +
  Statistic(id = "Patients Treatment",
            samples = samples("Treatment"),
            method = "PatientCountStat")
```

### Define an Evaluation Model

In order to compute the average values of the two statistics (`PatientCountStat` and `EventCountStat`) in each sample over the simulation runs, two `Criterion` objects need to be specified, in addition to the `Criterion` object defined to obtain marginal power. The IDs of the corresponding `Statistic` objects will be included in the `statistics` argument of the two `Criterion` objects:

```r
# Evaluation model
case.study1.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs treatment"),
            labels = c("Placebo vs treatment"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Mean Events",
            method = "MeanSumm",
            statistics = statistics("Events Placebo", "Events Treatment"),
            labels = c("Mean Events Placebo", "Mean Events Treatment")) +
  Criterion(id = "Mean Patients",
            method = "MeanSumm",
            statistics = statistics("Patients Placebo", "Patients Treatment"),
            labels = c("Mean Patients Placebo", "Mean Patients Treatment"))
```

### Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%201%20(survival-type%20endpoint%20with%20censoring).R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%201%20(survival-type%20endpoint%20with%20censoring).docx)

## Count-type endpoint

The last clinical trial example within Case study 1 deals with a Phase III clinical trial in patients with relapsing-remitting multiple sclerosis (RRMS). The trial aims at assessing the safety and efficacy of a single dose of a novel treatment compared to placebo. The primary endpoint is the number of new gadolinium enhancing lesions seen during a 6-month period on monthly MRIs of the brain and a smaller number indicates treatment benefit.

The distribution of such endpoints has been widely studied in the literature and Sormani et al. ([1999a](http://www.jns-journal.com/article/S0022-510X(99)00015-5/abstract), [1999b](http://jnnp.bmj.com/content/66/4/465.long)) showed that a negative binomial distribution provides a fairly good fit.

The expected treatment effect in the experimental treatment and placebo arms is given below (note that the negative binomial distribution is parameterized using the mean rather than the probability of success in each trial). The mean number of new lesions is set to 13 in the placebo arm and 7.8 in the treatment arm, with a common dispersion parameter of 0.5. The corresponding treatment effect, i.e., the relative reduction in the mean number of new lesion counts, is 100 * (13 − 7.8)/13 = 40%. These assumptions define a single outcome parameter set.

### Define a Data Model

The `OutcomeDist` object defines the distribution of the trial endpoint (`NegBinomDist`).
Further, a balanced design is utilized in this clinical trial and the range of sample sizes is defined in the `SampleSize` object (it is convenient to do this using the `seq` function). The `Sample` object includes the parameters required by the negative binomial distribution (dispersion and mean).

```r
# Outcome parameters
outcome.placebo = parameters(dispersion = 0.5, mean = 13)
outcome.treatment = parameters(dispersion = 0.5, mean = 7.8)

# Data model
case.study1.data.model = DataModel() +
  OutcomeDist(outcome.dist = "NegBinomDist") +
  SampleSize(seq(100, 150, 10)) +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome.placebo)) +
  Sample(id = "Treatment",
         outcome.par = parameters(outcome.treatment))
```

### Define an Analysis Model

The treatment effect will be assessed in this clinical trial example using a negative binomial generalized linear model (NBGLM). In the Mediana package, the corresponding test is carried out using the `GLMNegBinomTest` method which is specified in the `Test` object. It should be noted that, as a smaller value indicates a treatment benefit, the first sample defined in the `samples` argument must be `Treatment`.

```r
# Analysis model
case.study1.analysis.model = AnalysisModel() +
  Test(id = "Treatment vs Placebo",
       samples = samples("Treatment", "Placebo"),
       method = "GLMNegBinomTest")
```

Alternatively, from version 1.0.6, it is possible to specify the argument `larger` in the parameters of the method. If set to `FALSE`, a numerically lower value is expected in Sample 2.

```r
# Analysis model
case.study1.analysis.model = AnalysisModel() +
  Test(id = "Treatment vs Placebo",
       samples = samples("Placebo", "Treatment"),
       method = "GLMNegBinomTest",
       par = parameters(larger = FALSE))
```

### Define an Evaluation Model

The objective of this clinical trial is identical to that of the clinical trials presented earlier on this page, i.e., evaluation will be based on marginal power of the primary endpoint test. As a consequence, the same evaluation model can be applied.

```r
# Evaluation model
case.study1.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Treatment vs Placebo"),
            labels = c("Treatment vs Placebo"),
            par = parameters(alpha = 0.025))
```

### Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%201%20(count-type%20endpoint).R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%201%20(count-type%20endpoint).docx)

# Case study 2

## Summary

This clinical trial example deals with settings where no analytical methods are available to support power calculations. However, as demonstrated below, simulation-based approaches are easily applied to perform a comprehensive assessment of the relevant operating characteristics within the clinical scenario evaluation framework.

Case study 2 is based on a clinical trial example introduced in Dmitrienko and D'Agostino (2013, Section 10). This example deals with a Phase III clinical trial in a schizophrenia population. Three doses of a new treatment, labelled Dose L, Dose M and Dose H, will be tested versus placebo. The trial will be declared successful if a beneficial treatment effect is demonstrated in any of the three dosing groups compared to the placebo group.
The primary endpoint is defined as the reduction in the Positive and Negative Syndrome Scale (PANSS) total score compared to baseline and a larger reduction in the PANSS total score indicates treatment benefit. This endpoint is normally distributed and the treatment effect assumptions in the four treatment arms are displayed in the next table.

```{r, results = "asis", echo = FALSE}
pander::pandoc.table(data.frame(Arm = c("Placebo", "Dose L", "Dose M", "Dose H"),
                                Mean = c(16, 19.5, 21, 21),
                                SD = rep(18, 4)))
```

## Define a Data Model

The treatment effect assumptions presented in the table above define a single outcome parameter set and the common sample size is set to 220, 240 or 260 patients. These parameters are specified in the following data model:

```r
# Outcome parameters
outcome.pl = parameters(mean = 16, sd = 18)
outcome.dosel = parameters(mean = 19.5, sd = 18)
outcome.dosem = parameters(mean = 21, sd = 18)
outcome.doseh = parameters(mean = 21, sd = 18)

# Data model
case.study2.data.model = DataModel() +
  OutcomeDist(outcome.dist = "NormalDist") +
  SampleSize(seq(220, 260, 20)) +
  Sample(id = "Placebo", outcome.par = parameters(outcome.pl)) +
  Sample(id = "Dose L", outcome.par = parameters(outcome.dosel)) +
  Sample(id = "Dose M", outcome.par = parameters(outcome.dosem)) +
  Sample(id = "Dose H", outcome.par = parameters(outcome.doseh))
```

## Define an Analysis Model

The analysis model, shown below, defines the three individual tests that will be carried out in the schizophrenia clinical trial. Each test corresponds to a dose-placebo comparison:

- H1: Null hypothesis of no difference between Dose L and placebo.
- H2: Null hypothesis of no difference between Dose M and placebo.
- H3: Null hypothesis of no difference between Dose H and placebo.

Each comparison will be carried out based on a one-sided two-sample *t*-test (`TTest` method defined in each `Test` object).

As indicated earlier, the overall success criterion in the trial is formulated in terms of demonstrating a beneficial effect at any of the three doses. Due to multiple opportunities to claim success, the overall Type I error rate will be inflated and the Hochberg procedure is introduced to protect the error rate at the nominal level.

Since no procedure parameters are defined, the three significance tests (or, equivalently, three null hypotheses of no effect) are assumed to be equally weighted. The corresponding analysis model is defined below:

```r
# Analysis model
case.study2.analysis.model = AnalysisModel() +
  MultAdjProc(proc = "HochbergAdj") +
  Test(id = "Placebo vs Dose L",
       samples = samples("Placebo", "Dose L"),
       method = "TTest") +
  Test(id = "Placebo vs Dose M",
       samples = samples("Placebo", "Dose M"),
       method = "TTest") +
  Test(id = "Placebo vs Dose H",
       samples = samples("Placebo", "Dose H"),
       method = "TTest")
```

To request the Hochberg procedure with unequally weighted hypotheses, the user needs to assign a list of hypothesis weights to the `par` argument of the `MultAdjProc` object. The weights typically reflect the relative importance of the individual null hypotheses. Assume, for example, that 60% of the overall weight is assigned to H3 and the remainder is split between H1 and H2. In this case, the `MultAdjProc` object should be defined as follows:

```r
MultAdjProc(proc = "HochbergAdj", par = parameters(weight = c(0.2, 0.2, 0.6)))
```

It should be noted that the order of the weights must be identical to the order of the `Test` objects defined in the analysis model.
## Define an Evaluation Model

An evaluation model specifies clinically relevant criteria for assessing the performance of the individual tests defined in the corresponding analysis model or composite measures of success.

In virtually any setting, it is of interest to compute the probability of achieving a significant outcome in each individual test, e.g., the probability of a significant difference between placebo and each dose. This is accomplished by requesting a `Criterion` object with `method = "MarginalPower"`.

Since the trial will be declared successful if at least one dose-placebo comparison is significant, it is natural to compute the overall success probability, which is defined as the probability of demonstrating treatment benefit in one or more dosing groups. This is equivalent to evaluating disjunctive power in the trial (`method = "DisjunctivePower"`).

In addition, the user can easily define a custom evaluation criterion. Suppose that, based on the results of the previously conducted trials, the sponsor expects a much larger treatment difference at Dose H compared to Doses L and M. Given this, the sponsor may be interested in evaluating the probability of observing a significant treatment effect at Dose H and at least one other dose. The associated evaluation criterion is implemented in the following function:

```r
# Custom evaluation criterion (Dose H and at least one of the two other doses are significant)
case.study2.criterion = function(test.result, statistic.result, parameter) {
  alpha = parameter
  significant = ((test.result[, 3] <= alpha) &
                 ((test.result[, 1] <= alpha) | (test.result[, 2] <= alpha)))
  power = mean(significant)
  return(power)
}
```

The function's first argument (`test.result`) is a matrix of p-values produced by the `Test` objects defined in the analysis model and the second argument (`statistic.result`) is a matrix of results produced by the `Statistic` objects defined in the analysis model. In this example, the criterion will only use the `test.result` argument, which will contain the p-values produced by the tests associated with the three dose-placebo comparisons. The last argument (`parameter`) contains the optional parameter(s) defined by the user in the `Criterion` object. In this example, the `par` argument contains the overall alpha level.

The `case.study2.criterion` function computes the probability of a significant treatment effect at Dose H (`test.result[,3] <= alpha`) and a significant treatment difference at Dose L or Dose M (`(test.result[,1] <= alpha) | (test.result[,2] <= alpha)`). Since this criterion assumes that the third test is based on the comparison of Dose H versus Placebo, the order in which the tests are included in the evaluation model is important.
The following evaluation model specifies marginal and disjunctive power as well as the custom evaluation criterion defined above:

```r
# Evaluation model
case.study2.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs Dose L", "Placebo vs Dose M", "Placebo vs Dose H"),
            labels = c("Placebo vs Dose L", "Placebo vs Dose M", "Placebo vs Dose H"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Disjunctive power",
            method = "DisjunctivePower",
            tests = tests("Placebo vs Dose L", "Placebo vs Dose M", "Placebo vs Dose H"),
            labels = "Disjunctive power",
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Dose H and at least one dose",
            method = "case.study2.criterion",
            tests = tests("Placebo vs Dose L", "Placebo vs Dose M", "Placebo vs Dose H"),
            labels = "Dose H and at least one of the two other doses are significant",
            par = parameters(alpha = 0.025))
```

Another potential option is to apply the conjunctive criterion which is met if a significant treatment difference is detected simultaneously in all three dosing groups (`method = "ConjunctivePower"`). This criterion helps characterize the likelihood of a consistent treatment effect across the doses.

The user can also use the `tests` argument to choose the specific tests to which the disjunctive and conjunctive criteria are applied (the resulting criteria are known as subset disjunctive and conjunctive criteria). To illustrate, the following statement computes the probability of a significant treatment effect at Dose M or Dose H (Dose L is excluded from this calculation):

```r
Criterion(id = "Disjunctive power",
          method = "DisjunctivePower",
          tests = tests("Placebo vs Dose M", "Placebo vs Dose H"),
          labels = "Disjunctive power",
          par = parameters(alpha = 0.025))
```

## Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%202.R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%202.docx)

# Case study 3

## Summary

This case study deals with a Phase III clinical trial in patients with mild or moderate asthma (it is based on a clinical trial example from [Millen et al., 2014, Section 2.2](http://dij.sagepub.com/content/48/4/453.abstract)). The trial is intended to support a tailoring strategy. In particular, the treatment effect of a single dose of a new treatment will be compared to that of placebo in the overall population of patients as well as a pre-specified subpopulation of patients with a marker-positive status at baseline (for compactness, the overall population is denoted by OP, the marker-positive subpopulation is denoted by M+ and the marker-negative subpopulation is denoted by M−).

Marker-positive patients are more likely to receive benefit from the experimental treatment. The overall objective of the clinical trial accounts for the fact that the treatment's effect may, in fact, be limited to the marker-positive subpopulation. The trial will be declared successful if the treatment's beneficial effect is established in the overall population of patients or, alternatively, the effect is established only in the subpopulation.

The primary endpoint in the clinical trial is defined as an increase from baseline in the forced expiratory volume in one second (FEV1). This endpoint is normally distributed and improvement is associated with a larger change in FEV1.
## Define a Data Model

To set up a data model for this clinical trial, it is natural to define samples (mutually exclusive groups of patients) as follows:

- **Sample 1:** Marker-negative patients in the placebo arm.
- **Sample 2:** Marker-positive patients in the placebo arm.
- **Sample 3:** Marker-negative patients in the treatment arm.
- **Sample 4:** Marker-positive patients in the treatment arm.

Using this definition of samples, the trial's sponsor can model the fact that the treatment's effect is most pronounced in patients with a marker-positive status.

The treatment effect assumptions in the four samples are summarized in the next table (expiratory volume in FEV1 is measured in liters). As shown in the table, the mean change in FEV1 is constant across the marker-negative and marker-positive subpopulations in the placebo arm (Samples 1 and 2). A positive treatment effect is expected in both subpopulations in the treatment arm but marker-positive patients will experience most of the beneficial effect (Sample 4).

```{r, results = "asis", echo = FALSE}
pander::pandoc.table(data.frame(Sample = c("Placebo M-", "Placebo M+", "Treatment M-", "Treatment M+"),
                                Mean = c(0.12, 0.12, 0.24, 0.30),
                                SD = rep(0.45, 4)))
```

The following data model incorporates the assumptions listed above by defining a single set of outcome parameters. The data model includes three sample size sets (total sample size is set to 330, 340 and 350 patients). The sizes of the individual samples are computed based on historic information (40% of patients in the population of interest are expected to have a marker-positive status). In order to define specific sample sizes for each sample, they will be specified within each `Sample` object.

```r
# Outcome parameters
outcome.placebo.minus = parameters(mean = 0.12, sd = 0.45)
outcome.placebo.plus = parameters(mean = 0.12, sd = 0.45)
outcome.treatment.minus = parameters(mean = 0.24, sd = 0.45)
outcome.treatment.plus = parameters(mean = 0.30, sd = 0.45)

# Sample size parameters
sample.size.total = c(330, 340, 350)
sample.size.placebo.minus = as.list(0.3 * sample.size.total)
sample.size.placebo.plus = as.list(0.2 * sample.size.total)
sample.size.treatment.minus = as.list(0.3 * sample.size.total)
sample.size.treatment.plus = as.list(0.2 * sample.size.total)

# Data model
case.study3.data.model = DataModel() +
  OutcomeDist(outcome.dist = "NormalDist") +
  Sample(id = "Placebo M-",
         sample.size = sample.size.placebo.minus,
         outcome.par = parameters(outcome.placebo.minus)) +
  Sample(id = "Placebo M+",
         sample.size = sample.size.placebo.plus,
         outcome.par = parameters(outcome.placebo.plus)) +
  Sample(id = "Treatment M-",
         sample.size = sample.size.treatment.minus,
         outcome.par = parameters(outcome.treatment.minus)) +
  Sample(id = "Treatment M+",
         sample.size = sample.size.treatment.plus,
         outcome.par = parameters(outcome.treatment.plus))
```

## Define an Analysis Model

The analysis model in this clinical trial example is generally similar to that used in [Case study 2](#case-study-2-1) but there is an important difference which is described below.

As in [Case study 2](#case-study-2-1), the primary endpoint follows a normal distribution and thus the treatment effect will be assessed using the two-sample *t*-test. Since two null hypotheses are tested in this trial (null hypotheses of no effect in the overall population of patients and subpopulation of marker-positive patients), a multiplicity adjustment needs to be applied.
The Hochberg procedure with equally weighted null hypotheses will be used for this purpose.

A key feature of the analysis strategy in this case study is that the samples defined in the data model are different from the samples used in the analysis of the primary endpoint. As shown in the table, four samples are included in the data model. However, from the analysis perspective, the sponsor is interested in examining the treatment effect in two samples, namely, the overall population and marker-positive subpopulation. As shown below, to perform a comparison in the overall population, the *t*-test is applied to the following analysis samples:

- **Placebo arm:** Samples 1 and 2 (`Placebo M-` and `Placebo M+`) are merged.
- **Treatment arm:** Samples 3 and 4 (`Treatment M-` and `Treatment M+`) are merged.

Further, the treatment effect test in the subpopulation of marker-positive patients is carried out based on these analysis samples:

- **Placebo arm:** Sample 2 (`Placebo M+`).
- **Treatment arm:** Sample 4 (`Treatment M+`).

These analysis samples are specified in the analysis model below. The samples defined in the data model are merged using the `c()` or `list()` function, e.g., `c("Placebo M-", "Placebo M+")` defines the placebo arm and `c("Treatment M-", "Treatment M+")` defines the experimental treatment arm in the overall population test.

```r
# Analysis model
case.study3.analysis.model = AnalysisModel() +
  MultAdjProc(proc = "HochbergAdj") +
  Test(id = "OP test",
       samples = samples(c("Placebo M-", "Placebo M+"),
                         c("Treatment M-", "Treatment M+")),
       method = "TTest") +
  Test(id = "M+ test",
       samples = samples("Placebo M+", "Treatment M+"),
       method = "TTest")
```

## Define an Evaluation Model

It is reasonable to consider the following success criteria in this case study:

- **Marginal power:** Probability of a significant outcome in each patient population.
- **Disjunctive power:** Probability of a significant treatment effect in the overall population (OP) or marker-positive subpopulation (M+). This metric defines the overall probability of success in this clinical trial.
- **Conjunctive power:** Probability of simultaneously achieving significance in the overall population and marker-positive subpopulation. This criterion will be useful if the trial's sponsor is interested in pursuing an enhanced efficacy claim ([Millen et al., 2012](http://dij.sagepub.com/content/46/6/647.abstract)).

The following evaluation model applies the three criteria to the two tests listed in the analysis model:

```r
# Evaluation model
case.study3.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("OP test", "M+ test"),
            labels = c("OP test", "M+ test"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Disjunctive power",
            method = "DisjunctivePower",
            tests = tests("OP test", "M+ test"),
            labels = "Disjunctive power",
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Conjunctive power",
            method = "ConjunctivePower",
            tests = tests("OP test", "M+ test"),
            labels = "Conjunctive power",
            par = parameters(alpha = 0.025))
```

## Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%203.R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%203.docx)

# Case study 4

## Summary

Case study 4 serves as an extension of the oncology clinical trial example presented in [Case study 1](#case-study-1-1).
Consider again a Phase III trial in patients with metastatic colorectal cancer (MCC). The same general design will be assumed in this section; however, an additional endpoint (overall survival) will be introduced. The case of two endpoints helps showcase the package's ability to model complex design and analysis strategies in trials with multivariate outcomes.

Progression-free survival (PFS) is the primary endpoint in this clinical trial and overall survival (OS) serves as the key secondary endpoint, which provides supportive evidence of treatment efficacy. A hierarchical testing approach will be utilized in the analysis of the two endpoints. The PFS analysis will be performed first at α = 0.025 (one-sided), followed by the OS analysis at the same level if a significant effect on PFS is established. The resulting testing procedure is equivalent to the fixed-sequence procedure and controls the overall Type I error rate ([Dmitrienko and D’Agostino, 2013](http://onlinelibrary.wiley.com/doi/10.1002/sim.5990/abstract)).

The treatment effect assumptions that will be used in clinical scenario evaluation are listed in the table below. The table shows the hypothesized median times along with the corresponding hazard rates for the primary and secondary endpoints. It follows from the table that the expected effect size is much larger for PFS compared to OS (PFS hazard ratio is lower than OS hazard ratio).

```{r, results = "asis", echo = FALSE}
pander::pandoc.table(data.frame(Endpoint = c("Progression-free survival", "", "", "Overall survival", "", ""),
                                Statistic = c(rep(c("Median time (months)", "Hazard rate", "Hazard ratio"), 2)),
                                Placebo = c(6, 0.116, 0.67, 15, 0.046, 0.79),
                                Treatment = c(9, 0.077, "", 19, 0.036, "")))
```

## Define a Data Model

In this clinical trial two endpoints are evaluated for each patient (PFS and OS) and thus their joint distribution needs to be listed in the general set. A bivariate exponential distribution will be used in this example and samples from this bivariate distribution will be generated by the `MVExpoPFSOSDist` function which implements multivariate exponential distributions. The function utilizes the copula method, i.e., random variables that follow a bivariate normal distribution will be generated and then converted into exponential random variables.

The next several statements specify the parameters of the bivariate exponential distribution:

- Parameters of the marginal exponential distributions, i.e., the hazard rates.
- Correlation matrix of the underlying multivariate normal distribution used in the copula method.

The hazard rates for PFS and OS in each treatment arm are defined based on the information presented in the table above (`placebo.par` and `treatment.par`) and the correlation matrix is specified based on historical information (`corr.matrix`). These parameters are combined to define the outcome parameter sets (`outcome.placebo` and `outcome.treatment`) that will be included in the sample-specific set of data model parameters (`Sample` object).
```r
# Outcome parameters: Progression-free survival
median.time.pfs.placebo = 6
rate.pfs.placebo = log(2)/median.time.pfs.placebo
outcome.pfs.placebo = parameters(rate = rate.pfs.placebo)

median.time.pfs.treatment = 9
rate.pfs.treatment = log(2)/median.time.pfs.treatment
outcome.pfs.treatment = parameters(rate = rate.pfs.treatment)

hazard.pfs.ratio = rate.pfs.treatment/rate.pfs.placebo

# Outcome parameters: Overall survival
median.time.os.placebo = 15
rate.os.placebo = log(2)/median.time.os.placebo
outcome.os.placebo = parameters(rate = rate.os.placebo)

median.time.os.treatment = 19
rate.os.treatment = log(2)/median.time.os.treatment
outcome.os.treatment = parameters(rate = rate.os.treatment)

hazard.os.ratio = rate.os.treatment/rate.os.placebo

# Parameter lists
placebo.par = parameters(parameters(rate = rate.pfs.placebo),
                         parameters(rate = rate.os.placebo))
treatment.par = parameters(parameters(rate = rate.pfs.treatment),
                           parameters(rate = rate.os.treatment))

# Correlation between two endpoints
corr.matrix = matrix(c(1.0, 0.3,
                       0.3, 1.0), 2, 2)

# Outcome parameters
outcome.placebo = parameters(par = placebo.par, corr = corr.matrix)
outcome.treatment = parameters(par = treatment.par, corr = corr.matrix)
```

To define the sample-specific data model parameters, a 2:1 randomization ratio will be used in this clinical trial and thus the number of events as well as the randomization ratio are specified by the user in the `Event` object. Secondly, a separate sample ID needs to be assigned to each endpoint within the two samples (e.g., `Placebo PFS` and `Placebo OS`) corresponding to the two treatment arms. This will enable the user to construct analysis models for examining the treatment effect on each endpoint.

```r
# Number of events
event.count.total = c(270, 300)
randomization.ratio = c(1, 2)

# Data model
case.study4.data.model = DataModel() +
  OutcomeDist(outcome.dist = "MVExpoPFSOSDist") +
  Event(n.events = event.count.total, rando.ratio = randomization.ratio) +
  Sample(id = list("Placebo PFS", "Placebo OS"),
         outcome.par = parameters(outcome.placebo)) +
  Sample(id = list("Treatment PFS", "Treatment OS"),
         outcome.par = parameters(outcome.treatment))
```

## Define an Analysis Model

The treatment comparisons for both endpoints will be carried out based on the log-rank test (`method = "LogrankTest"`). Further, as was stated in the beginning of this page, the two endpoints will be tested hierarchically using a multiplicity adjustment procedure known as the fixed-sequence procedure. This procedure belongs to the class of chain procedures (`proc = "ChainAdj"`) and the following figure provides a visual summary of the decision rules used in this procedure.
![](figures/CaseStudy04-fig1.png)
The circles in this figure denote the two null hypotheses of interest:

- H1: Null hypothesis of no difference between the two arms with respect to PFS.
- H2: Null hypothesis of no difference between the two arms with respect to OS.

The value displayed above a circle defines the initial weight of each null hypothesis. All of the overall α is allocated to H1 to ensure that the OS test will be carried out only if a significant effect on PFS is established, and the arrow indicates that H2 will be tested after H1 is rejected. More formally, a chain procedure is uniquely defined by specifying a vector of hypothesis weights (W) and a matrix of transition parameters (G). Based on the figure, these parameters are given by
![](figures/CaseStudy04-fig2.png)
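In plain notation, these parameters simply restate the `chain.weight` and `chain.transition` values used in the code below:

$$
W = (1, 0), \qquad G = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},
$$

i.e., all of α is initially assigned to H1 and is transferred in full to H2 once H1 is rejected.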
Two objects (named `chain.weight` and `chain.transition`) are defined below to pass the hypothesis weights and transition parameters to the multiplicity adjustment parameters.

```r
# Parameters of the chain procedure (fixed-sequence procedure)
# Vector of hypothesis weights
chain.weight = c(1, 0)
# Matrix of transition parameters
chain.transition = matrix(c(0, 1,
                            0, 0), 2, 2, byrow = TRUE)

# Analysis model
case.study4.analysis.model = AnalysisModel() +
  MultAdjProc(proc = "ChainAdj",
              par = parameters(weight = chain.weight,
                               transition = chain.transition)) +
  Test(id = "PFS test",
       samples = samples("Placebo PFS", "Treatment PFS"),
       method = "LogrankTest") +
  Test(id = "OS test",
       samples = samples("Placebo OS", "Treatment OS"),
       method = "LogrankTest")
```

As shown above, the two significance tests included in the analysis model reflect the two-fold objective of this trial. The first test focuses on a PFS comparison between the two treatment arms (`id = "PFS test"`) whereas the other test is carried out to assess the treatment effect on OS (`id = "OS test"`).

Alternatively, the fixed-sequence procedure can be implemented using the `FixedSeqAdj` method introduced in version 1.0.4. This implementation is more straightforward as no parameters need to be specified.

```r
# Analysis model
case.study4.analysis.model = AnalysisModel() +
  MultAdjProc(proc = "FixedSeqAdj") +
  Test(id = "PFS test",
       samples = samples("Placebo PFS", "Treatment PFS"),
       method = "LogrankTest") +
  Test(id = "OS test",
       samples = samples("Placebo OS", "Treatment OS"),
       method = "LogrankTest")
```

## Define an Evaluation Model

The evaluation model specifies the most basic criterion for assessing the probability of success in the PFS and OS analyses (marginal power). A criterion based on disjunctive power could be considered but it would not provide additional information: due to the hierarchical testing approach, the probability of detecting a significant treatment effect on at least one endpoint (disjunctive power) is simply equal to the probability of establishing a significant PFS effect.

```r
# Evaluation model
case.study4.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("PFS test", "OS test"),
            labels = c("PFS test", "OS test"),
            par = parameters(alpha = 0.025))
```

## Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded from the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%204.R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%204.docx)

# Case study 5

## Summary

This case study extends the straightforward setting presented in [Case study 1](#case-study-1-1) to a more complex setting involving two trial endpoints and three treatment arms. Case study 5 illustrates the process of performing power calculations in clinical trials with multiple, hierarchically structured objectives and "multivariate" multiplicity adjustment strategies (gatekeeping procedures).

Consider a three-arm Phase III clinical trial for the treatment of rheumatoid arthritis (RA). Two co-primary endpoints will be used to evaluate the effect of a novel treatment on clinical response and on physical function. The endpoints are defined as follows:

- Endpoint 1: Response rate based on the American College of Rheumatology definition of improvement (ACR20).
- Endpoint 2: Change from baseline in the Health Assessment Questionnaire-Disability Index (HAQ-DI).
The two endpoints have different marginal distributions. The first endpoint is binary whereas the second one is continuous and follows a normal distribution.

The efficacy profile of two doses of a new treatment (Dose L and Dose H) will be compared to that of a placebo and a successful outcome will be defined as a significant treatment effect at either or both doses. A hierarchical structure has been established within each dose so that Endpoint 2 will be tested if and only if there is evidence of a significant effect on Endpoint 1.

Three treatment effect scenarios for each endpoint are displayed in the table below. The scenarios define three outcome parameter sets. The first set represents a rather conservative treatment effect scenario, the second set is a standard (most plausible) scenario and the third set represents an optimistic scenario. Note that a reduction in the HAQ-DI score indicates a beneficial effect and thus the mean changes are assumed to be negative for Endpoint 2.

```{r, results = "asis", echo = FALSE}
pander::pandoc.table(data.frame(Endpoint = c("ACR20 (%)", "", "", "HAQ-DI (mean (SD))", "", ""),
                                "Outcome parameter set" = c(rep(c("Conservative", "Standard", "Optimistic"), 2)),
                                Placebo = c("30%", "30%", "30%", "-0.10 (0.50)", "-0.10 (0.50)", "-0.10 (0.50)"),
                                "Dose L" = c("40%", "45%", "50%", "-0.20 (0.50)", "-0.25 (0.50)", "-0.30 (0.50)"),
                                "Dose H" = c("50%", "55%", "60%", "-0.30 (0.50)", "-0.35 (0.50)", "-0.40 (0.50)")))
```

## Define a Data Model

As in [Case study 4](#case-study-4-1), two endpoints are evaluated for each patient in this clinical trial example, which means that their joint distribution needs to be specified. The `MVMixedDist` method will be utilized for specifying a bivariate distribution with binomial and normal marginals (`var.type = list("BinomDist", "NormalDist")`). In general, this function is used for modeling correlated normal, binomial and exponential endpoints and relies on the copula method, i.e., random variables are generated from a multivariate normal distribution and converted into variables with pre-specified marginal distributions.

Three parameters must be defined to specify the joint distribution of Endpoints 1 and 2 in this clinical trial example:

- Variable types (binomial and normal).
- Outcome distribution parameters (proportion for Endpoint 1, mean and SD for Endpoint 2) based on the assumptions listed in the table above.
- Correlation matrix of the multivariate normal distribution used in the copula method.

These parameters are combined to define three outcome parameter sets (e.g., `outcome1.placebo`, `outcome1.dosel` and `outcome1.doseh`) that will be included in the `Sample` object in the data model.
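Analogously to the bivariate exponential sketch in Case study 4, the following rough illustration shows how the copula method can turn correlated normals into one binary and one normal endpoint. This is a sketch under the placebo assumptions from the table above, not the package's actual `MVMixedDist` implementation.

```r
library(MASS)

n = 100
corr.matrix = matrix(c(1.0, 0.5,
                       0.5, 1.0), 2, 2)

# Correlated normals from the copula
z = mvrnorm(n, mu = c(0, 0), Sigma = corr.matrix)

# Endpoint 1 (ACR20): binary with a response rate of 0.30 in the placebo arm
acr20 = as.numeric(pnorm(z[, 1]) <= 0.30)

# Endpoint 2 (HAQ-DI): normal with mean -0.10 and SD 0.50 in the placebo arm
haqdi = -0.10 + 0.50 * z[, 2]
```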
```r
# Variable types
var.type = list("BinomDist", "NormalDist")

# Outcome distribution parameters
placebo.par = parameters(parameters(prop = 0.3),
                         parameters(mean = -0.10, sd = 0.5))

dosel.par1 = parameters(parameters(prop = 0.40),
                        parameters(mean = -0.20, sd = 0.5))
dosel.par2 = parameters(parameters(prop = 0.45),
                        parameters(mean = -0.25, sd = 0.5))
dosel.par3 = parameters(parameters(prop = 0.50),
                        parameters(mean = -0.30, sd = 0.5))

doseh.par1 = parameters(parameters(prop = 0.50),
                        parameters(mean = -0.30, sd = 0.5))
doseh.par2 = parameters(parameters(prop = 0.55),
                        parameters(mean = -0.35, sd = 0.5))
doseh.par3 = parameters(parameters(prop = 0.60),
                        parameters(mean = -0.40, sd = 0.5))

# Correlation between the two endpoints
corr.matrix = matrix(c(1.0, 0.5,
                       0.5, 1.0), 2, 2)

# Outcome parameter set 1
outcome1.placebo = parameters(type = var.type, par = placebo.par, corr = corr.matrix)
outcome1.dosel = parameters(type = var.type, par = dosel.par1, corr = corr.matrix)
outcome1.doseh = parameters(type = var.type, par = doseh.par1, corr = corr.matrix)

# Outcome parameter set 2
outcome2.placebo = parameters(type = var.type, par = placebo.par, corr = corr.matrix)
outcome2.dosel = parameters(type = var.type, par = dosel.par2, corr = corr.matrix)
outcome2.doseh = parameters(type = var.type, par = doseh.par2, corr = corr.matrix)

# Outcome parameter set 3
outcome3.placebo = parameters(type = var.type, par = placebo.par, corr = corr.matrix)
outcome3.dosel = parameters(type = var.type, par = dosel.par3, corr = corr.matrix)
outcome3.doseh = parameters(type = var.type, par = doseh.par3, corr = corr.matrix)
```

These outcome parameter sets are then combined within each `Sample` object and the common sample size per treatment arm ranges between 100 and 120:

```r
# Data model
case.study5.data.model = DataModel() +
  OutcomeDist(outcome.dist = "MVMixedDist") +
  SampleSize(c(100, 120)) +
  Sample(id = list("Placebo ACR20", "Placebo HAQ-DI"),
         outcome.par = parameters(outcome1.placebo, outcome2.placebo, outcome3.placebo)) +
  Sample(id = list("DoseL ACR20", "DoseL HAQ-DI"),
         outcome.par = parameters(outcome1.dosel, outcome2.dosel, outcome3.dosel)) +
  Sample(id = list("DoseH ACR20", "DoseH HAQ-DI"),
         outcome.par = parameters(outcome1.doseh, outcome2.doseh, outcome3.doseh))
```

## Define an Analysis Model

To set up the analysis model in this clinical trial example, note that the treatment comparisons for Endpoints 1 and 2 will be carried out based on two different statistical tests:

- Endpoint 1: Two-sample test for comparing proportions (`method = "PropTest"`).
- Endpoint 2: Two-sample t-test (`method = "TTest"`).

As pointed out earlier on this page, the two endpoints will be tested hierarchically within each dose. The figure below provides a visual summary of the testing strategy used in this clinical trial. The circles in this figure denote the four null hypotheses of interest:

- H1: Null hypothesis of no difference between Dose L and placebo with respect to Endpoint 1.
- H2: Null hypothesis of no difference between Dose H and placebo with respect to Endpoint 1.
- H3: Null hypothesis of no difference between Dose L and placebo with respect to Endpoint 2.
- H4: Null hypothesis of no difference between Dose H and placebo with respect to Endpoint 2.
![](figures/CaseStudy05-fig1.png)
A multiple testing procedure known as the multiple-sequence gatekeeping procedure will be applied to account for the hierarchical structure of this multiplicity problem. This procedure belongs to the class of mixture-based gatekeeping procedures introduced in [Dmitrienko et al. (2015)](http://www.tandfonline.com/doi/abs/10.1080/10543406.2015.1074917). This gatekeeping procedure is specified by defining the following three parameters:

- Families of null hypotheses (`family`).
- Component procedures used in the families (`component.procedure`).
- Truncation parameters used in the families (`gamma`).

```r
# Parameters of the gatekeeping procedure (multiple-sequence gatekeeping procedure)
# Tests to which the multiplicity adjustment will be applied
test.list = tests("Placebo vs DoseH - ACR20",
                  "Placebo vs DoseL - ACR20",
                  "Placebo vs DoseH - HAQ-DI",
                  "Placebo vs DoseL - HAQ-DI")

# Families of hypotheses
family = families(family1 = c(1, 2), family2 = c(3, 4))

# Component procedures for each family
component.procedure = families(family1 = "HolmAdj", family2 = "HolmAdj")

# Truncation parameter for each family
gamma = families(family1 = 0.8, family2 = 1)
```

These parameters are included in the `MultAdjProc` object defined below. The tests to which the multiplicity adjustment will be applied are defined in the `tests` argument. The use of this argument is optional if the adjustment is to be applied to all tests included in the analysis model. The `family` argument states that the null hypotheses will be grouped into two families:

- Family 1: H1 and H2.
- Family 2: H3 and H4.

Note that the order of the hypotheses corresponds to the order of the tests defined in the analysis model, unless the tests are explicitly listed in the `tests` argument of the `MultAdjProc` object. The families will be tested sequentially and a truncated Holm procedure will be applied within each family (`component.procedure`). Lastly, the truncation parameter will be set to 0.8 in Family 1 and to 1 in Family 2 (`gamma`). The resulting parameters are included in the `par` argument of the `MultAdjProc` object and, as before, the `proc` argument is used to specify the multiple testing procedure (`MultipleSequenceGatekeepingAdj`).

The tests are then specified in the analysis model and the overall analysis model is defined as follows:

```r
# Analysis model
case.study5.analysis.model = AnalysisModel() +
  MultAdjProc(proc = "MultipleSequenceGatekeepingAdj",
              par = parameters(family = family,
                               proc = component.procedure,
                               gamma = gamma),
              tests = test.list) +
  Test(id = "Placebo vs DoseL - ACR20",
       method = "PropTest",
       samples = samples("Placebo ACR20", "DoseL ACR20")) +
  Test(id = "Placebo vs DoseH - ACR20",
       method = "PropTest",
       samples = samples("Placebo ACR20", "DoseH ACR20")) +
  Test(id = "Placebo vs DoseL - HAQ-DI",
       method = "TTest",
       samples = samples("DoseL HAQ-DI", "Placebo HAQ-DI")) +
  Test(id = "Placebo vs DoseH - HAQ-DI",
       method = "TTest",
       samples = samples("DoseH HAQ-DI", "Placebo HAQ-DI"))
```

Recall that a numerically lower value indicates a beneficial effect for the HAQ-DI score and, as a result, the experimental treatment arm must be defined prior to the placebo arm in the `samples` argument of the `Test` objects corresponding to the HAQ-DI tests, e.g., `samples = samples("DoseL HAQ-DI", "Placebo HAQ-DI")`.
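The same gatekeeping procedure can also be applied outside of a simulation, e.g., to a single set of observed p-values, via the package's `AdjustPvalues` function (see the adjusted p-values vignette included with the package). In the sketch below, the raw p-values are hypothetical, while the `family`, `component.procedure` and `gamma` objects are those defined above.

```r
# Hypothetical one-sided raw p-values for H1-H4
rawp = c(0.011, 0.008, 0.022, 0.014)

AdjustPvalues(rawp,
              proc = "MultipleSequenceGatekeepingAdj",
              par = parameters(family = family,
                               proc = component.procedure,
                               gamma = gamma))
```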
## Define an Evaluation Model

In order to assess the probability of success in this clinical trial, a hybrid criterion based on the conjunctive criterion (both trial endpoints must be significant) and the disjunctive criterion (at least one dose-placebo comparison must be significant) can be considered. This criterion will be met if a significant effect is established at one or two doses on Endpoint 1 (ACR20) and also at one or two doses on Endpoint 2 (HAQ-DI). However, due to the hierarchical structure of the testing strategy (see the figure above), this is equivalent to demonstrating a significant difference between placebo and at least one dose with respect to Endpoint 2. The corresponding criterion is a subset disjunctive criterion based on the two Endpoint 2 tests (subset disjunctive power was briefly mentioned in [Case study 2](#case-study-2-1)).

In addition, the sponsor may also be interested in evaluating marginal power as well as subset disjunctive power based on the Endpoint 1 tests. The latter criterion will be met if a significant difference between placebo and at least one dose is established with respect to Endpoint 1. Additionally, as in [Case study 2](#case-study-2-1), the user could consider defining custom evaluation criteria.

The three resulting evaluation criteria (marginal power, subset disjunctive criterion based on the Endpoint 1 tests and subset disjunctive criterion based on the Endpoint 2 tests) are included in the following evaluation model.

```r
# Evaluation model
case.study5.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs DoseL - ACR20",
                          "Placebo vs DoseH - ACR20",
                          "Placebo vs DoseL - HAQ-DI",
                          "Placebo vs DoseH - HAQ-DI"),
            labels = c("Placebo vs DoseL - ACR20",
                       "Placebo vs DoseH - ACR20",
                       "Placebo vs DoseL - HAQ-DI",
                       "Placebo vs DoseH - HAQ-DI"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Disjunctive power - ACR20",
            method = "DisjunctivePower",
            tests = tests("Placebo vs DoseL - ACR20",
                          "Placebo vs DoseH - ACR20"),
            labels = "Disjunctive power - ACR20",
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Disjunctive power - HAQ-DI",
            method = "DisjunctivePower",
            tests = tests("Placebo vs DoseL - HAQ-DI",
                          "Placebo vs DoseH - HAQ-DI"),
            labels = "Disjunctive power - HAQ-DI",
            par = parameters(alpha = 0.025))
```

## Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded from the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%205.R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%205.docx)

# Case study 6

## Summary

Case study 6 is an extension of [Case study 2](#case-study-2-1) where the objective of the sponsor is to compare several Multiple Testing Procedures (MTPs). The main difference is in the specification of the analysis model.

## Define a Data Model

The same data model as in [Case study 2](#case-study-2-1) will be used in this case study. However, as shown in the table below, a new set of outcome parameters will be added in this case study (an optimistic set of parameters).
```{r, results = "asis", echo = FALSE}
pander::pandoc.table(data.frame("Outcome parameter set" = c("Standard", "", "", "", "Optimistic", "", "", ""),
                                "Arm" = c(rep(c("Placebo", "Dose L", "Dose M", "Dose H"), 2)),
                                "Mean" = c(16, 19.5, 21, 21, 16, 20, 21, 22),
                                "SD" = c(rep(18, 8))))
```

```r
# Standard
outcome1.placebo = parameters(mean = 16, sd = 18)
outcome1.dosel = parameters(mean = 19.5, sd = 18)
outcome1.dosem = parameters(mean = 21, sd = 18)
outcome1.doseh = parameters(mean = 21, sd = 18)

# Optimistic
outcome2.placebo = parameters(mean = 16, sd = 18)
outcome2.dosel = parameters(mean = 20, sd = 18)
outcome2.dosem = parameters(mean = 21, sd = 18)
outcome2.doseh = parameters(mean = 22, sd = 18)

# Data model
case.study6.data.model = DataModel() +
  OutcomeDist(outcome.dist = "NormalDist") +
  SampleSize(seq(220, 260, 20)) +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome1.placebo, outcome2.placebo)) +
  Sample(id = "Dose L",
         outcome.par = parameters(outcome1.dosel, outcome2.dosel)) +
  Sample(id = "Dose M",
         outcome.par = parameters(outcome1.dosem, outcome2.dosem)) +
  Sample(id = "Dose H",
         outcome.par = parameters(outcome1.doseh, outcome2.doseh))
```

## Define an Analysis Model

As in [Case study 2](#case-study-2-1), each dose-placebo comparison will be performed using a one-sided two-sample *t*-test (`TTest` method defined in each `Test` object). The same nomenclature will be used to define the hypotheses, i.e.:

- H1: Null hypothesis of no difference between Dose L and placebo.
- H2: Null hypothesis of no difference between Dose M and placebo.
- H3: Null hypothesis of no difference between Dose H and placebo.

In this case study, as in [Case study 2](#case-study-2-1), the overall success criterion in the trial is formulated in terms of demonstrating a beneficial effect at any of the three doses, which inflates the overall Type I error rate unless a multiplicity adjustment is applied. In this case study, the sponsor is interested in comparing several Multiple Testing Procedures, such as the weighted Bonferroni, Holm and Hochberg procedures. These MTPs are defined below:

```r
# Multiplicity adjustments

# No adjustment
mult.adj1 = MultAdjProc(proc = NA)

# Bonferroni adjustment (with unequal weights)
mult.adj2 = MultAdjProc(proc = "BonferroniAdj",
                        par = parameters(weight = c(1/4, 1/4, 1/2)))

# Holm adjustment (with unequal weights)
mult.adj3 = MultAdjProc(proc = "HolmAdj",
                        par = parameters(weight = c(1/4, 1/4, 1/2)))

# Hochberg adjustment (with unequal weights)
mult.adj4 = MultAdjProc(proc = "HochbergAdj",
                        par = parameters(weight = c(1/4, 1/4, 1/2)))
```

The `mult.adj1` object, which specifies that no adjustment will be used, is defined in order to observe the decrease in power induced by each MTP. It should be noted that for each weighted procedure, a higher weight is assigned to the test of Placebo vs Dose H (1/2), and the remaining weight is equally split between the two other tests (i.e., 1/4 for each test). These parameters are specified in the `par` argument of each MTP.
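To see how these weighted procedures differ before running any simulations, they can be applied to a single set of raw p-values with the `AdjustPvalues` function (see the adjusted p-values vignette); the p-values below are hypothetical and serve only as a quick illustration.

```r
# Hypothetical one-sided raw p-values for the three dose-placebo tests
rawp = c(0.021, 0.012, 0.006)

# Adjusted p-values under the three weighted procedures
sapply(c("BonferroniAdj", "HolmAdj", "HochbergAdj"),
       function(x) AdjustPvalues(rawp,
                                 proc = x,
                                 par = parameters(weight = c(1/4, 1/4, 1/2))))
```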
The analysis model is defined as follows:

```r
# Analysis model
case.study6.analysis.model = AnalysisModel() +
  MultAdj(mult.adj1, mult.adj2, mult.adj3, mult.adj4) +
  Test(id = "Placebo vs Dose L",
       samples = samples("Placebo", "Dose L"),
       method = "TTest") +
  Test(id = "Placebo vs Dose M",
       samples = samples("Placebo", "Dose M"),
       method = "TTest") +
  Test(id = "Placebo vs Dose H",
       samples = samples("Placebo", "Dose H"),
       method = "TTest")
```

For the sake of compactness, all MTPs are combined using a `MultAdj` object, but it is worth mentioning that each MTP could have been added directly to the `AnalysisModel` object using the `+` operator.

## Define an Evaluation Model

As for the data model, the same evaluation model as in [Case study 2](#case-study-2-1) will be used in this case study; refer to [Case study 2](#case-study-2-1) for more information.

```r
# Evaluation model
case.study6.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs Dose L",
                          "Placebo vs Dose M",
                          "Placebo vs Dose H"),
            labels = c("Placebo vs Dose L",
                       "Placebo vs Dose M",
                       "Placebo vs Dose H"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Disjunctive power",
            method = "DisjunctivePower",
            tests = tests("Placebo vs Dose L",
                          "Placebo vs Dose M",
                          "Placebo vs Dose H"),
            labels = "Disjunctive power",
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Dose H and at least one dose",
            method = "case.study6.criterion",
            tests = tests("Placebo vs Dose L",
                          "Placebo vs Dose M",
                          "Placebo vs Dose H"),
            labels = "Dose H and at least one of the two other doses are significant",
            par = parameters(alpha = 0.025))
```

The last `Criterion` object specifies a custom criterion which computes the probability of a significant treatment effect at Dose H and a significant treatment difference at Dose L or Dose M.

## Perform Clinical Scenario Evaluation

Using the data, analysis and evaluation models, simulation-based Clinical Scenario Evaluation is performed by calling the `CSE` function:

```r
# Simulation parameters
case.study6.sim.parameters = SimParameters(n.sims = 1000,
                                           proc.load = "full",
                                           seed = 42938001)

# Perform Clinical Scenario Evaluation
case.study6.results = CSE(case.study6.data.model,
                          case.study6.analysis.model,
                          case.study6.evaluation.model,
                          case.study6.sim.parameters)
```

## Generate a Simulation Report

This case study will also illustrate the process of customizing a Word-based simulation report. This can be accomplished by defining custom sections and subsections to provide a structured summary of the complex set of simulation results.

### Create a Customized Simulation Report

#### Define a Presentation Model

Several presentation models will be used to produce customized simulation reports:

- A report without subsections.
- A report with subsections.
- A report with combined sections.

First of all, a default `PresentationModel` object (`case.study6.presentation.model.default`) will be created. This object will include the common components of the report that are shared across the presentation models.
The project information (`Project` object), sorting options in summary tables (`Table` object) and specification of custom labels (`CustomLabel` objects) are included in this object:

```r
case.study6.presentation.model.default = PresentationModel() +
  Project(username = "[Mediana's User]",
          title = "Case study 6",
          description = "Clinical trial in patients with schizophrenia - Several MTPs") +
  Table(by = "sample.size") +
  CustomLabel(param = "sample.size",
              label = paste0("N = ", seq(220, 260, 20))) +
  CustomLabel(param = "multiplicity.adjustment",
              label = c("No adjustment", "Bonferroni adjustment", "Holm adjustment", "Hochberg adjustment"))
```

#### Report without subsections

The first simulation report will include a section for each outcome parameter set. To accomplish this, a `Section` object is added to the default `PresentationModel` object and the report is generated:

```r
# Reporting 1 - Without subsections
case.study6.presentation.model1 = case.study6.presentation.model.default +
  Section(by = "outcome.parameter")

# Report generation
GenerateReport(presentation.model = case.study6.presentation.model1,
               cse.results = case.study6.results,
               report.filename = "Case study 6 - Without subsections.docx")
```

#### Report with subsections

The second report will include a section for each outcome parameter set and, in addition, a subsection will be created for each multiplicity adjustment procedure. The `Section` and `Subsection` objects are added to the default `PresentationModel` object as shown below and the report is generated:

```r
# Reporting 2 - With subsections
case.study6.presentation.model2 = case.study6.presentation.model.default +
  Section(by = "outcome.parameter") +
  Subsection(by = "multiplicity.adjustment")

# Report generation
GenerateReport(presentation.model = case.study6.presentation.model2,
               cse.results = case.study6.results,
               report.filename = "Case study 6 - With subsections.docx")
```

#### Report with combined sections

Finally, the third report will include a section for each combination of outcome parameter set and multiplicity adjustment procedure. This is accomplished by adding a `Section` object to the default `PresentationModel` object and specifying both the outcome parameter and the multiplicity adjustment in the section's `by` argument.

```r
# Reporting 3 - Combined sections
case.study6.presentation.model3 = case.study6.presentation.model.default +
  Section(by = c("outcome.parameter", "multiplicity.adjustment"))

# Report generation
GenerateReport(presentation.model = case.study6.presentation.model3,
               cse.results = case.study6.results,
               report.filename = "Case study 6 - Combined Sections.docx")
```

## Download

The R code and the reports that summarize the results of Clinical Scenario Evaluation for this case study can be downloaded from the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%206.R)
- [CSE report without subsections](http://gpaux.github.io/Mediana/Case%20study%206%20-%20Without%20subsections.docx)
- [CSE report with subsections](http://gpaux.github.io/Mediana/Case%20study%206%20-%20With%20subsections.docx)
- [CSE report with combined sections](http://gpaux.github.io/Mediana/Case%20study%206%20-%20Combined%20Sections.docx)

Mediana/inst/doc/case-studies.R0000644000176200001440000000676213464544412016070 0ustar liggesusers
## ---- results = "asis", echo = FALSE-------------------------------------
pander::pandoc.table(data.frame(Arm = c("Placebo", "Dose L", "Dose M", "Dose H"),
                                Mean = c(16, 19.5, 21, 21),
                                SD = rep(18, 4)))

## ---- results = "asis", echo = FALSE-------------------------------------
pander::pandoc.table(data.frame(Sample = c("Placebo M-", "Placebo M+", "Treatment M-", "Treatment M+"),
                                Mean = c(0.12, 0.12, 0.24, 0.30),
                                SD = rep(0.45, 4)))

## ---- results = "asis", echo = FALSE-------------------------------------
pander::pandoc.table(data.frame(Endpoint = c("Progression-free survival", "", "", "Overall survival", "", ""),
                                Statistic = c(rep(c("Median time (months)", "Hazard rate", "Hazard ratio"), 2)),
                                Placebo = c(6, 0.116, 0.67, 15, 0.046, 0.79),
                                Treatment = c(9, 0.077, "", 19, 0.036, "")))

## ---- results = "asis", echo = FALSE-------------------------------------
pander::pandoc.table(data.frame(Endpoint = c("ACR20 (%)", "", "", "HAQ-DI (mean (SD))", "", ""),
                                "Outcome parameter set" = c(rep(c("Conservative", "Standard", "Optimistic"), 2)),
                                Placebo = c("30%", "30%", "30%", "-0.10 (0.50)", "-0.10 (0.50)", "-0.10 (0.50)"),
                                "Dose L" = c("40%", "45%", "50%", "-0.20 (0.50)", "-0.25 (0.50)", "-0.30 (0.50)"),
                                "Dose H" = c("50%", "55%", "60%", "-0.30 (0.50)", "-0.35 (0.50)", "-0.40 (0.50)")))

## ---- results = "asis", echo = FALSE-------------------------------------
pander::pandoc.table(data.frame("Outcome parameter set" = c("Standard", "", "", "", "Optimistic", "", "", ""),
                                "Arm" = c(rep(c("Placebo", "Dose L", "Dose M", "Dose H"), 2)),
                                "Mean" = c(16, 19.5, 21, 21, 16, 20, 21, 22),
                                "SD" = c(rep(18, 8))))
Mediana/inst/doc/adjusted-pvalues.Rmd0000644000176200001440000003032213434027611017262 0ustar liggesusers---
title: "Adjusted p-values and one-sided simultaneous confidence limits"
author: "Gautier Paux and Alex Dmitrienko"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Adjusted p-values and one-sided simultaneous confidence limits}
  %\VignetteEngine{knitr::rmarkdown}
  \usepackage[utf8]{inputenc}
---

# Introduction

Along with the clinical trial simulations feature, the Mediana R package can be used to obtain adjusted *p*-values and one-sided simultaneous confidence limits.

# `AdjustPvalues` function

The `AdjustPvalues` function can be used to get adjusted *p*-values for commonly used multiple testing procedures based on univariate p-values (Bonferroni, Holm, Hommel, Hochberg, fixed-sequence and fallback procedures), commonly used parametric multiple testing procedures (single-step and step-down Dunnett procedures) and multistage gatekeeping procedures.

## Description

### Inputs

The `AdjustPvalues` function requires the input of pre-specified objects defined in the following arguments:

- `pval` defines the raw *p*-values.
- `proc` defines the multiple testing procedure. Several procedures are already implemented in the Mediana package (listed below, along with the required or optional parameters to specify in the `par` argument):
    - `BonferroniAdj`: Bonferroni procedure. Optional parameter: `weight`.
    - `HolmAdj`: Holm procedure. Optional parameter: `weight`.
    - `HochbergAdj`: Hochberg procedure. Optional parameter: `weight`.
    - `HommelAdj`: Hommel procedure. Optional parameter: `weight`.
    - `FixedSeqAdj`: Fixed-sequence procedure.
    - `FallbackAdj`: Fallback procedure. Required parameter: `weight`.
    - `DunnettAdj`: Single-step Dunnett procedure. Required parameter: `n`.
    - `StepDownDunnettAdj`: Step-down Dunnett procedure. Required parameter: `n`.
    - `ChainAdj`: Family of chain procedures. Required parameters: `weight` and `transition`.
    - `NormalParamAdj`: Parametric multiple testing procedure derived from a multivariate normal distribution. Required parameter: `corr`. Optional parameter: `weight`.
    - `ParallelGatekeepingAdj`: Family of parallel gatekeeping procedures. Required parameters: `family`, `proc`, `gamma`.
    - `MultipleSequenceGatekeepingAdj`: Family of multiple-sequence gatekeeping procedures. Required parameters: `family`, `proc`, `gamma`.
    - `MixtureGatekeepingAdj`: Family of mixture-based gatekeeping procedures. Required parameters: `family`, `proc`, `gamma`, `serial`, `parallel`.
- `par` defines the parameters associated with the multiple testing procedure.

### Outputs

The `AdjustPvalues` function returns a vector of adjusted *p*-values.

## Example

The following example illustrates the use of the `AdjustPvalues` function to get adjusted *p*-values for traditional nonparametric, semiparametric and parametric procedures, as well as more complex multiple testing procedures.

### Traditional nonparametric and semiparametric procedures

To illustrate the adjustment of raw *p*-values with the traditional nonparametric and semiparametric procedures, we will consider the following three raw *p*-values:

```r
rawp = c(0.012, 0.009, 0.023)
```

These *p*-values will be adjusted with several multiple testing procedures as specified below:

```r
# Bonferroni, Holm, Hochberg, Hommel, fixed-sequence and fallback procedures
proc = c("BonferroniAdj", "HolmAdj", "HochbergAdj", "HommelAdj", "FixedSeqAdj", "FallbackAdj")
```

In order to obtain the adjusted *p*-values for all these procedures, the `sapply` function can be used as follows. Note that as no `weight` parameter is defined, the equally weighted procedures are used to adjust the *p*-values. Finally, for the fixed-sequence procedure (`FixedSeqAdj`), the order of the testing sequence is based on the order of the *p*-values in the vector.

```r
# Equally weighted
sapply(proc, function(x) {AdjustPvalues(rawp, proc = x)})
```

The output is as follows:

```r
     BonferroniAdj HolmAdj HochbergAdj HommelAdj FixedSeqAdj FallbackAdj
[1,]         0.036   0.027       0.023     0.023       0.012      0.0360
[2,]         0.027   0.027       0.023     0.018       0.012      0.0270
[3,]         0.069   0.027       0.023     0.023       0.023      0.0345
```

In order to specify unequal weights for the three raw *p*-values, the `weight` parameter can be defined as follows. Note that this parameter has no effect on the adjustment with the fixed-sequence procedure.

```r
# Unequally weighted (no effect on the fixed-sequence procedure)
sapply(proc, function(x) {AdjustPvalues(rawp,
                                        proc = x,
                                        par = parameters(weight = c(1/2, 1/4, 1/4)))})
```

The output is as follows:

```r
     BonferroniAdj HolmAdj HochbergAdj HommelAdj FixedSeqAdj FallbackAdj
[1,]         0.024   0.024       0.018     0.018       0.012       0.024
[2,]         0.036   0.024       0.018     0.018       0.012       0.024
[3,]         0.092   0.024       0.023     0.023       0.023       0.024
```

### Traditional parametric procedures

Consider a clinical trial comparing three doses with a placebo based on a normally distributed endpoint.
Let H1, H2 and H3 be the three null hypotheses of no effect tested in the trial:

- H1: No difference between Dose 1 and Placebo
- H2: No difference between Dose 2 and Placebo
- H3: No difference between Dose 3 and Placebo

The treatment effect estimates, corresponding to the mean dose-placebo differences, are specified below, as well as the pooled standard deviation, the sample size, the standard errors and the *T*-statistics associated with the three dose-placebo tests:

```r
# Treatment effect estimates (mean dose-placebo differences)
est = c(2.3, 2.5, 1.9)

# Pooled standard deviation
sd = 9.5

# Study design is balanced with 180 patients per treatment arm
n = 180

# Standard errors
stderror = rep(sd*sqrt(2/n), 3)

# T-statistics associated with the three dose-placebo tests
stat = est/stderror
```

Based on the *T*-statistics, the raw *p*-values can be easily obtained:

```r
# One-sided p-values
rawp = 1 - pt(stat, 2*(n-1))
```

The adjusted *p*-values based on the single-step Dunnett and step-down Dunnett procedures are obtained as follows.

```r
# Adjusted p-values based on the Dunnett procedures
# (assuming that each test statistic follows a t distribution)
AdjustPvalues(rawp, proc = "DunnettAdj", par = parameters(n = n))
AdjustPvalues(rawp, proc = "StepDownDunnettAdj", par = parameters(n = n))
```

The outputs are presented below.

```r
> AdjustPvalues(rawp, proc = "DunnettAdj", par = parameters(n = n))
[1] 0.02887019 0.01722656 0.07213393
> AdjustPvalues(rawp, proc = "StepDownDunnettAdj", par = parameters(n = n))
[1] 0.02043820 0.01722544 0.02909082
```

### Gatekeeping procedures

For illustration, we will consider a clinical trial with two families of null hypotheses. The first family contains the null hypotheses associated with Endpoints 1 and 2, which are considered primary endpoints, and the second family contains the null hypotheses associated with Endpoints 3 and 4 (key secondary endpoints). The null hypotheses of the secondary family will be tested if and only if at least one null hypothesis from the first family is rejected. Let H1, H2, H3 and H4 be the four null hypotheses of no effect on Endpoints 1, 2, 3 and 4, respectively, tested in the trial:

- H1: No difference between Drug and Placebo on Endpoint 1 (Family 1)
- H2: No difference between Drug and Placebo on Endpoint 2 (Family 1)
- H3: No difference between Drug and Placebo on Endpoint 3 (Family 2)
- H4: No difference between Drug and Placebo on Endpoint 4 (Family 2)

The raw *p*-values are specified below:

```r
# One-sided raw p-values (associated respectively with H1, H2, H3 and H4)
rawp = c(0.0082, 0.0174, 0.0042, 0.0180)
```

The parameters of the parallel gatekeeping procedure are specified using the three arguments `family` which specifies the hypotheses included in each family, `proc` which specifies the component procedure associated with each family and `gamma` which specifies the truncation parameter of each family.
```r
# Define hypotheses included in each family (indices in the raw p-value vector)
family = families(family1 = c(1, 2), family2 = c(3, 4))

# Define component procedure of each family
component.procedure = families(family1 = "HolmAdj", family2 = "HolmAdj")

# Truncation parameter of each family
gamma = families(family1 = 0.5, family2 = 1)
```

The adjusted *p*-values are obtained using the `AdjustPvalues` function as specified below:

```r
AdjustPvalues(rawp,
              proc = "ParallelGatekeepingAdj",
              par = parameters(family = family,
                               proc = component.procedure,
                               gamma = gamma))
[1] 0.0164 0.0232 0.0232 0.0232
```

# `AdjustCIs` function

The `AdjustCIs` function can be used to get simultaneous confidence intervals for selected multiple testing procedures based on univariate p-values (Bonferroni, Holm and fixed-sequence procedures) and commonly used parametric multiple testing procedures (single-step and step-down Dunnett procedures).

## Description

### Inputs

The `AdjustCIs` function requires the input of pre-specified objects defined in the following arguments:

- `est` defines the point estimates.
- `proc` defines the multiple testing procedure. Several procedures are already implemented in the Mediana package (listed below, along with the required or optional parameters to specify in the `par` argument):
    - `BonferroniAdj`: Bonferroni procedure. Required parameters: `n`, `sd` and `covprob`. Optional parameter: `weight`.
    - `HolmAdj`: Holm procedure. Required parameters: `n`, `sd` and `covprob`. Optional parameter: `weight`.
    - `FixedSeqAdj`: Fixed-sequence procedure. Required parameters: `n`, `sd` and `covprob`.
    - `DunnettAdj`: Single-step Dunnett procedure. Required parameters: `n`, `sd` and `covprob`.
    - `StepDownDunnettAdj`: Step-down Dunnett procedure. Required parameters: `n`, `sd` and `covprob`.
- `par` defines the parameters associated with the multiple testing procedure.

### Outputs

The `AdjustCIs` function returns a vector of lower simultaneous confidence limits.

## Example

Consider a clinical trial comparing three doses with a placebo based on a normally distributed endpoint. Let H1, H2 and H3 be the three null hypotheses of no effect tested in the trial:

- H1: No difference between Dose 1 and Placebo
- H2: No difference between Dose 2 and Placebo
- H3: No difference between Dose 3 and Placebo

The treatment effect estimates, corresponding to the mean dose-placebo differences, are specified below, as well as the pooled standard deviation and the sample size.

```r
# Null hypotheses of no treatment effect are equally weighted
weight = c(1/3, 1/3, 1/3)

# Treatment effect estimates (mean dose-placebo differences)
est = c(2.3, 2.5, 1.9)

# Pooled standard deviation
sd = 9.5

# Study design is balanced with 180 patients per treatment arm
n = 180
```

The one-sided simultaneous confidence limits for several multiple testing procedures are obtained using the `AdjustCIs` function wrapped in a `sapply` function.
```r
# Bonferroni, Holm, fixed-sequence, single-step Dunnett and step-down Dunnett procedures
proc = c("BonferroniAdj", "HolmAdj", "FixedSeqAdj", "DunnettAdj", "StepDownDunnettAdj")

# Equally weighted
sapply(proc, function(x) {AdjustCIs(est,
                                    proc = x,
                                    par = parameters(sd = sd,
                                                     n = n,
                                                     covprob = 0.975,
                                                     weight = weight))})
```

The output obtained is presented below:

```r
     BonferroniAdj     HolmAdj FixedSeqAdj  DunnettAdj StepDownDunnettAdj
[1,]   -0.09730247  0.00000000  0.00000000 -0.05714354         0.00000000
[2,]    0.10269753  0.00000000  0.00000000  0.14285646         0.00000000
[3,]   -0.49730247 -0.06268427 -0.06268427 -0.45714354        -0.06934203
```
Mediana/inst/doc/mediana.Rmd0000644000176200001440000016462513434027611015413 0ustar liggesusers---
title: "Mediana: an R package for clinical trial simulations"
author: "Gautier Paux and Alex Dmitrienko"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Mediana: an R package for clinical trial simulations}
  %\VignetteEngine{knitr::rmarkdown}
  \usepackage[utf8]{inputenc}
---

# Introduction

## About

Mediana is an R package which provides a general framework for clinical trial simulations based on the Clinical Scenario Evaluation approach. The package supports a broad class of data models (including clinical trials with continuous, binary, survival-type and count-type endpoints as well as multivariate outcomes that are based on combinations of different endpoints), analysis strategies and commonly used evaluation criteria.

## Expert and development teams

**Package design**: Alex Dmitrienko (Mediana Inc.).

**Core development team**: Gautier Paux (Servier), Alex Dmitrienko (Mediana Inc.).

**Extended development team**: Thomas Brechenmacher (Novartis), Fei Chen (Johnson and Johnson), Ilya Lipkovich (Quintiles), Ming-Dauh Wang (Lilly), Jay Zhang (MedImmune), Haiyan Zheng (Osaka University).

**Expert team**: Keaven Anderson (Merck), Frank Harrell (Vanderbilt University), Mani Lakshminarayanan (Pfizer), Brian Millen (Lilly), Jose Pinheiro (Johnson and Johnson), Thomas Schmelter (Bayer).

## Installation

### Latest release

Install the latest version of the Mediana package from CRAN using the `install.packages` command in R:

```r
install.packages("Mediana")
```

Alternatively, you can download the package from the [CRAN website](https://cran.r-project.org/package=Mediana).

### Development version

The up-to-date development version can be found and installed directly from the GitHub web site. You need to install the `devtools` package and then call the `install_github` function in R:

```r
# install.packages("devtools")
devtools::install_github("gpaux/Mediana")
```

## Clinical Scenario Evaluation Framework

The Mediana R package was developed to provide a general software implementation of the Clinical Scenario Evaluation (CSE) framework. This framework, introduced by [Benda et al. (2010)](http://dij.sagepub.com/content/44/3/299.abstract) and [Friede et al. (2010)](http://dij.sagepub.com/content/44/6/713.abstract), recognizes that sample size calculation and power evaluation in clinical trials are high-dimensional statistical problems. This approach helps decompose this complex problem by identifying key elements of the evaluation process. These components are termed models:

- [Data models](#data-model) define the process of generating trial data (e.g., sample sizes, outcome distributions and parameters).
- [Analysis models](#analysis-model) define the statistical methods applied to the trial data (e.g., statistical tests, multiplicity adjustments).
- [Evaluation models](#evaluation-model) specify the measures for evaluating the performance of the analysis strategies (e.g., traditional success criteria such as marginal power or composite criteria such as disjunctive power).

Find out more about the role of each model and how to specify the three models to perform Clinical Scenario Evaluation by reviewing the dedicated pages (click on the links above).

## Case studies

Multiple case studies are provided on the [package's web site](http://gpaux.github.io/Mediana/CaseStudies.html) to facilitate the implementation of Clinical Scenario Evaluation in different clinical trial settings using the Mediana package. These case studies will be updated on a regular basis. Another vignette presenting these case studies can be accessed with the following command:

```r
vignette("case-studies", package = "Mediana")
```

The Mediana package has been successfully used in multiple clinical trials to perform power calculations as well as to optimally select trial designs and analysis strategies (clinical trial optimization). For more information on applications of the Mediana package, download the following papers:

- [Dmitrienko, A., Paux, G., Brechenmacher, T. (2016). Power calculations in clinical trials with complex clinical objectives. Journal of the Japanese Society of Computational Statistics. 28, 15-50.](https://www.jstage.jst.go.jp/article/jjscs/28/1/28_1411001_213/_article)
- [Dmitrienko, A., Paux, G., Pulkstenis, E., Zhang, J. (2016). Tradeoff-based optimization criteria in clinical trials with multiple objectives and adaptive designs. Journal of Biopharmaceutical Statistics. 26, 120-140.](http://www.tandfonline.com/doi/abs/10.1080/10543406.2015.1092032?journalCode=lbps20)

# Data model

Data models define the process of generating patient data in clinical trials.

## Initialization

A data model can be initialized using the following command:

```r
# DataModel initialization
data.model = DataModel()
```

It is highly recommended to use this command as it will simplify the process of specifying components of the data model, e.g., `OutcomeDist`, `Sample`, `SampleSize`, `Event` and `Design` objects.

## Components of a data model

Once the `DataModel` object has been initialized, components of the data model can be specified by adding objects to the model using the `+` operator as shown below.

```r
# Outcome parameter set 1
outcome1.placebo = parameters(mean = 0, sd = 70)
outcome1.treatment = parameters(mean = 40, sd = 70)

# Outcome parameter set 2
outcome2.placebo = parameters(mean = 0, sd = 70)
outcome2.treatment = parameters(mean = 50, sd = 70)

# Data model
case.study1.data.model = DataModel() +
  OutcomeDist(outcome.dist = "NormalDist") +
  SampleSize(c(50, 55, 60, 65, 70)) +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome1.placebo, outcome2.placebo)) +
  Sample(id = "Treatment",
         outcome.par = parameters(outcome1.treatment, outcome2.treatment))
```

### `OutcomeDist` object

#### Description

This object specifies the distribution of patient outcomes in a data model. An `OutcomeDist` object is defined by two arguments:

- `outcome.dist` defines the outcome distribution.
- `outcome.type` defines the outcome type (optional). There are two acceptable values of this argument: `standard` (fixed-design setting) and `event` (event-driven design setting).

Several distributions that can be specified using the `outcome.dist` argument are already implemented in the Mediana package.
These distributions are listed below along with the required parameters to be included in the `outcome.par` argument of the `Sample` object:

- `UniformDist`: generate data following a **uniform distribution**. Required parameter: `max`.
- `NormalDist`: generate data following a **normal distribution**. Required parameters: `mean` and `sd`.
- `BinomDist`: generate data following a **binomial distribution**. Required parameter: `prop`.
- `BetaDist`: generate data following a **beta distribution**. Required parameters: `a` and `b`.
- `ExpoDist`: generate data following an **exponential distribution**. Required parameter: `rate`.
- `WeibullDist`: generate data following a **Weibull distribution**. Required parameters: `shape` and `scale`.
- `TruncatedExpoDist`: generate data following a **truncated exponential distribution**. Required parameters: `rate` and `trunc`.
- `PoissonDist`: generate data following a **Poisson distribution**. Required parameter: `lambda`.
- `NegBinomDist`: generate data following a **negative binomial distribution**. Required parameters: `dispersion` and `mean`.
- `MultinomialDist`: generate data following a **multinomial distribution**. Required parameter: `prob`.
- `MVNormalDist`: generate data following a **multivariate normal distribution**. Required parameters: `par` and `corr`. For each generated endpoint, the `par` parameter must contain the required parameters `mean` and `sd`. The `corr` parameter specifies the correlation matrix for the endpoints.
- `MVBinomDist`: generate data following a **multivariate binomial distribution**. Required parameters: `par` and `corr`. For each generated endpoint, the `par` parameter must contain the required parameter `prop`. The `corr` parameter specifies the correlation matrix for the endpoints.
- `MVExpoDist`: generate data following a **multivariate exponential distribution**. Required parameters: `par` and `corr`. For each generated endpoint, the `par` parameter must contain the required parameter `rate`. The `corr` parameter specifies the correlation matrix for the endpoints.
- `MVExpoPFSOSDist`: generate data following a **multivariate exponential distribution to generate PFS and OS endpoints**. The PFS value is imputed to the OS value if the latter occurs earlier. Required parameters: `par` and `corr`. For each generated endpoint, the `par` parameter must contain the required parameter `rate`. The `corr` parameter specifies the correlation matrix for the endpoints.
- `MVMixedDist`: generate data following a **multivariate mixed distribution**. Required parameters: `type`, `par` and `corr`. The `type` parameter assumes the following values: `NormalDist`, `BinomDist` and `ExpoDist`. For each generated endpoint, the `par` parameter must contain the required parameters according to the distribution type. The `corr` parameter specifies the correlation matrix for the endpoints.

The `outcome.type` argument defines the outcome's type. This argument accepts only two values:

- `standard`: for the fixed design setting.
- `event`: for the event-driven design setting.

The outcome's type must be defined for each endpoint in the case of a multivariate distribution, e.g., `c("event", "event")` in the case of a bivariate exponential distribution. The `outcome.type` argument is essential to obtain censored events for time-to-event endpoints if the `SampleSize` object is used to specify the number of patients to generate. A single `OutcomeDist` object can be added to a `DataModel` object.
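For example, based on the description above, a time-to-event endpoint generated for a fixed number of patients would be declared as follows (a usage sketch, not output from a specific case study):

```r
# Exponentially distributed time-to-event endpoint in a fixed design:
# marking the outcome as "event" allows censored observations to be
# generated when the number of patients is fixed via SampleSize
OutcomeDist(outcome.dist = "ExpoDist", outcome.type = "event")

# Bivariate exponential endpoints (e.g., PFS and OS): the outcome type
# must be set for each endpoint
OutcomeDist(outcome.dist = "MVExpoPFSOSDist", outcome.type = c("event", "event"))
```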
For more information about the `OutcomeDist` object, see the documentation for [OutcomeDist](https://cran.r-project.org/package=Mediana/Mediana.pdf) on the CRAN web site.

If a certain outcome distribution is not implemented in the Mediana package, the user can create a custom function and use it within the package (see the dedicated vignette `vignette("custom-functions", package = "Mediana")`).

#### Example

Examples of `OutcomeDist` objects:

Specify popular univariate distributions:

```r
# Normal distribution
OutcomeDist(outcome.dist = "NormalDist")

# Binomial distribution
OutcomeDist(outcome.dist = "BinomDist")

# Exponential distribution
OutcomeDist(outcome.dist = "ExpoDist")
```

Specify a mixed multivariate distribution:

```r
# Multivariate mixed distribution
OutcomeDist(outcome.dist = "MVMixedDist")
```

### `Sample` object

#### Description

This object specifies the parameters of a sample (e.g., a treatment arm) in a data model. Samples are defined as mutually exclusive groups of patients, for example, treatment arms. A `Sample` object is defined by three arguments:

- `id` defines the sample's unique ID (label).
- `outcome.par` defines the parameters of the outcome distribution for the sample.
- `sample.size` defines the sample's size (optional).

The `sample.size` argument is optional but must be used to define the sample size if an unbalanced design is considered (i.e., the sample size varies across the samples). The sample size must be defined either in the `Sample` object or in the `SampleSize` object, but not in both.

Several `Sample` objects can be added to a `DataModel` object.

For more information about the `Sample` object, see the documentation for [Sample](https://cran.r-project.org/package=Mediana/Mediana.pdf) on the CRAN web site.
#### Example

Examples of `Sample` objects:

Specify two samples with a continuous endpoint following a normal distribution:

```r
# Outcome parameter set 1
outcome1.placebo = parameters(mean = 0, sd = 70)
outcome1.treatment = parameters(mean = 40, sd = 70)

# Outcome parameter set 2
outcome2.placebo = parameters(mean = 0, sd = 70)
outcome2.treatment = parameters(mean = 50, sd = 70)

# Placebo sample object
Sample(id = "Placebo",
       outcome.par = parameters(outcome1.placebo, outcome2.placebo))

# Treatment sample object
Sample(id = "Treatment",
       outcome.par = parameters(outcome1.treatment, outcome2.treatment))
```

Specify two samples with a binary endpoint following a binomial distribution:

```r
# Outcome parameter set
outcome.placebo = parameters(prop = 0.30)
outcome.treatment = parameters(prop = 0.50)

# Placebo sample object
Sample(id = "Placebo",
       outcome.par = parameters(outcome.placebo))

# Treatment sample object
Sample(id = "Treatment",
       outcome.par = parameters(outcome.treatment))
```

Specify two samples with a time-to-event (survival) endpoint following an exponential distribution:

```r
# Outcome parameters
median.time.placebo = 6
rate.placebo = log(2) / median.time.placebo
outcome.placebo = parameters(rate = rate.placebo)

median.time.treatment = 9
rate.treatment = log(2) / median.time.treatment
outcome.treatment = parameters(rate = rate.treatment)

# Placebo sample object
Sample(id = "Placebo",
       outcome.par = parameters(outcome.placebo))

# Treatment sample object
Sample(id = "Treatment",
       outcome.par = parameters(outcome.treatment))
```

Specify three samples with two primary endpoints that follow a binomial and a normal distribution, respectively:

```r
# Variable types
var.type = list("BinomDist", "NormalDist")

# Outcome distribution parameters
placebo.par = parameters(parameters(prop = 0.3),
                         parameters(mean = -0.10, sd = 0.5))
dosel.par = parameters(parameters(prop = 0.40),
                       parameters(mean = -0.20, sd = 0.5))
doseh.par = parameters(parameters(prop = 0.50),
                       parameters(mean = -0.30, sd = 0.5))

# Correlation between the two endpoints
corr.matrix = matrix(c(1.0, 0.5,
                       0.5, 1.0), 2, 2)

# Outcome parameter sets
outcome.placebo = parameters(type = var.type, par = placebo.par, corr = corr.matrix)
outcome.dosel = parameters(type = var.type, par = dosel.par, corr = corr.matrix)
outcome.doseh = parameters(type = var.type, par = doseh.par, corr = corr.matrix)

# Placebo sample object
Sample(id = list("Plac ACR20", "Plac HAQ-DI"),
       outcome.par = parameters(outcome.placebo))

# Low Dose sample object
Sample(id = list("DoseL ACR20", "DoseL HAQ-DI"),
       outcome.par = parameters(outcome.dosel))

# High Dose sample object
Sample(id = list("DoseH ACR20", "DoseH HAQ-DI"),
       outcome.par = parameters(outcome.doseh))
```

### `SampleSize` object

#### Description

This object specifies the sample size in a balanced trial design (all samples have the same size). A `SampleSize` object is defined by one argument:

- `sample.size` specifies a list or vector of sample size(s).

A single `SampleSize` object can be added to a `DataModel` object.

For more information about the `SampleSize` object, see the package's documentation [SampleSize](https://cran.r-project.org/package=Mediana/Mediana.pdf).
#### Example

Examples of `SampleSize` objects:

Several equivalent specifications of the `SampleSize` object:

```r
SampleSize(c(50, 55, 60, 65, 70))
SampleSize(list(50, 55, 60, 65, 70))
SampleSize(seq(50, 70, 5))
```

### `Event` object

#### Description

This object specifies the total number of events (total event count) among all samples in an event-driven clinical trial. An `Event` object is defined by two arguments:

- `n.events` defines a vector of the required event counts.
- `rando.ratio` defines a vector of randomization ratios for each `Sample` object defined in the `DataModel` object.

A single `Event` object can be added to a `DataModel` object.

For more information about the `Event` object, see the package's documentation [Event](https://cran.r-project.org/package=Mediana/Mediana.pdf).

#### Example

Examples of `Event` objects:

Specify the required number of events in a trial with a 2:1 randomization ratio (Treatment:Placebo):

```r
# Event parameters
event.count.total = c(390, 420)
randomization.ratio = c(1, 2)

# Event object
Event(n.events = event.count.total,
      rando.ratio = randomization.ratio)
```

### `Design` object

#### Description

This object specifies the design parameters used in event-driven designs if the user is interested in modeling the enrollment (or accrual) and dropout (or loss to follow-up) processes. A `Design` object is defined by seven arguments:

- `enroll.period` defines the length of the enrollment period.
- `enroll.dist` defines the enrollment distribution.
- `enroll.dist.par` defines the parameters of the enrollment distribution (optional).
- `followup.period` defines the length of the follow-up period for each patient in study designs with a fixed follow-up period, i.e., the length of time from the enrollment to planned discontinuation is constant across patients. The user must specify either `followup.period` or `study.duration`.
- `study.duration` defines the total study duration in study designs with a variable follow-up period. The total study duration is defined as the length of time from the enrollment of the first patient to the discontinuation of the last patient.
- `dropout.dist` defines the dropout distribution.
- `dropout.dist.par` defines the parameters of the dropout distribution.

Several `Design` objects can be added to a `DataModel` object.

For more information about the `Design` object, see the package's documentation [Design](https://cran.r-project.org/package=Mediana/Mediana.pdf).

A convenient way to model non-uniform enrollment is to use a beta distribution (`BetaDist`). If `enroll.dist = "BetaDist"`, the `enroll.dist.par` argument should contain the parameters of the beta distribution (`a` and `b`). These parameters must be derived according to the expected enrollment at a specific time point. For example, if half of the patients are expected to be enrolled at 75% of the enrollment period, the beta distribution is `Beta(log(0.5)/log(0.75), 1)`, as illustrated in the sketch below.
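A possible `Design` specification using this beta enrollment distribution is sketched below; the enrollment period, study duration and dropout rate are assumed values borrowed from the uniform-enrollment example that follows:

```r
# Design object with non-uniform enrollment modeled by a beta distribution
# (all numeric values below are illustrative assumptions)
Design(enroll.period = 9,
       study.duration = 21,
       enroll.dist = "BetaDist",
       # Half of the patients are expected to be enrolled at 75% of the
       # enrollment period, i.e., a = log(0.5)/log(0.75) and b = 1
       enroll.dist.par = parameters(a = log(0.5) / log(0.75), b = 1),
       dropout.dist = "ExpoDist",
       dropout.dist.par = parameters(rate = 0.0115))
```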
Generally, let `q` be the proportion of patients expected to be enrolled at `100p`% of the enrollment period. The beta distribution can then be derived as follows:

- If `q < p`, the beta distribution is `Beta(a, 1)` with `a = log(q) / log(p)`.
- If `q > p`, the beta distribution is `Beta(1, b)` with `b = log(1-q) / log(1-p)`.
- Otherwise, the beta distribution is `Beta(1, 1)`, i.e., enrollment is uniform.

#### Example

Examples of `Design` objects:

Specify parameters of the enrollment and dropout processes with a uniform enrollment distribution and an exponential dropout distribution:

```r
# Design parameters (in months)
Design(enroll.period = 9,
       study.duration = 21,
       enroll.dist = "UniformDist",
       dropout.dist = "ExpoDist",
       dropout.dist.par = parameters(rate = 0.0115))
```

# Analysis model

Analysis models define statistical methods (e.g., significance tests or descriptive statistics) that are applied to the study data in a clinical trial.

## Initialization

An analysis model can be initialized using the following command:

```r
# AnalysisModel initialization
analysis.model = AnalysisModel()
```

It is highly recommended to use this command to initialize an analysis model as it will simplify the process of specifying components of the analysis model, including the `MultAdj`, `MultAdjProc`, `MultAdjStrategy`, `Test` and `Statistic` objects.

## Components of an analysis model

After an `AnalysisModel` object has been initialized, components of the analysis model can be specified by adding objects to the model using the '+' operator as shown below.

```r
# Analysis model
case.study1.analysis.model = AnalysisModel() +
  Test(id = "Placebo vs treatment",
       samples = samples("Placebo", "Treatment"),
       method = "TTest") +
  Statistic(id = "Mean Treatment",
            method = "MeanStat",
            samples = samples("Treatment"))
```

### `Test` object

#### Description

This object specifies a significance test that will be applied to one or more samples defined in a data model. A `Test` object is defined by the following four arguments:

- `id` defines the test's unique ID (label).
- `method` defines the significance test.
- `samples` defines the IDs of the samples (defined in the data model) that the significance test is applied to.
- `par` defines the parameter(s) of the statistical test.

Several commonly used significance tests are already implemented in the Mediana package. In addition, the user can easily define custom significance tests (see the dedicated vignette `vignette("custom-functions", package = "Mediana")`). The built-in tests are listed below along with the required parameters that need to be included in the `par` argument:

- `TTest`: perform the **two-sample t-test** between the two samples defined in the `samples` argument. Optional parameter: `larger` (larger value expected in the second sample, `TRUE` or `FALSE`).
- `TTestNI`: perform the **non-inferiority two-sample t-test** between the two samples defined in the `samples` argument. Required parameter: `margin` (positive non-inferiority margin). Optional parameter: `larger` (larger value expected in the second sample, `TRUE` or `FALSE`).
- `WilcoxTest`: perform the **Wilcoxon-Mann-Whitney test** between the two samples defined in the `samples` argument. Optional parameter: `larger` (larger value expected in the second sample, `TRUE` or `FALSE`).
- `PropTest`: perform the **two-sample test for proportions** between the two samples defined in the `samples` argument.
Optional parameters: `yates` (Yates' continuity correction flag, `TRUE` or `FALSE`) and `larger` (larger value expected in the second sample, `TRUE` or `FALSE`).
- `PropTestNI`: perform the **non-inferiority two-sample test for proportions** between the two samples defined in the `samples` argument. Required parameter: `margin` (positive non-inferiority margin). Optional parameters: `yates` (Yates' continuity correction flag, `TRUE` or `FALSE`) and `larger` (larger value expected in the second sample, `TRUE` or `FALSE`).
- `FisherTest`: perform **Fisher's exact test** between the two samples defined in the `samples` argument. Optional parameter: `larger` (larger value expected in the second sample, `TRUE` or `FALSE`).
- `GLMPoissonTest`: perform the **Poisson regression test** between the two samples defined in the `samples` argument. Optional parameter: `larger` (larger value expected in the second sample, `TRUE` or `FALSE`).
- `GLMNegBinomTest`: perform the **negative binomial regression test** between the two samples defined in the `samples` argument. Optional parameter: `larger` (larger value expected in the second sample, `TRUE` or `FALSE`).
- `LogrankTest`: perform the **log-rank test** between the two samples defined in the `samples` argument. Optional parameter: `larger` (larger value expected in the second sample, `TRUE` or `FALSE`).
- `OrdinalLogisticRegTest`: perform an **ordinal logistic regression test** between the two samples defined in the `samples` argument. Optional parameter: `larger` (larger value expected in the second sample, `TRUE` or `FALSE`).

It needs to be noted that the significance tests listed above are implemented as **one-sided** tests and thus the sample order in the `samples` argument is important. In particular, the Mediana package assumes by default that a numerically larger value of the endpoint is expected in Sample 2 compared to Sample 1. Suppose, for example, that a higher treatment response indicates a beneficial effect (e.g., a higher improvement rate). In this case Sample 1 should include control patients whereas Sample 2 should include patients allocated to the experimental treatment arm. The sample order needs to be reversed if a beneficial treatment effect is associated with a lower value of the endpoint (e.g., lower blood pressure). Alternatively (from version 1.0.6), the optional parameter `larger` can be set to `FALSE` to indicate that a larger value is expected in Sample 1.

Several `Test` objects can be added to an `AnalysisModel` object.

For more information about the `Test` object, see the package's documentation [Test](https://cran.r-project.org/package=Mediana/Mediana.pdf) on the CRAN web site.
#### Example

Examples of `Test` objects:

Carry out the two-sample t-test:

```r
# Placebo and Treatment samples were defined in the data model
Test(id = "Placebo vs treatment",
     samples = samples("Placebo", "Treatment"),
     method = "TTest")
```

Carry out the two-sample t-test with larger values expected in the first sample (from v1.0.6):

```r
# Placebo and Treatment samples were defined in the data model
Test(id = "Placebo vs treatment",
     samples = samples("Treatment", "Placebo"),
     method = "TTest",
     par = parameters(larger = FALSE))
```

Carry out the two-sample t-test for non-inferiority:

```r
# Placebo and Treatment samples were defined in the data model
Test(id = "Placebo vs treatment",
     samples = samples("Placebo", "Treatment"),
     method = "TTestNI",
     par = parameters(margin = 0.2))
```

Carry out the two-sample t-test with pooled samples:

```r
# Placebo M-, Placebo M+, Treatment M- and Treatment M+ samples
# were defined in the data model
Test(id = "OP test",
     samples = samples(c("Placebo M-", "Placebo M+"),
                       c("Treatment M-", "Treatment M+")),
     method = "TTest")
```

### `Statistic` object

#### Description

This object specifies a descriptive statistic that will be computed based on one or more samples defined in a data model. A `Statistic` object is defined by four arguments:

- `id` defines the descriptive statistic's unique ID (label).
- `method` defines the method for computing the statistic.
- `samples` defines the samples (pre-defined in the data model) to be used for computing the statistic.
- `par` defines the parameter(s) of the statistic.

Several methods for computing descriptive statistics are already implemented in the Mediana package and the user can also define custom functions for computing descriptive statistics (see the dedicated vignette `vignette("custom-functions", package = "Mediana")`). These methods are shown below along with the required parameters that need to be defined in the `par` argument:

- `MedianStat`: compute the **median** of the sample defined in the `samples` argument.
- `MeanStat`: compute the **mean** of the sample defined in the `samples` argument.
- `SdStat`: compute the **standard deviation** of the sample defined in the `samples` argument.
- `MinStat`: compute the **minimum** value in the sample defined in the `samples` argument.
- `MaxStat`: compute the **maximum** value in the sample defined in the `samples` argument.
- `DiffMeanStat`: compute the **difference of means** between the two samples defined in the `samples` argument. Two samples must be defined.
- `EffectSizeContStat`: compute the **effect size** for a continuous endpoint. Two samples must be defined.
- `RatioEffectSizeContStat`: compute the **ratio of two effect sizes** for a continuous endpoint. Four samples must be defined.
- `PropStat`: compute the **proportion** of the sample defined in the `samples` argument.
- `DiffPropStat`: compute the **difference of proportions** between the two samples defined in the `samples` argument. Two samples must be defined.
- `EffectSizePropStat`: compute the **effect size** for a binary endpoint. Two samples must be defined.
- `RatioEffectSizePropStat`: compute the **ratio of two effect sizes** for a binary endpoint. Four samples must be defined.
- `HazardRatioStat`: compute the **hazard ratio** of the two samples defined in the `samples` argument. Two samples must be defined. By default, the log-rank method is used. Optional argument: `method` taking the value `Log-Rank` or `Cox`.
- `EffectSizeEventStat`: compute the **effect size** for a survival endpoint (log of the hazard ratio). Two samples must be defined. By default, the log-rank method is used. Optional argument: `method` taking the value `Log-Rank` or `Cox`.
- `RatioEffectSizeEventStat`: compute the **ratio of two effect sizes** for a survival endpoint. Four samples must be defined. By default, the log-rank method is used. Optional argument: `method` taking the value `Log-Rank` or `Cox`.
- `EventCountStat`: compute the **number of events** observed in the sample(s) defined in the `samples` argument.
- `PatientCountStat`: compute the **number of patients** observed in the sample(s) defined in the `samples` argument.

Several `Statistic` objects can be added to an `AnalysisModel` object.

For more information about the `Statistic` object, see the R documentation [Statistic](https://cran.r-project.org/package=Mediana/Mediana.pdf).

#### Example

Examples of `Statistic` objects:

Compute the mean of a single sample:

```r
# Treatment sample was defined in the data model
Statistic(id = "Mean Treatment",
          method = "MeanStat",
          samples = samples("Treatment"))
```

### `MultAdjProc` object

#### Description

This object specifies a multiplicity adjustment procedure that will be applied to the significance tests in order to protect the overall Type I error rate. A `MultAdjProc` object is defined by three arguments:

- `proc` defines a multiplicity adjustment procedure.
- `par` defines the parameter(s) of the multiplicity adjustment procedure (optional).
- `tests` defines the specific tests (defined in the analysis model) to which the multiplicity adjustment procedure will be applied. If no `tests` are defined, the multiplicity adjustment procedure will be applied to all tests defined in the `AnalysisModel` object.

Several commonly used multiplicity adjustment procedures are included in the Mediana package. In addition, the user can easily define custom multiplicity adjustments. The built-in multiplicity adjustments are defined below along with the required parameters that need to be included in the `par` argument:

- `BonferroniAdj`: **Bonferroni** procedure. Optional parameter: `weight` (vector of hypothesis weights).
- `HolmAdj`: **Holm** procedure. Optional parameter: `weight` (vector of hypothesis weights).
- `HochbergAdj`: **Hochberg** procedure. Optional parameter: `weight` (vector of hypothesis weights).
- `HommelAdj`: **Hommel** procedure. Optional parameter: `weight` (vector of hypothesis weights).
- `FixedSeqAdj`: **fixed-sequence** procedure.
- `ChainAdj`: family of **chain procedures**. Required parameters: `weight` (vector of hypothesis weights) and `transition` (matrix of transition parameters).
- `FallbackAdj`: **fallback** procedure. Required parameter: `weight` (vector of hypothesis weights).
- `NormalParamAdj`: **parametric multiple testing procedure** derived from a multivariate normal distribution. Required parameter: `corr` (correlation matrix of the multivariate normal distribution). Optional parameter: `weight` (vector of hypothesis weights).
- `ParallelGatekeepingAdj`: family of **parallel gatekeeping procedures**. Required parameters: `family` (vectors of hypotheses included in each family), `proc` (vector of procedure names applied to each family) and `gamma` (vector of truncation parameters).
- `MultipleSequenceGatekeepingAdj`: family of **multiple-sequence gatekeeping procedures**.
Required parameters: `family` (vectors of hypotheses included in each family), `proc` (vector of procedure names applied to each family) and `gamma` (vector of truncation parameters).
- `MixtureGatekeepingAdj`: family of **mixture-based gatekeeping procedures**. Required parameters: `family` (vectors of hypotheses included in each family), `proc` (vector of procedure names applied to each family), `gamma` (vector of truncation parameters), `serial` (matrix of indicators) and `parallel` (matrix of indicators).

Several `MultAdjProc` objects can be added to an `AnalysisModel` object using the '+' operator or by grouping them into a `MultAdj` object.

For more information about the `MultAdjProc` object, see the package's documentation [MultAdjProc](https://cran.r-project.org/package=Mediana/Mediana.pdf).

#### Example

Examples of `MultAdjProc` objects:

Apply a multiplicity adjustment based on the chain procedure:

```r
# Parameters of the chain procedure (equivalent to a fixed-sequence procedure)
# Vector of hypothesis weights
chain.weight = c(1, 0)
# Matrix of transition parameters
chain.transition = matrix(c(0, 1,
                            0, 0), 2, 2, byrow = TRUE)

# MultAdjProc
MultAdjProc(proc = "ChainAdj",
            par = parameters(weight = chain.weight,
                             transition = chain.transition))
```

The implementation of this procedure is facilitated by the `FixedSeqAdj` method introduced in version 1.0.4:

```r
# MultAdjProc
MultAdjProc(proc = "FixedSeqAdj")
```

Apply a multiple-sequence gatekeeping procedure:

```r
# Parameters of the multiple-sequence gatekeeping procedure
# Tests to which the multiplicity adjustment will be applied
# (defined in the AnalysisModel)
test.list = tests("Pl vs DoseH - ACR20",
                  "Pl vs DoseL - ACR20",
                  "Pl vs DoseH - HAQ-DI",
                  "Pl vs DoseL - HAQ-DI")

# Hypotheses included in each family (the number corresponds to the position
# of the test in the test.list vector)
family = families(family1 = c(1, 2), family2 = c(3, 4))

# Component procedure of each family
component.procedure = families(family1 = "HolmAdj", family2 = "HolmAdj")

# Truncation parameter of each family
gamma = families(family1 = 0.8, family2 = 1)

# MultAdjProc
MultAdjProc(proc = "MultipleSequenceGatekeepingAdj",
            par = parameters(family = family,
                             proc = component.procedure,
                             gamma = gamma),
            tests = test.list)
```

### `MultAdjStrategy` object

#### Description

This object specifies a multiplicity adjustment strategy that can include several multiplicity adjustment procedures. A multiplicity adjustment strategy may be defined when the same Clinical Scenario Evaluation approach is applied to several clinical trials. A `MultAdjStrategy` object serves as a wrapper for several `MultAdjProc` objects.

For more information about the `MultAdjStrategy` object, see the package's documentation [MultAdjStrategy](https://cran.r-project.org/package=Mediana/Mediana.pdf).
#### Example

Example of a `MultAdjStrategy` object:

Perform complex multiplicity adjustments based on gatekeeping procedures in two clinical trials with three endpoints:

```r
# Parallel gatekeeping procedure parameters
family = families(family1 = c(1), family2 = c(2, 3))
component.procedure = families(family1 = "HolmAdj", family2 = "HolmAdj")
gamma = families(family1 = 0.8, family2 = 1)

# Parallel gatekeeping procedure for Trial A
mult.adj.trialA = MultAdjProc(proc = "ParallelGatekeepingAdj",
                              par = parameters(family = family,
                                               proc = component.procedure,
                                               gamma = gamma),
                              tests = tests("Trial A Pla vs Trt End1",
                                            "Trial A Pla vs Trt End2",
                                            "Trial A Pla vs Trt End3"))

# Parallel gatekeeping procedure for Trial B
mult.adj.trialB = MultAdjProc(proc = "ParallelGatekeepingAdj",
                              par = parameters(family = family,
                                               proc = component.procedure,
                                               gamma = gamma),
                              tests = tests("Trial B Pla vs Trt End1",
                                            "Trial B Pla vs Trt End2",
                                            "Trial B Pla vs Trt End3"))

# Analysis model
analysis.model = AnalysisModel() +
  MultAdjStrategy(mult.adj.trialA, mult.adj.trialB) +
  # Tests for study A
  Test(id = "Trial A Pla vs Trt End1",
       method = "PropTest",
       samples = samples("Trial A Plac End1", "Trial A Trt End1")) +
  Test(id = "Trial A Pla vs Trt End2",
       method = "TTest",
       samples = samples("Trial A Plac End2", "Trial A Trt End2")) +
  Test(id = "Trial A Pla vs Trt End3",
       method = "TTest",
       samples = samples("Trial A Plac End3", "Trial A Trt End3")) +
  # Tests for study B
  Test(id = "Trial B Pla vs Trt End1",
       method = "PropTest",
       samples = samples("Trial B Plac End1", "Trial B Trt End1")) +
  Test(id = "Trial B Pla vs Trt End2",
       method = "TTest",
       samples = samples("Trial B Plac End2", "Trial B Trt End2")) +
  Test(id = "Trial B Pla vs Trt End3",
       method = "TTest",
       samples = samples("Trial B Plac End3", "Trial B Trt End3"))
```

### `MultAdj` object

#### Description

This object can be used to combine several `MultAdjProc` or `MultAdjStrategy` objects and add them as a single object to an `AnalysisModel` object. This object is provided mainly for convenience and its use is optional. Alternatively, `MultAdjProc` or `MultAdjStrategy` objects can be added to an `AnalysisModel` object incrementally using the '+' operator.

For more information about the `MultAdj` object, see the package's documentation [MultAdj](https://cran.r-project.org/package=Mediana/Mediana.pdf).
#### Example

Example of a `MultAdj` object:

Perform Clinical Scenario Evaluation to compare three candidate multiplicity adjustment procedures:

```r
# Multiplicity adjustments to compare
mult.adj1 = MultAdjProc(proc = "BonferroniAdj")
mult.adj2 = MultAdjProc(proc = "HolmAdj")
mult.adj3 = MultAdjProc(proc = "HochbergAdj")

# Analysis model
analysis.model = AnalysisModel() +
  MultAdj(mult.adj1, mult.adj2, mult.adj3) +
  Test(id = "Pl vs Dose L",
       samples = samples("Placebo", "Dose L"),
       method = "TTest") +
  Test(id = "Pl vs Dose M",
       samples = samples("Placebo", "Dose M"),
       method = "TTest") +
  Test(id = "Pl vs Dose H",
       samples = samples("Placebo", "Dose H"),
       method = "TTest")

# Note that the code presented above is equivalent to:
analysis.model = AnalysisModel() +
  mult.adj1 + mult.adj2 + mult.adj3 +
  Test(id = "Pl vs Dose L",
       samples = samples("Placebo", "Dose L"),
       method = "TTest") +
  Test(id = "Pl vs Dose M",
       samples = samples("Placebo", "Dose M"),
       method = "TTest") +
  Test(id = "Pl vs Dose H",
       samples = samples("Placebo", "Dose H"),
       method = "TTest")
```

# Evaluation model

Evaluation models are used within the Mediana package to specify the success criteria or metrics for evaluating the performance of the selected clinical scenario (combination of data and analysis models).

## Initialization

An evaluation model can be initialized using the following command:

```r
# EvaluationModel initialization
evaluation.model = EvaluationModel()
```

It is highly recommended to use this command to initialize an evaluation model because it simplifies the process of specifying components of the evaluation model such as `Criterion` objects.

## Components of an evaluation model

After an `EvaluationModel` object has been initialized, components of the evaluation model can be specified by adding objects to the model using the '+' operator as shown below.

```r
# Evaluation model
case.study1.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs treatment"),
            labels = c("Placebo vs treatment"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Average Mean",
            method = "MeanSumm",
            statistics = statistics("Mean Treatment"),
            labels = c("Average Mean Treatment"))
```

### `Criterion` object

#### Description

This object specifies a success criterion that will be applied to a clinical scenario to evaluate the performance of selected analysis methods. A `Criterion` object is defined by six arguments:

- `id` defines the criterion's unique ID (label).
- `method` defines the criterion.
- `tests` defines the IDs of the significance tests (defined in the analysis model) that the criterion is applied to.
- `statistics` defines the IDs of the descriptive statistics (defined in the analysis model) that the criterion is applied to.
- `par` defines the parameter(s) of the criterion.
- `labels` defines the label(s) of the criterion values (the label(s) will be used in the simulation report).

Several commonly used success criteria are implemented in the Mediana package. The user can also define custom success criteria. The built-in success criteria are listed below along with the required parameters that need to be included in the `par` argument:

- `MarginalPower`: compute the marginal power of all tests included in the `tests` argument. Required parameter: `alpha` (significance level used in each test).
- `WeightedPower`: compute the weighted power of all tests included in the `tests` argument.
Required parameters: `alpha` (significance level used in each test) and `weight` (vector of weights assigned to the significance tests).
- `DisjunctivePower`: compute the disjunctive power (probability of achieving statistical significance in at least one test included in the `tests` argument). Required parameter: `alpha` (significance level used in each test).
- `ConjunctivePower`: compute the conjunctive power (probability of achieving statistical significance in all tests included in the `tests` argument). Required parameter: `alpha` (significance level used in each test).
- `ExpectedRejPower`: compute the expected number of statistically significant tests. Required parameter: `alpha` (significance level used in each test).

Several `Criterion` objects can be added to an `EvaluationModel` object.

For more information about the `Criterion` object, see the package's documentation [Criterion](https://cran.r-project.org/package=Mediana/Mediana.pdf).

If a certain success criterion is not implemented in the Mediana package, the user can create a custom function and use it within the package (see the dedicated vignette `vignette("custom-functions", package = "Mediana")`).

#### Examples

Examples of `Criterion` objects:

Compute marginal power with alpha = 0.025:

```r
Criterion(id = "Marginal power",
          method = "MarginalPower",
          tests = tests("Placebo vs treatment"),
          labels = c("Placebo vs treatment"),
          par = parameters(alpha = 0.025))
```

Compute weighted power with alpha = 0.025 and unequal test-specific weights:

```r
Criterion(id = "Weighted power",
          method = "WeightedPower",
          tests = tests("Placebo vs treatment - Endpoint 1",
                        "Placebo vs treatment - Endpoint 2"),
          labels = c("Weighted power"),
          par = parameters(alpha = 0.025,
                           weight = c(2/3, 1/3)))
```

Compute disjunctive power with alpha = 0.025:

```r
Criterion(id = "Disjunctive power",
          method = "DisjunctivePower",
          tests = tests("Placebo vs Dose H",
                        "Placebo vs Dose M",
                        "Placebo vs Dose L"),
          labels = c("Disjunctive power"),
          par = parameters(alpha = 0.025))
```

# Clinical Scenario Evaluation

Clinical Scenario Evaluation (CSE) is performed based on the data, analysis and evaluation models as well as simulation parameters specified by the user. The simulation parameters are defined using the `SimParameters` object.

## Clinical Scenario Evaluation objects

### `SimParameters` object

#### Description

The `SimParameters` object is a required argument of the `CSE` function and has the following arguments:

- `n.sims` defines the number of simulations.
- `seed` defines the seed to be used in the simulations.
- `proc.load` defines the processor load in parallel computations.

The `proc.load` argument is used to define the number of processor cores dedicated to the simulations. A numeric value can be defined as well as a character value which automatically detects the number of cores:

- `low`: 1 processor core.
- `med`: number of available processor cores / 2.
- `high`: number of available processor cores - 1.
- `full`: all available processor cores.

#### Examples

Examples of `SimParameters` object specifications:

Perform 10000 simulations using all available processor cores:

```r
SimParameters(n.sims = 10000, proc.load = "full", seed = 42938001)
```

Perform 10000 simulations using 2 processor cores:

```r
SimParameters(n.sims = 10000, proc.load = 2, seed = 42938001)
```

### `CSE` function

#### Description

The `CSE` function is invoked to run simulations under the Clinical Scenario Evaluation approach.
This function uses four arguments:

- `data` defines a `DataModel` object.
- `analysis` defines an `AnalysisModel` object.
- `evaluation` defines an `EvaluationModel` object.
- `simulation` defines a `SimParameters` object.

#### Examples

The following example illustrates the use of the `CSE` function:

```r
# Outcome parameter set 1
outcome1.placebo = parameters(mean = 0, sd = 70)
outcome1.treatment = parameters(mean = 40, sd = 70)

# Outcome parameter set 2
outcome2.placebo = parameters(mean = 0, sd = 70)
outcome2.treatment = parameters(mean = 50, sd = 70)

# Data model
case.study1.data.model = DataModel() +
  OutcomeDist(outcome.dist = "NormalDist") +
  SampleSize(c(50, 55, 60, 65, 70)) +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome1.placebo, outcome2.placebo)) +
  Sample(id = "Treatment",
         outcome.par = parameters(outcome1.treatment, outcome2.treatment))

# Analysis model
case.study1.analysis.model = AnalysisModel() +
  Test(id = "Placebo vs treatment",
       samples = samples("Placebo", "Treatment"),
       method = "TTest")

# Evaluation model
case.study1.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs treatment"),
            labels = c("Placebo vs treatment"),
            par = parameters(alpha = 0.025))

# Simulation parameters
case.study1.sim.parameters = SimParameters(n.sims = 1000,
                                           proc.load = 2,
                                           seed = 42938001)

# Perform clinical scenario evaluation
case.study1.results = CSE(case.study1.data.model,
                          case.study1.analysis.model,
                          case.study1.evaluation.model,
                          case.study1.sim.parameters)
```

### Summary of results

Once Clinical Scenario Evaluation-based simulations have been run, the `CSE` object returned by the `CSE` function contains a list with the following components:

- `simulation.results`: a data frame containing the results of the simulations for each scenario.
- `analysis.scenario.grid`: a data frame containing the grid of the combinations of data and analysis scenarios.
- `data.structure`: a list containing the data structure according to the `DataModel` object.
- `analysis.structure`: a list containing the analysis structure according to the `AnalysisModel` object.
- `evaluation.structure`: a list containing the evaluation structure according to the `EvaluationModel` object.
- `sim.parameters`: a list containing the simulation parameters according to the `SimParameters` object.
- `timestamp`: a list containing information about the start time, end time and duration of the simulation runs.

The simulation results can be summarized in the R console using the `summary` function:

```r
summary(case.study1.results)
```

A Microsoft Word-based simulation report can be generated from the simulation results produced by the `CSE` function using the `GenerateReport` function, see [Simulation report](#simulation-report).

# Simulation report

The Mediana R package uses the [officer R package](http://davidgohel.github.io/officer/) to generate a Microsoft Word-based report that summarizes the results of Clinical Scenario Evaluation-based simulations.

The user can easily customize this simulation report by adding a description of the project as well as labels to each scenario, including data scenarios (sample size, outcome distribution parameters, design parameters) and analysis scenarios (multiplicity adjustment). The user can also customize the report's structure, e.g., create sections and subsections within the report and specify how the rows will be sorted within each table.
In order to customize the report, the user has to use a `PresentationModel` object described below. Once a `PresentationModel` object has been defined, the `GenerateReport` function can be called to generate a Clinical Scenario Evaluation report.

## Initialization

A presentation model can be initialized using the following command:

```r
# PresentationModel initialization
presentation.model = PresentationModel()
```

Initialization with this command is highly recommended as it will simplify the process of adding related objects, e.g., the `Project`, `Section`, `Subsection`, `Table` and `CustomLabel` objects.

## Specific objects

Once the `PresentationModel` object has been initialized, specific objects can be added by simply using the '+' operator as in the data, analysis and evaluation models.

### `Project` object

#### Description

This object specifies a description of the project. The `Project` object is defined by three optional arguments:

- `username` defines the username to be included in the report (by default, the username is "[Unknown User]").
- `title` defines the project's title in the report (the default value is "[Unknown title]").
- `description` defines the project's description (the default value is "[No description]").

This information will be added to the report generated using the `GenerateReport` function.

A single object of the `Project` class can be added to an object of the `PresentationModel` class.

#### Examples

A simple `Project` object can be created as follows:

```r
Project(username = "Gautier Paux",
        title = "Case study 1",
        description = "Clinical trial in patients with pulmonary arterial hypertension")
```

### `Section` object

#### Description

This object specifies the sections that will be created within the simulation report. A `Section` object is defined by a single argument:

- `by` defines the rules for setting up sections.

The `by` argument can contain several parameters from the following list:

- `sample.size`: a separate section will be created for each sample size.
- `event`: a separate section will be created for each event count.
- `outcome.parameter`: a separate section will be created for each outcome parameter scenario.
- `design.parameter`: a separate section will be created for each design parameter scenario.
- `multiplicity.adjustment`: a separate section will be created for each multiplicity adjustment scenario.

Note that, if a parameter is defined in the `by` argument, it must be defined only in this object (i.e., neither in the `Subsection` object nor in the `Table` object).

A single object of the `Section` class can be added to an object of the `PresentationModel` class.

#### Examples

A `Section` object can be defined as follows:

Create a separate section within the report for each outcome parameter scenario:

```r
Section(by = "outcome.parameter")
```

Create a separate section for each unique combination of the sample size and outcome parameter scenarios:

```r
Section(by = c("sample.size", "outcome.parameter"))
```

### `Subsection` object

#### Description

This object specifies the rules for creating subsections within the simulation report. A `Subsection` object is defined by a single argument:

- `by` defines the rules for creating subsections.

The `by` argument can contain several parameters from the following list:

- `sample.size`: a separate subsection will be created for each sample size.
- `event`: a separate subsection will be created for each number of events.
- `outcome.parameter`: a separate subsection will be created for each outcome parameter scenario.
- `design.parameter`: a separate subsection will be created for each design parameter scenario.
- `multiplicity.adjustment`: a separate subsection will be created for each multiplicity adjustment scenario.

As before, if a parameter is defined in the `by` argument, it must be defined only in this object (i.e., neither in the `Section` object nor in the `Table` object).

A single object of the `Subsection` class can be added to an object of the `PresentationModel` class.

#### Examples

`Subsection` objects can be set up as follows:

Create a separate subsection for each sample size scenario:

```r
Subsection(by = "sample.size")
```

Create a separate subsection for each unique combination of the sample size and outcome parameter scenarios:

```r
Subsection(by = c("sample.size", "outcome.parameter"))
```

### `Table` object

#### Description

This object specifies how the summary tables will be sorted within the report. A `Table` object is defined by a single argument:

- `by` defines how the tables of the report will be sorted.

The `by` argument can contain several parameters from the following list:

- `sample.size`: the tables will be sorted by the sample size.
- `event`: the tables will be sorted by the number of events.
- `outcome.parameter`: the tables will be sorted by the outcome parameter scenario.
- `design.parameter`: the tables will be sorted by the design parameter scenario.
- `multiplicity.adjustment`: the tables will be sorted by the multiplicity adjustment scenario.

If a parameter is defined in the `by` argument, it must be defined only in this object (i.e., neither in the `Section` object nor in the `Subsection` object).

A single object of the `Table` class can be added to an object of the `PresentationModel` class.

#### Examples

Examples of `Table` objects:

Create a summary table sorted by sample size scenarios:

```r
Table(by = "sample.size")
```

Create a summary table sorted by sample size and outcome parameter scenarios:

```r
Table(by = c("sample.size", "outcome.parameter"))
```

### `CustomLabel` object

#### Description

This object specifies the labels that will be assigned to sets of parameter values or simulation scenarios. These labels will be used in the section and subsection titles of the Clinical Scenario Evaluation report as well as in the summary tables. A `CustomLabel` object is defined by two arguments:

- `param` defines a parameter (scenario) to which the current set of labels will be assigned.
- `label` defines the label(s) to assign to each value of the parameter.

The `param` argument can contain several parameters from the following list:

- `sample.size`: labels will be applied to the sample size values.
- `event`: labels will be applied to the event count values.
- `outcome.parameter`: labels will be applied to the outcome parameter scenarios.
- `design.parameter`: labels will be applied to the design parameter scenarios.
- `multiplicity.adjustment`: labels will be applied to the multiplicity adjustment scenarios.

Several objects of the `CustomLabel` class can be added to an object of the `PresentationModel` class.
#### Examples

Examples of `CustomLabel` objects:

Assign custom labels to the sample size values:

```r
CustomLabel(param = "sample.size",
            label = paste0("N = ", c(50, 55, 60, 65, 70)))
```

Assign custom labels to the outcome parameter scenarios:

```r
CustomLabel(param = "outcome.parameter",
            label = c("Pessimistic", "Expected", "Optimistic"))
```

## `GenerateReport` function

### Description

The Clinical Scenario Evaluation report is generated using the `GenerateReport` function. This function has four arguments:

- `presentation.model` defines a `PresentationModel` object.
- `cse.results` defines a `CSE` object returned by the `CSE` function.
- `report.filename` defines the filename of the Word-based report generated by this function.
- `report.template` defines a Word-based template (optional argument).

The `GenerateReport` function requires the [officer R package](http://davidgohel.github.io/officer/) to generate a Word-based simulation report. Optionally, a custom template can be selected by defining `report.template`; this argument specifies the name of a Word document located in the working directory.

The Word-based simulation report is structured as follows:

1. GENERAL INFORMATION
    1. PROJECT INFORMATION
    2. SIMULATION PARAMETERS
2. DATA MODEL
    1. DESIGN (if a `Design` object has been defined)
    2. SAMPLE SIZE (or EVENT if an `Event` object has been defined)
    3. OUTCOME DISTRIBUTION
    4. DESIGN
3. ANALYSIS MODEL
    1. TESTS
    2. MULTIPLICITY ADJUSTMENT
4. EVALUATION MODEL
    1. CRITERIA
5. RESULTS
    1. SECTION (if a `Section` object has been defined)
        1. SUBSECTION (if a `Subsection` object has been defined)
        2. ...
    2. ...

### Examples

This example illustrates the use of the `GenerateReport` function:

```r
# Define a presentation model
case.study1.presentation.model = PresentationModel() +
  Section(by = "outcome.parameter") +
  Table(by = "sample.size") +
  CustomLabel(param = "sample.size",
              label = paste0("N = ", c(50, 55, 60, 65, 70))) +
  CustomLabel(param = "outcome.parameter",
              label = c("Standard 1", "Standard 2"))

# Report generation
GenerateReport(presentation.model = case.study1.presentation.model,
               cse.results = case.study1.results,
               report.filename = "Case study 1 (normally distributed endpoint).docx")
```

# Case studies

Gautier Paux and Alex Dmitrienko

2019-05-08

## Introduction

Several case studies have been created to facilitate the implementation of simulation-based Clinical Scenario Evaluation (CSE) approaches in multiple settings and help the user understand individual features of the Mediana package. Case studies are arranged in terms of increasing complexity of the underlying clinical trial setting (i.e., trial design and analysis methodology). For example, Case study 1 deals with a number of basic settings and increasingly more complex settings are considered in the subsequent case studies.

### Case study 1

This case study serves as a good starting point for users who are new to the Mediana package. It focuses on clinical trials with simple designs and analysis strategies where power and sample size calculations can be performed using analytical methods.

  1. Trial with two treatment arms and single endpoint (normally distributed endpoint).
  2. Trial with two treatment arms and single endpoint (binary endpoint).
  3. Trial with two treatment arms and single endpoint (survival-type endpoint).
  4. Trial with two treatment arms and single endpoint (survival-type endpoint with censoring).
  5. Trial with two treatment arms and single endpoint (count-type endpoint).

### Case study 2

This case study is based on a clinical trial with three or more treatment arms. A multiplicity adjustment is required in this setting and no analytical methods are available to support power calculations.

This example also illustrates a key feature of the Mediana package, namely, the ability to define custom functions; for example, it shows how the user can define a new criterion in the Evaluation Model.

Clinical trial in patients with schizophrenia

### Case study 3

This case study introduces a clinical trial with several patient populations (marker-positive and marker-negative patients). It demonstrates how the user can define independent samples in a data model and then specify statistical tests in an analysis model based on merging several samples, i.e., merging samples of marker-positive and marker-negative patients to carry out a test that evaluates the treatment effect in the overall population.

Clinical trial in patients with asthma

### Case study 4

This case study illustrates CSE simulations in a clinical trial with several endpoints and helps showcase the package’s ability to model multivariate outcomes in clinical trials.

Clinical trial in patients with metastatic colorectal cancer

### Case study 5

This case study is based on a clinical trial with several endpoints and multiple treatment arms and illustrates the process of performing complex multiplicity adjustments in trials with several clinical objectives.

Clinical trial in patients with rheumatoid arthritis

### Case study 6

This case study is an extension of Case study 2 and illustrates how the package can be used to assess the performance of several multiplicity adjustments. The case study also walks the reader through the process of defining customized simulation reports.

Clinical trial in patients with schizophrenia

## Case study 1

Case study 1 deals with a simple setting, namely, a clinical trial with two treatment arms (experimental treatment versus placebo) and a single endpoint. Power calculations can be performed analytically in this setting. Specifically, closed-form expressions for the power function can be derived using the central limit theorem or other approximations.

Several distributions will be illustrated in this case study:

### Normally distributed endpoint

Suppose that a sponsor is designing a Phase III clinical trial in patients with pulmonary arterial hypertension (PAH). The efficacy of experimental treatments for PAH is commonly evaluated using a six-minute walk test and the primary endpoint is defined as the change from baseline to the end of the 16-week treatment period in the six-minute walk distance.

#### Define a Data Model

The first step is to initialize the data model:
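```r
# DataModel initialization
case.study1.data.model = DataModel()
```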

After the initialization, components of the data model can be added to the DataModel object incrementally using the + operator.

The change from baseline in the six-minute walk distance is assumed to follow a normal distribution. The distribution of the primary endpoint is defined in the OutcomeDist object:
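```r
# Outcome distribution (normally distributed endpoint)
case.study1.data.model = case.study1.data.model +
  OutcomeDist(outcome.dist = "NormalDist")
```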

The sponsor would like to perform power evaluation over a broad range of sample sizes in each treatment arm:
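```r
# Sample sizes per treatment arm
case.study1.data.model = case.study1.data.model +
  SampleSize(c(50, 55, 60, 65, 70))
```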

As a side note, the seq function can be used to compactly define sample sizes in a data model:
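```r
# Equivalent specification using seq
SampleSize(seq(50, 70, 5))
```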

The sponsor is interested in performing power calculations under two treatment effect scenarios (standard and optimistic scenarios). Under these scenarios, the experimental treatment is expected to improve the six-minute walk distance by 40 or 50 meters compared to placebo, respectively, with the common standard deviation of 70 meters.

Therefore, the mean change in the placebo arm is set to μ = 0 and the mean changes in the six-minute walk distance in the experimental arm are set to μ = 40 (standard scenario) or μ = 50 (optimistic scenario). The common standard deviation is σ = 70.
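These treatment effect assumptions can be expressed as two sets of outcome parameters, matching the data model specification shown in the manual:

```r
# Outcome parameter set 1 (standard scenario)
outcome1.placebo = parameters(mean = 0, sd = 70)
outcome1.treatment = parameters(mean = 40, sd = 70)

# Outcome parameter set 2 (optimistic scenario)
outcome2.placebo = parameters(mean = 0, sd = 70)
outcome2.treatment = parameters(mean = 50, sd = 70)
```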

Note that the mean and standard deviation are explicitly identified in each list. This is done mainly for the user’s convenience.

After having defined the outcome parameters for each sample, two Sample objects that define the two treatment arms in this trial can be created and added to the DataModel object:
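```r
# Placebo and Treatment sample objects
case.study1.data.model = case.study1.data.model +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome1.placebo, outcome2.placebo)) +
  Sample(id = "Treatment",
         outcome.par = parameters(outcome1.treatment, outcome2.treatment))
```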

#### Define an Analysis Model

Just like the data model, the analysis model needs to be initialized as follows:
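```r
# AnalysisModel initialization
case.study1.analysis.model = AnalysisModel()
```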

Only one significance test is planned to be carried out in the PAH clinical trial (treatment versus placebo). The treatment effect will be assessed using the one-sided two-sample t-test:
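```r
# One-sided two-sample t-test
case.study1.analysis.model = case.study1.analysis.model +
  Test(id = "Placebo vs treatment",
       samples = samples("Placebo", "Treatment"),
       method = "TTest")
```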

According to the specifications, the two-sample t-test will be applied to Sample 1 (Placebo) and Sample 2 (Treatment). These sample IDs come from the data model defined earlier. As explained in the manual (see Analysis Model), the sample order is determined by the expected direction of the treatment effect. In this case, an increase in the six-minute walk distance indicates a beneficial effect and a numerically larger value of the primary endpoint is expected in Sample 2 (Treatment) compared to Sample 1 (Placebo). This implies that the list of samples to be passed to the t-test should include Sample 1 followed by Sample 2. Note that, from version 1.0.6, it is possible to indicate whether a larger numeric value is expected in Sample 2 (`larger = TRUE`) or in Sample 1 (`larger = FALSE`). By default, this argument is set to `TRUE`.

To illustrate the use of the Statistic object, the mean change in the six-minute walk distance in the treatment arm can be computed using the MeanStat statistic:
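```r
# Mean change in the six-minute walk distance in the treatment arm
case.study1.analysis.model = case.study1.analysis.model +
  Statistic(id = "Mean Treatment",
            method = "MeanStat",
            samples = samples("Treatment"))
```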

#### Define an Evaluation Model

The data and analysis models specified above collectively define the Clinical Scenarios to be examined in the PAH clinical trial. The scenarios are evaluated using success criteria or metrics that are aligned with the clinical objectives of the trial. In this case it is most appropriate to use regular power or, more formally, marginal power. This success criterion is specified in the evaluation model.

First of all, the evaluation model must be initialized:
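```r
# EvaluationModel initialization
case.study1.evaluation.model = EvaluationModel()
```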

Secondly, the success criterion of interest (marginal power) is defined using the Criterion object:
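```r
# Marginal power criterion
case.study1.evaluation.model = case.study1.evaluation.model +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs treatment"),
            labels = c("Placebo vs treatment"),
            par = parameters(alpha = 0.025))
```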

The tests argument lists the IDs of the tests (defined in the analysis model) to which the criterion is applied (note that more than one test can be specified). The test IDs link the evaluation model with the corresponding analysis model. In this particular case, marginal power will be computed for the t-test that compares the mean change in the six-minute walk distance in the placebo and treatment arms (Placebo vs treatment).

In order to compute the average value of the mean statistic specified in the analysis model (i.e., the mean change in the six-minute walk distance in the treatment arm) over the simulation runs, another Criterion object needs to be added:
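```r
# Average of the mean statistic over the simulation runs
case.study1.evaluation.model = case.study1.evaluation.model +
  Criterion(id = "Average Mean",
            method = "MeanSumm",
            statistics = statistics("Mean Treatment"),
            labels = c("Average Mean Treatment"))
```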

The statistics argument of this Criterion object lists the ID of the statistic (defined in the analysis model) to which this metric is applied (e.g., Mean Treatment).

#### Perform Clinical Scenario Evaluation

After the clinical scenarios (data and analysis models) and evaluation model have been defined, the user is ready to evaluate the success criteria specified in the evaluation model by calling the CSE function.

To accomplish this, the simulation parameters need to be defined in a SimParameters object:
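```r
# Simulation parameters (the number of simulation runs, processor load and
# seed shown here mirror the illustrative values used in the manual)
case.study1.sim.parameters = SimParameters(n.sims = 1000,
                                           proc.load = 2,
                                           seed = 42938001)
```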

The function call for CSE specifies the individual components of Clinical Scenario Evaluation in this case study as well as the simulation parameters:
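```r
# Perform clinical scenario evaluation
case.study1.results = CSE(case.study1.data.model,
                          case.study1.analysis.model,
                          case.study1.evaluation.model,
                          case.study1.sim.parameters)
```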

The simulation results are saved in a CSE object (case.study1.results). This object contains complete information about this particular evaluation, including the data, analysis and evaluation models specified by the user. The most important component of this object is the data frame contained in the list named simulation.results (case.study1.results$simulation.results). This data frame includes the values of the success criteria and metrics defined in the evaluation model.

#### Summarize the Simulation Results

##### Summary of simulation results in R console

To facilitate the review of the simulation results produced by the CSE function, the user can invoke the summary function. This function displays the data frame containing the simulation results in the R console:
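```r
summary(case.study1.results)
```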

If the user is interested in generating graphical summaries of the simulation results (using the ggplot2 package or other packages), this data frame can also be saved to an object:
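```r
# Save the data frame with the simulation results
# (the object name on the left-hand side is arbitrary)
case.study1.simulation.results = case.study1.results$simulation.results
```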

#### Generate a Simulation Report

##### Presentation Model

A very useful feature of the Mediana package is the generation of a Microsoft Word-based report that provides a summary of the Clinical Scenario Evaluation.

To generate a simulation report, the user needs to define a presentation model by creating a PresentationModel object. This object must be initialized as follows:
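```r
# PresentationModel initialization
case.study1.presentation.model = PresentationModel()
```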

Project information can be added to the presentation model using the Project object:
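```r
# Project information
case.study1.presentation.model = case.study1.presentation.model +
  Project(username = "Gautier Paux",
          title = "Case study 1",
          description = "Clinical trial in patients with pulmonary arterial hypertension")
```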

The user can easily customize the simulation report by defining report sections and specifying properties of summary tables in the report. The code shown below creates a separate section within the report for each set of outcome parameters (using the Section object) and sets the sorting option for the summary tables (using the Table object). The tables will be sorted by the sample size. Further, in order to define descriptive labels for the outcome parameter scenarios and sample size scenarios, the CustomLabel object needs to be used:
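A sketch of this step is shown below; the scenario labels "Standard" and "Optimistic" are assumed names matching the two treatment effect scenarios described above:

```r
# Report structure: one section per outcome parameter set, tables sorted by
# sample size, and descriptive labels for both sets of scenarios
case.study1.presentation.model = case.study1.presentation.model +
  Section(by = "outcome.parameter") +
  Table(by = "sample.size") +
  CustomLabel(param = "sample.size",
              label = paste0("N = ", c(50, 55, 60, 65, 70))) +
  CustomLabel(param = "outcome.parameter",
              label = c("Standard", "Optimistic"))
```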

##### Report generation

Once the presentation model has been defined, the simulation report is ready to be generated using the GenerateReport function:
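```r
# Report generation
GenerateReport(presentation.model = case.study1.presentation.model,
               cse.results = case.study1.results,
               report.filename = "Case study 1 (normally distributed endpoint).docx")
```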

#### Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded from the Mediana web site.

### Binary endpoint

Consider a Phase III clinical trial for the treatment of rheumatoid arthritis (RA). The primary endpoint is the response rate based on the American College of Rheumatology (ACR) definition of improvement. The trial's sponsor is interested in performing power calculations using several treatment effect assumptions (Placebo 30% - Treatment 50%, Placebo 30% - Treatment 55% and Placebo 30% - Treatment 60%).

Define an Analysis Model

The analysis model uses a standard two-sample test for comparing proportions (method = "PropTest") to assess the treatment effect in this clinical trial example:
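
A sketch of this analysis model (the sample IDs are assumed to be Placebo and Treatment, as in the data model):

case.study1.analysis.model = AnalysisModel() +
  Test(id = "Placebo vs treatment",
       samples = samples("Placebo", "Treatment"),
       method = "PropTest")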

Define an Evaluation Model

Power evaluations are easily performed in this clinical trial example using the same evaluation model utilized in the case of a normally distributed endpoint, i.e., evaluations rely on marginal power:
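
For example (a one-sided 0.025 significance level is assumed):

case.study1.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs treatment"),
            labels = c("Placebo vs treatment"),
            par = parameters(alpha = 0.025))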

An extension of this clinical trial example is provided in Case study 5. The extension deals with a more complex setting involving several trial endpoints and multiple treatment arms.

Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

Survival-type endpoint

If the trial’s primary objective is formulated in terms of analyzing the time to a clinically important event (progression or death in an oncology setting), data and analysis models can be set up based on an exponential distribution and the log-rank test.

As an illustration, consider a Phase III trial which will be conducted to evaluate the efficacy of a new treatment for metastatic colorectal cancer (MCC). Patients will be randomized in a 2:1 ratio to an experimental treatment or placebo (in addition to best supportive care).

The trial’s primary objective is to assess the effect of the experimental treatment on progression-free survival (PFS).

Define a Data Model

A single treatment effect scenario is considered in this clinical trial example. Specifically, the median time to progression is assumed to be:

  • Placebo : t0 = 6 months,

  • Treatment: t1 = 9 months.

Under an exponential distribution assumption (which is specified using the ExpoDist distribution), the median times correspond to the following hazard rates:

  • λ0 = log(2)/t0 = 0.116,

  • λ1 = log(2)/t1 = 0.077,

and the resulting hazard ratio (HR) is 0.077/0.116 = 0.67.

It is important to note that, if no censoring mechanisms are specified in a data model with a time-to-event endpoint, all patients will reach the endpoint of interest (e.g., progression) and thus the number of patients will be equal to the number of events. Using this property, power calculations can be performed using either the Event object or SampleSize object. For the purpose of illustration, the Event object will be used in this example.

To define a data model in the MCC clinical trial, the total event count in the trial is assumed to range between 270 and 300. Since the trial’s design is not balanced, the randomization ratio needs to be specified in the Event object:
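
A possible specification based on the assumptions stated above (two event count scenarios, 270 and 300 events, and a 1:2 Placebo:Treatment randomization ratio):

# Outcome parameters: hazard rates derived from the median times
median.time.placebo = 6
rate.placebo = log(2)/median.time.placebo
outcome.placebo = parameters(rate = rate.placebo)

median.time.treatment = 9
rate.treatment = log(2)/median.time.treatment
outcome.treatment = parameters(rate = rate.treatment)

case.study1.data.model = DataModel() +
  OutcomeDist(outcome.dist = "ExpoDist") +
  Event(n.events = c(270, 300), rando.ratio = c(1, 2)) +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome.placebo)) +
  Sample(id = "Treatment",
         outcome.par = parameters(outcome.treatment))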

It is worth noting that the primary endpoint’s type (i.e., the outcome.type argument in the OutcomeDist object) is not specified. By default, the outcome type is set to fixed, which means that a design with a fixed follow-up is assumed even though the primary endpoint in this clinical trial is clearly a time-to-event endpoint. This is due to the fact that, as was explained earlier in this case study, there is no censoring in this design and all patients are followed until the event of interest is observed. It is easy to verify that the same results are obtained if the outcome type is set to event.

Define an Analysis Model

The analysis model in this clinical trial is very similar to the analysis models defined in the case studies with normal and binomial outcome variables. The only difference is the choice of the statistical method utilized in the primary analysis (method = "LogrankTest"):

To illustrate the specification of a Statistic object, the hazard ratio will be computed using the Cox method. This can be accomplished by adding a Statistic object to the AnalysisModel object, as presented below.
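
A sketch of the resulting analysis model (test and statistic IDs are illustrative):

case.study1.analysis.model = AnalysisModel() +
  Test(id = "Placebo vs treatment",
       samples = samples("Placebo", "Treatment"),
       method = "LogrankTest") +
  Statistic(id = "Hazard Ratio",
            samples = samples("Placebo", "Treatment"),
            method = "HazardRatioStat",
            par = parameters(method = "Cox"))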

Define an Evaluation Model

An evaluation model identical to that used earlier in the case studies with normal and binomial distributions can be applied to compute the power function at the selected event counts. Moreover, the average hazard ratio across the simulation runs will be computed.
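
A possible specification (criterion IDs and labels are illustrative):

case.study1.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs treatment"),
            labels = c("Placebo vs treatment"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Average Hazard Ratio",
            method = "MeanSumm",
            statistics = statistics("Hazard Ratio"),
            labels = c("Average hazard ratio"))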

Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

Survival-type endpoint (with censoring)

The power calculations presented in the previous case study assume an idealized setting where each patient is followed until the event of interest (e.g., progression) is observed. In this case, the sample size (number of patients) in each treatment arm is equal to the number of events. In reality, events are often censored and a sponsor is generally interested in determining the number of patients to be recruited in order to ensure a target number of events, which translates into desirable power.

The Mediana package can be used to perform power calculations in event-driven trials in the presence of censoring. This is accomplished by setting up design parameters such as the length of the enrollment and follow-up periods in a data model using a Design object.

In general, even though closed-form solutions have been derived for sample size calculations in event-driven designs, the available approaches force clinical trial researchers to make a variety of simplifying assumptions, e.g., assumptions on the enrollment distribution are commonly made, see, for example, Julious (2009, Chapter 15). A general simulation-based approach to power and sample size calculations implemented in the Mediana package enables clinical trial sponsors to remove these artificial restrictions and examine a very broad set of plausible design parameters.

Define a Data Model

Suppose, for example, that a standard design with a variable follow-up will be used in the MCC trial introduced in the previous case study. The total study duration will be 21 months, which includes a 9-month enrollment (accrual) period and a minimum follow-up of 12 months. The patients are assumed to be recruited at a uniform rate. The set of design parameters also includes the dropout distribution and its parameters. In this clinical trial, the dropout distribution is exponential with a rate determined from historical data. These design parameters are specified in a Design object:
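
A sketch of this object (the dropout rate of 0.0115 is a placeholder for the rate determined from historical data):

# Design parameters: 9-month uniform accrual, 21-month total study duration,
# exponential dropout distribution
case.study1.design = Design(enroll.period = 9,
                            study.duration = 21,
                            enroll.dist = "UniformDist",
                            dropout.dist = "ExpoDist",
                            dropout.dist.par = parameters(rate = 0.0115))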

Finally, the primary endpoint’s type is set to event in the OutcomeDist object to indicate that a variable follow-up will be utilized in this clinical trial.

The complete data model in this case study is defined as follows:
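
A possible specification, reusing the outcome parameters and the Design object defined above:

case.study1.data.model = DataModel() +
  OutcomeDist(outcome.dist = "ExpoDist", outcome.type = "event") +
  Event(n.events = c(270, 300), rando.ratio = c(1, 2)) +
  case.study1.design +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome.placebo)) +
  Sample(id = "Treatment",
         outcome.par = parameters(outcome.treatment))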

Define an Analysis Model

Since the number of events has been fixed in this clinical trial example and some patients will not reach the event of interest, it will be important to estimate the number of patients required to accrue the required number of events. In the Mediana package, this can be accomplished by specifying a descriptive statistic named PatientCountStat (this statistic needs to be specified in a Statistic object). Another descriptive statistic that would be of interest is the event count in each sample. To compute this statistic, EventCountStat needs to be included in a Statistic object.
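
The corresponding analysis model may look as follows (statistic IDs are illustrative):

case.study1.analysis.model = AnalysisModel() +
  Test(id = "Placebo vs treatment",
       samples = samples("Placebo", "Treatment"),
       method = "LogrankTest") +
  Statistic(id = "Patients Placebo",
            samples = samples("Placebo"),
            method = "PatientCountStat") +
  Statistic(id = "Patients Treatment",
            samples = samples("Treatment"),
            method = "PatientCountStat") +
  Statistic(id = "Events Placebo",
            samples = samples("Placebo"),
            method = "EventCountStat") +
  Statistic(id = "Events Treatment",
            samples = samples("Treatment"),
            method = "EventCountStat")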

Define an Evaluation Model

In order to compute the average values of the two statistics (PatientCountStat and EventCountStat) in each sample over the simulation runs, two Criterion objects need to be specified, in addition to the Criterion object defined to obtain marginal power. The IDs of the corresponding Statistic objects will be included in the statistics argument of the two Criterion objects:
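
A sketch of this evaluation model (criterion IDs and labels are illustrative):

case.study1.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs treatment"),
            labels = c("Placebo vs treatment"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Mean patient count",
            method = "MeanSumm",
            statistics = statistics("Patients Placebo", "Patients Treatment"),
            labels = c("Mean patient count (Placebo)", "Mean patient count (Treatment)")) +
  Criterion(id = "Mean event count",
            method = "MeanSumm",
            statistics = statistics("Events Placebo", "Events Treatment"),
            labels = c("Mean event count (Placebo)", "Mean event count (Treatment)"))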

Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

Count-type endpoint

The last clinical trial example within Case study 1 deals with a Phase III clinical trial in patients with relapsing-remitting multiple sclerosis (RRMS). The trial aims at assessing the safety and efficacy of a single dose of a novel treatment compared to placebo. The primary endpoint is the number of new gadolinium enhancing lesions seen during a 6-month period on monthly MRIs of the brain and a smaller number indicates treatment benefit. The distribution of such endpoints has been widely studied in the literature and Sormani et al. (1999a, 1999b) showed that a negative binomial distribution provides a fairly good fit.

The list below gives the expected treatment effect in the experimental treatment and placebo arms (note that the negative binomial distribution is parameterized using the mean rather than the probability of success in each trial). The mean number of new lesions is set to 13 in the Placebo arm and 7.8 in the Treatment arm, with a common dispersion parameter of 0.5.

The corresponding treatment effect, i.e., the relative reduction in the mean number of new lesion counts, is 100 * (13 − 7.8)/13 = 40%. These assumptions define a single outcome parameter set.

Define a Data Model

The OutcomeDist object defines the distribution of the trial endpoint (NegBinomDist). Further, a balanced design is utilized in this clinical trial and the range of sample sizes is defined in the SampleSize object (it is convenient to do this using the seq function). The Sample object includes the parameters required by the negative binomial distribution (dispersion and mean).
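
A possible specification (the sample size range is illustrative):

case.study1.data.model = DataModel() +
  OutcomeDist(outcome.dist = "NegBinomDist") +
  SampleSize(seq(100, 150, 10)) +
  Sample(id = "Placebo",
         outcome.par = parameters(parameters(dispersion = 0.5, mean = 13))) +
  Sample(id = "Treatment",
         outcome.par = parameters(parameters(dispersion = 0.5, mean = 7.8)))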

Define an Analysis Model

The treatment effect will be assessed in this clinical trial example using a negative binomial generalized linear model (NBGLM). In the Mediana package, the corresponding test is carried out using the GLMNegBinomTest method, which is specified in the Test object. It should be noted that, as a smaller value indicates a treatment benefit, the first sample defined in the samples argument must be Treatment.

Alternatively, from version 1.0.6, it is possible to specify the argument lower in the parameters of the method. If set to FALSE, a numerically lower value is expected in Sample 2.
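
Two equivalent sketches of the resulting Test object:

# Treatment arm listed first (a numerically lower value indicates benefit)
case.study1.analysis.model = AnalysisModel() +
  Test(id = "Placebo vs treatment",
       samples = samples("Treatment", "Placebo"),
       method = "GLMNegBinomTest")

# Equivalent specification using the lower argument (version 1.0.6 and later)
case.study1.analysis.model = AnalysisModel() +
  Test(id = "Placebo vs treatment",
       samples = samples("Placebo", "Treatment"),
       method = "GLMNegBinomTest",
       par = parameters(lower = FALSE))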

Define an Evaluation Model

The objective of this clinical trial is identical to that of the clinical trials presented earlier on this page, i.e., evaluation will be based on marginal power of the primary endpoint test. As a consequence, the same evaluation model can be applied.

Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

Case study 2

Summary

This clinical trial example deals with settings where no analytical methods are available to support power calculations. However, as demonstrated below, simulation-based approaches are easily applied to perform a comprehensive assessment of the relevant operating characteristics within the clinical scenario evaluation framework.

Case study 2 is based on a clinical trial example introduced in Dmitrienko and D’Agostino (2013, Section 10). This example deals with a Phase III clinical trial in a schizophrenia population. Three doses of a new treatment, labelled Dose L, Dose M and Dose H, will be tested versus placebo. The trial will be declared successful if a beneficial treatment effect is demonstrated in any of the three dosing groups compared to the placebo group.

The primary endpoint is defined as the reduction in the Positive and Negative Syndrome Scale (PANSS) total score compared to baseline and a larger reduction in the PANSS total score indicates treatment benefit. This endpoint is normally distributed and the treatment effect assumptions in the four treatment arms are displayed in the next table.

Arm Mean SD
Placebo 16 18
Dose L 19.5 18
Dose M 21 18
Dose H 21 18

Define an Analysis Model

The analysis model, shown below, defines the three individual tests that will be carried out in the schizophrenia clinical trial. Each test corresponds to a dose-placebo comparison, as follows:

  • H1: Null hypothesis of no difference between Dose L and placebo.

  • H2: Null hypothesis of no difference between Dose M and placebo.

  • H3: Null hypothesis of no difference between Dose H and placebo.

Each comparison will be carried out based on a one-sided two-sample t-test (TTest method defined in each Test object).

As indicated earlier, the overall success criterion in the trial is formulated in terms of demonstrating a beneficial effect at any of the three doses. Due to multiple opportunities to claim success, the overall Type I error rate will be inflated and the Hochberg procedure is introduced to protect the error rate at the nominal level.

Since no procedure parameters are defined, the three significance tests (or, equivalently, three null hypotheses of no effect) are assumed to be equally weighted. The corresponding analysis model is defined below:
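
A sketch of this analysis model (the sample IDs are assumed to match the data model):

case.study2.analysis.model = AnalysisModel() +
  MultAdjProc(proc = "HochbergAdj") +
  Test(id = "Placebo vs Dose L",
       samples = samples("Placebo", "Dose L"),
       method = "TTest") +
  Test(id = "Placebo vs Dose M",
       samples = samples("Placebo", "Dose M"),
       method = "TTest") +
  Test(id = "Placebo vs Dose H",
       samples = samples("Placebo", "Dose H"),
       method = "TTest")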

To request the Hochberg procedure with unequally weighted hypotheses, the user needs to assign a list of hypothesis weights to the par argument of the MultAdjProc object. The weights typically reflect the relative importance of the individual null hypotheses. Assume, for example, that 60% of the overall weight is assigned to H3 and the remainder is split between H1 and H2. In this case, the MultAdjProc object should be defined as follows:
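
That is (the weights are listed in the order of the hypotheses, H1, H2 and H3):

MultAdjProc(proc = "HochbergAdj",
            par = parameters(weight = c(0.2, 0.2, 0.6)))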

It should be noted that the order of the weights must be identical to the order of the Test objects defined in the analysis model.

Define an Evaluation Model

An evaluation model specifies clinically relevant criteria for assessing the performance of the individual tests defined in the corresponding analysis model or composite measures of success. In virtually any setting, it is of interest to compute the probability of achieving a significant outcome in each individual test, e.g., the probability of a significant difference between placebo and each dose. This is accomplished by requesting a Criterion object with method = "MarginalPower".

Since the trial will be declared successful if at least one dose-placebo comparison is significant, it is natural to compute the overall success probability, which is defined as the probability of demonstrating treatment benefit in one or more dosing groups. This is equivalent to evaluating disjunctive power in the trial (method = "DisjunctivePower").

In addition, the user can easily define a custom evaluation criterion. Suppose that, based on the results of the previously conducted trials, the sponsor expects a much larger treatment difference at Dose H compared to Doses L and M. Given this, the sponsor may be interested in evaluating the probability of observing a significant treatment effect at Dose H and at least one other dose. The associated evaluation criterion is implemented in the following function:
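
A sketch of this function, assuming the three tests are ordered as H1, H2 and H3 in the analysis model:

case.study2.criterion = function(test.result, statistic.result, parameter) {

  # Overall alpha level defined in the Criterion object
  alpha = parameter

  # Significant effect at Dose H and at least one of the other two doses
  significant = (test.result[, 3] <= alpha) &
    ((test.result[, 1] <= alpha) | (test.result[, 2] <= alpha))

  power = mean(significant)
  return(power)
}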

The function’s first argument (test.result) is a matrix of p-values produced by the Test objects defined in the analysis model and the second argument (statistic.result) is a matrix of results produced by the Statistic objects defined in the analysis model. In this example, the criterion will only use the test.result argument, which will contain the p-values produced by the tests associated with the three dose-placebo comparisons. The last argument (parameter) contains the optional parameter(s) defined by the user in the par argument of the Criterion object. In this example, it contains the overall alpha level.

The case.study2.criterion function computes the probability of a significant treatment effect at Dose H (test.result[,3] <= alpha) and a significant treatment difference at Dose L or Dose M ((test.result[,1] <= alpha) | (test.result[,2]<= alpha)). Since this criterion assumes that the third test is based on the comparison of Dose H versus Placebo, the order in which the tests are included in the evaluation model is important.

The following evaluation model specifies marginal and disjunctive power as well as the custom evaluation criterion defined above:
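
A possible specification (criterion IDs and labels are illustrative; the custom criterion is referenced by its function name):

case.study2.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs Dose L", "Placebo vs Dose M", "Placebo vs Dose H"),
            labels = c("Placebo vs Dose L", "Placebo vs Dose M", "Placebo vs Dose H"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Disjunctive power",
            method = "DisjunctivePower",
            tests = tests("Placebo vs Dose L", "Placebo vs Dose M", "Placebo vs Dose H"),
            labels = "Disjunctive power",
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Dose H and at least one other dose",
            method = "case.study2.criterion",
            tests = tests("Placebo vs Dose L", "Placebo vs Dose M", "Placebo vs Dose H"),
            labels = "Dose H and at least one other dose",
            par = parameters(alpha = 0.025))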

Another potential option is to apply the conjunctive criterion which is met if a significant treatment difference is detected simultaneously in all three dosing groups (method = "ConjunctivePower"). This criterion helps characterize the likelihood of a consistent treatment effect across the doses.

The user can also use the metric.tests parameter to choose the specific tests to which the disjunctive and conjunctive criteria are applied (the resulting criteria are known as subset disjunctive and conjunctive criteria). To illustrate, the following statement computes the probability of a significant treatment effect at Dose M or Dose H (Dose L is excluded from this calculation):
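
For example (test IDs are assumed to match the analysis model):

Criterion(id = "Subset disjunctive power",
          method = "DisjunctivePower",
          tests = tests("Placebo vs Dose M", "Placebo vs Dose H"),
          labels = "Subset disjunctive power (Doses M and H)",
          par = parameters(alpha = 0.025))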

Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

Case study 3

Summary

This case study deals with a Phase III clinical trial in patients with mild or moderate asthma (it is based on a clinical trial example from Millen et al., 2014, Section 2.2). The trial is intended to support a tailoring strategy. In particular, the treatment effect of a single dose of a new treatment will be compared to that of placebo in the overall population of patients as well as a pre-specified subpopulation of patients with a marker-positive status at baseline (for compactness, the overall population is denoted by OP, the marker-positive subpopulation is denoted by M+ and the marker-negative subpopulation is denoted by M−).

Marker-positive patients are more likely to receive benefit from the experimental treatment. The overall objective of the clinical trial accounts for the fact that the treatment’s effect may, in fact, be limited to the marker-positive subpopulation. The trial will be declared successful if the treatment’s beneficial effect is established in the overall population of patients or, alternatively, the effect is established only in the subpopulation. The primary endpoint in the clinical trial is defined as an increase from baseline in the forced expiratory volume in one second (FEV1). This endpoint is normally distributed and improvement is associated with a larger change in FEV1.

Define a Data Model

To set up a data model for this clinical trial, it is natural to define samples (mutually exclusive groups of patients) as follows:

  • Sample 1: Marker-negative patients in the placebo arm.

  • Sample 2: Marker-positive patients in the placebo arm.

  • Sample 3: Marker-negative patients in the treatment arm.

  • Sample 4: Marker-positive patients in the treatment arm.

Using this definition of samples, the trial’s sponsor can model the fact that the treatment’s effect is most pronounced in patients with a marker-positive status.

The treatment effect assumptions in the four samples are summarized in the next table (expiratory volume in FEV1 is measured in liters). As shown in the table, the mean change in FEV1 is constant across the marker-negative and marker-positive subpopulations in the placebo arm (Samples 1 and 2). A positive treatment effect is expected in both subpopulations in the treatment arm but marker-positive patients will experience most of the beneficial effect (Sample 4).

Sample Mean SD
Placebo M- 0.12 0.45
Placebo M+ 0.12 0.45
Treatment M- 0.24 0.45
Treatment M+ 0.3 0.45

The following data model incorporates the assumptions listed above by defining a single set of outcome parameters. The data model includes three sample size sets (the total sample size is set to 330, 340 and 350 patients). The sizes of the individual samples are computed based on historical information (40% of patients in the population of interest are expected to have a marker-positive status). In order to define a specific sample size for each sample, the sizes will be specified within each Sample object.
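
A possible specification (the per-sample sizes correspond to total sample sizes of 330, 340 and 350 patients with a 40% marker-positive prevalence):

# Outcome parameters
outcome.placebo.minus = parameters(mean = 0.12, sd = 0.45)
outcome.placebo.plus = parameters(mean = 0.12, sd = 0.45)
outcome.treatment.minus = parameters(mean = 0.24, sd = 0.45)
outcome.treatment.plus = parameters(mean = 0.30, sd = 0.45)

case.study3.data.model = DataModel() +
  OutcomeDist(outcome.dist = "NormalDist") +
  Sample(id = "Placebo M-",
         sample.size = c(99, 102, 105),
         outcome.par = parameters(outcome.placebo.minus)) +
  Sample(id = "Placebo M+",
         sample.size = c(66, 68, 70),
         outcome.par = parameters(outcome.placebo.plus)) +
  Sample(id = "Treatment M-",
         sample.size = c(99, 102, 105),
         outcome.par = parameters(outcome.treatment.minus)) +
  Sample(id = "Treatment M+",
         sample.size = c(66, 68, 70),
         outcome.par = parameters(outcome.treatment.plus))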

Define an Analysis Model

The analysis model in this clinical trial example is generally similar to that used in Case study 2 but there is an important difference which is described below.

As in Case study 2, the primary endpoint follows a normal distribution and thus the treatment effect will be assessed using the two-sample t-test.

Since two null hypotheses are tested in this trial (null hypotheses of no effect in the overall population of patients and subpopulation of marker-positive patients), a multiplicity adjustment needs to be applied. The Hochberg procedure with equally weighted null hypotheses will be used for this purpose.

A key feature of the analysis strategy in this case study is that the samples defined in the data model are different from the samples used in the analysis of the primary endpoint. As shown in the Table, four samples are included in the data model. However, from the analysis perspective, the sponsor is interested in examining the treatment effect in two samples, namely, the overall population and the marker-positive subpopulation. As shown below, to perform a comparison in the overall population, the t-test is applied to the following analysis samples:

  • Placebo arm: Samples 1 and 2 (Placebo M- and Placebo M+) are merged.

  • Treatment arm: Samples 3 and 4 (Treatment M- and Treatment M+) are merged.

Further, the treatment effect test in the subpopulation of marker-positive patients is carried out based on these analysis samples:

  • Placebo arm: Sample 2 (Placebo M+).

  • Treatment arm: Sample 4 (Treatment M+).

These analysis samples are specified in the analysis model below. The samples defined in the data model are merged using the c() or list() function, e.g., c("Placebo M-", "Placebo M+") defines the placebo arm and c("Treatment M-", "Treatment M+") defines the experimental treatment arm in the overall population test.
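
A sketch of the resulting analysis model (test IDs are illustrative):

case.study3.analysis.model = AnalysisModel() +
  MultAdjProc(proc = "HochbergAdj") +
  Test(id = "OP test",
       samples = samples(c("Placebo M-", "Placebo M+"),
                         c("Treatment M-", "Treatment M+")),
       method = "TTest") +
  Test(id = "M+ test",
       samples = samples("Placebo M+", "Treatment M+"),
       method = "TTest")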

Define an Evaluation Model

It is reasonable to consider the following success criteria in this case study:

  • Marginal power: Probability of a significant outcome in each patient population.

  • Disjunctive power: Probability of a significant treatment effect in the overall population (OP) or marker-positive subpopulation (M+). This metric defines the overall probability of success in this clinical trial.

  • Conjunctive power: Probability of simultaneously achieving significance in the overall population and marker-positive subpopulation. This criterion will be useful if the trial’s sponsor is interested in pursuing an enhanced efficacy claim (Millen et al., 2012).

The following evaluation model applies the three criteria to the two tests listed in the analysis model:
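
A possible specification (a one-sided 0.025 alpha level is assumed):

case.study3.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("OP test", "M+ test"),
            labels = c("OP test", "M+ test"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Disjunctive power",
            method = "DisjunctivePower",
            tests = tests("OP test", "M+ test"),
            labels = "Disjunctive power",
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Conjunctive power",
            method = "ConjunctivePower",
            tests = tests("OP test", "M+ test"),
            labels = "Conjunctive power",
            par = parameters(alpha = 0.025))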

Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

Case study 4

Summary

Case study 4 serves as an extension of the oncology clinical trial example presented in Case study 1. Consider again a Phase III trial in patients with metastatic colorectal cancer (MCC). The same general design will be assumed in this section; however, an additional endpoint (overall survival) will be introduced. The case of two endpoints helps showcase the package’s ability to model complex design and analysis strategies in trials with multivariate outcomes.

Progression-free survival (PFS) is the primary endpoint in this clinical trial and overall survival (OS) serves as the key secondary endpoint, which provides supportive evidence of treatment efficacy. A hierarchical testing approach will be utilized in the analysis of the two endpoints. The PFS analysis will be performed first at α = 0.025 (one-sided), followed by the OS analysis at the same level if a significant effect on PFS is established. The resulting testing procedure is equivalent to the fixed-sequence procedure and controls the overall Type I error rate (Dmitrienko and D’Agostino, 2013).

The treatment effect assumptions that will be used in clinical scenario evaluation are listed in the table below. The table shows the hypothesized median times along with the corresponding hazard rates for the primary and secondary endpoints. It follows from the table that the expected effect size is much larger for PFS compared to OS (PFS hazard ratio is lower than OS hazard ratio).

Endpoint Statistic Placebo Treatment
Progression-free survival Median time (months) 6 9
Hazard rate 0.116 0.077
Hazard ratio 0.67
Overall survival Median time (months) 15 19
Hazard rate 0.046 0.036
Hazard ratio 0.79

Define a Data Model

In this clinical trial two endpoints are evaluated for each patient (PFS and OS) and thus their joint distribution needs to be specified in the data model.

A bivariate exponential distribution will be used in this example and samples from this bivariate distribution will be generated by the MVExpoPFSOSDist function which implements multivariate exponential distributions. The function utilizes the copula method, i.e., random variables that follow a bivariate normal distribution will be generated and then converted into exponential random variables.

The next several statements specify the parameters of the bivariate exponential distribution:

  • Parameters of the marginal exponential distributions, i.e., the hazard rates.

  • Correlation matrix of the underlying multivariate normal distribution used in the copula method.

The hazard rates for PFS and OS in each treatment arm are defined based on the information presented in the table above (placebo.par and treatment.par) and the correlation matrix is specified based on historical information (corr.matrix). These parameters are combined to define the outcome parameter sets (outcome.placebo and outcome.treatment) that will be included in the sample-specific set of data model parameters (Sample object).
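
A sketch of these statements (the correlation of 0.3 is a placeholder for the historical estimate):

# Hazard rates for PFS and OS in each arm
placebo.par = parameters(parameters(rate = log(2)/6),
                         parameters(rate = log(2)/15))
treatment.par = parameters(parameters(rate = log(2)/9),
                           parameters(rate = log(2)/19))

# Correlation between the two endpoints
corr.matrix = matrix(c(1.0, 0.3,
                       0.3, 1.0), 2, 2)

# Outcome parameter sets
outcome.placebo = parameters(par = placebo.par, corr = corr.matrix)
outcome.treatment = parameters(par = treatment.par, corr = corr.matrix)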

To define the sample-specific data model parameters, note first that a 2:1 randomization ratio will be used in this clinical trial and thus the number of events as well as the randomization ratio are specified by the user in the Event object. Secondly, a separate sample ID needs to be assigned to each endpoint within the two samples (e.g., Placebo PFS and Placebo OS) corresponding to the two treatment arms. This will enable the user to construct analysis models for examining the treatment effect on each endpoint.
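
The resulting data model may look as follows (a single event count scenario with 270 events is assumed for illustration):

case.study4.data.model = DataModel() +
  OutcomeDist(outcome.dist = "MVExpoPFSOSDist") +
  Event(n.events = 270, rando.ratio = c(1, 2)) +
  Sample(id = list("Placebo PFS", "Placebo OS"),
         outcome.par = parameters(outcome.placebo)) +
  Sample(id = list("Treatment PFS", "Treatment OS"),
         outcome.par = parameters(outcome.treatment))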

Define an Analysis Model

The treatment comparisons for both endpoints will be carried out based on the log-rank test (method = "LogrankTest"). Further, as was stated in the beginning of this page, the two endpoints will be tested hierarchically using a multiplicity adjustment procedure known as the fixed-sequence procedure. This procedure belongs to the class of chain procedures (proc = "ChainAdj") and the following figure provides a visual summary of the decision rules used in this procedure.

The circles in this figure denote the two null hypotheses of interest:

  • H1: Null hypothesis of no difference between the two arms with respect to PFS.

  • H2: Null hypothesis of no difference between the two arms with respect to OS.

The value displayed above a circle defines the initial weight of each null hypothesis. All of the overall α is allocated to H1 to ensure that the OS test will be carried out only after the PFS test is significant and the arrow indicates that H2 will be tested after H1 is rejected.

More formally, a chain procedure is uniquely defined by specifying a vector of hypothesis weights (W) and a matrix of transition parameters (G). Based on the figure, these parameters are given by W = (1, 0) and G = [0 1; 0 0], i.e., the full weight is placed on H1 and is transferred to H2 upon rejection.

Two objects (named chain.weight and chain.transition) are defined below to pass the hypothesis weights and transition parameters to the multiplicity adjustment parameters.
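
A sketch of these objects and of the resulting analysis model:

chain.weight = parameters(c(1, 0))
chain.transition = parameters(matrix(c(0, 1,
                                       0, 0), 2, 2, byrow = TRUE))

case.study4.analysis.model = AnalysisModel() +
  MultAdjProc(proc = "ChainAdj",
              par = parameters(weight = chain.weight,
                               transition = chain.transition)) +
  Test(id = "PFS test",
       samples = samples("Placebo PFS", "Treatment PFS"),
       method = "LogrankTest") +
  Test(id = "OS test",
       samples = samples("Placebo OS", "Treatment OS"),
       method = "LogrankTest")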

As shown above, the two significance tests included in the analysis model reflect the two-fold objective of this trial. The first test focuses on a PFS comparison between the two treatment arms (id = "PFS test") whereas the other test is carried out to assess the treatment effect on OS (test.id = "OS test").

Alternatively, the fixed-sequence procedure can be implemented using the FixedSeqAdj method introduced in version 1.0.4. This implementation is simpler as no parameters need to be specified.
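
That is, the MultAdjProc object in the analysis model above can simply be replaced by:

MultAdjProc(proc = "FixedSeqAdj")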

Define an Evaluation Model

The evaluation model specifies the most basic criterion for assessing the probability of success in the PFS and OS analyses (marginal power). A criterion based on disjunctive power could be considered but it would not provide additional information.

Due to the hierarchical testing approach, the probability of detecting a significant treatment effect on at least one endpoint (disjunctive power) is simply equal to the probability of establishing a significant PFS effect.
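
A possible specification of the evaluation model:

case.study4.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("PFS test", "OS test"),
            labels = c("PFS test", "OS test"),
            par = parameters(alpha = 0.025))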

Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

Case study 5

Summary

This case study extends the straightforward setting presented in Case study 1 to a more complex setting involving two trial endpoints and three treatment arms. Case study 5 illustrates the process of performing power calculations in clinical trials with multiple, hierarchically structured objectives and “multivariate” multiplicity adjustment strategies (gatekeeping procedures).

Consider a three-arm Phase III clinical trial for the treatment of rheumatoid arthritis (RA). Two co-primary endpoints will be used to evaluate the effect of a novel treatment on clinical response and on physical function. The endpoints are defined as follows:

  • Endpoint 1: Response rate based on the American College of Rheumatology definition of improvement (ACR20).

  • Endpoint 2: Change from baseline in the Health Assessment Questionnaire-Disability Index (HAQ-DI).

The two endpoints have different marginal distributions. The first endpoint is binary whereas the second one is continuous and follows a normal distribution.

The efficacy profile of two doses of a new treatment (Dose L and Dose H) will be compared to that of a placebo and a successful outcome will be defined as a significant treatment effect at either or both doses. A hierarchical structure has been established within each dose so that Endpoint 2 will be tested if and only if there is evidence of a significant effect on Endpoint 1.

Three treatment effect scenarios for each endpoint are displayed in the table below. The scenarios define three outcome parameter sets. The first set represents a rather conservative treatment effect scenario, the second set is a standard (most plausible) scenario and the third set represents an optimistic scenario. Note that a reduction in the HAQ-DI score indicates a beneficial effect and thus the mean changes are assumed to be negative for Endpoint 2.

Endpoint Outcome parameter set Placebo Dose L Dose H
ACR20 (%) Conservative 30% 40% 50%
ACR20 (%) Standard 30% 45% 55%
ACR20 (%) Optimistic 30% 50% 60%
HAQ-DI (mean (SD)) Conservative -0.10 (0.50) -0.20 (0.50) -0.30 (0.50)
HAQ-DI (mean (SD)) Standard -0.10 (0.50) -0.25 (0.50) -0.35 (0.50)
HAQ-DI (mean (SD)) Optimistic -0.10 (0.50) -0.30 (0.50) -0.40 (0.50)

Define a Data Model

As in Case study 4, two endpoints are evaluated for each patient in this clinical trial example, which means that their joint distribution needs to be specified. The MVMixedDist method will be utilized for specifying a bivariate distribution with binomial and normal marginals (var.type = list("BinomDist", "NormalDist")). In general, this function is used for modeling correlated normal, binomial and exponential endpoints and relies on the copula method, i.e., random variables are generated from a multivariate normal distribution and converted into variables with pre-specified marginal distributions.

Three parameters must be defined to specify the joint distribution of Endpoints 1 and 2 in this clinical trial example:

  • Variable types (binomial and normal).

  • Outcome distribution parameters (proportion for Endpoint 1, mean and SD for Endpoint 2) based on the assumptions listed in the Table above.

  • Correlation matrix of the multivariate normal distribution used in the copula method.

These parameters are combined to define three outcome parameter sets (e.g., outcome1.plac, outcome1.dosel and outcome1.doseh) that will be included in the Sample object in the data model.

# Variable types
var.type = list("BinomDist", "NormalDist")

# Outcome distribution parameters
placebo.par = parameters(parameters(prop = 0.3), 
                         parameters(mean = -0.10, sd = 0.5))

dosel.par1 = parameters(parameters(prop = 0.40), 
                        parameters(mean = -0.20, sd = 0.5))
dosel.par2 = parameters(parameters(prop = 0.45), 
                        parameters(mean = -0.25, sd = 0.5))
dosel.par3 = parameters(parameters(prop = 0.50), 
                        parameters(mean = -0.30, sd = 0.5))

doseh.par1 = parameters(parameters(prop = 0.50), 
                        parameters(mean = -0.30, sd = 0.5))
doseh.par2 = parameters(parameters(prop = 0.55), 
                        parameters(mean = -0.35, sd = 0.5))
doseh.par3 = parameters(parameters(prop = 0.60), 
                        parameters(mean = -0.40, sd = 0.5))

# Correlation between two endpoints
corr.matrix = matrix(c(1.0, 0.5,
                       0.5, 1.0), 2, 2)

# Outcome parameter set 1
outcome1.placebo = parameters(type = var.type, 
                              par = placebo.par, 
                              corr = corr.matrix)
outcome1.dosel = parameters(type = var.type, 
                            par = dosel.par1, 
                            corr = corr.matrix)
outcome1.doseh = parameters(type = var.type, 
                            par = doseh.par1, 
                            corr = corr.matrix)

# Outcome parameter set 2
outcome2.placebo = parameters(type = var.type, 
                              par = placebo.par, 
                              corr = corr.matrix)
outcome2.dosel = parameters(type = var.type, 
                            par = dosel.par2, 
                            corr = corr.matrix)
outcome2.doseh = parameters(type = var.type, 
                            par = doseh.par2, 
                            corr = corr.matrix)

# Outcome parameter set 3
outcome3.placebo = parameters(type = var.type, 
                              par = placebo.par, 
                              corr = corr.matrix)
outcome3.doseh = parameters(type = var.type, 
                            par = doseh.par3, 
                            corr = corr.matrix)
outcome3.dosel = parameters(type = var.type, 
                            par = dosel.par3, 
                            corr = corr.matrix)

These outcome parameter sets are then combined within each Sample object and the common sample size per treatment arm ranges between 100 and 120:
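
A sketch of this data model (the two sample size scenarios are illustrative):

case.study5.data.model = DataModel() +
  OutcomeDist(outcome.dist = "MVMixedDist") +
  SampleSize(c(100, 120)) +
  Sample(id = list("Placebo ACR20", "Placebo HAQ-DI"),
         outcome.par = parameters(outcome1.placebo, outcome2.placebo, outcome3.placebo)) +
  Sample(id = list("DoseL ACR20", "DoseL HAQ-DI"),
         outcome.par = parameters(outcome1.dosel, outcome2.dosel, outcome3.dosel)) +
  Sample(id = list("DoseH ACR20", "DoseH HAQ-DI"),
         outcome.par = parameters(outcome1.doseh, outcome2.doseh, outcome3.doseh))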

Define an Analysis Model

To set up the analysis model in this clinical trial example, note that the treatment comparisons for Endpoints 1 and 2 will be carried out based on two different statistical tests:

  • Endpoint 1: Two-sample test for comparing proportions (method = "PropTest").

  • Endpoint 2: Two-sample t-test (method = "TTest").

It was pointed out earlier in this page that the two endpoints will be tested hierarchically within each dose. The figure below provides a visual summary of the testing strategy used in this clinical trial. The circles in this figure denote the four null hypotheses of interest:

H1: Null hypothesis of no difference between Dose L and placebo with respect to Endpoint 1.

H2: Null hypothesis of no difference between Dose H and placebo with respect to Endpoint 1.

H3: Null hypothesis of no difference between Dose L and placebo with respect to Endpoint 2.

H4: Null hypothesis of no difference between Dose H and placebo with respect to Endpoint 2.

A multiple testing procedure known as the multiple-sequence gatekeeping procedure will be applied to account for the hierarchical structure of this multiplicity problem. This procedure belongs to the class of mixture-based gatekeeping procedures introduced in Dmitrienko et al. (2015). This gatekeeping procedure is specified by defining the following three parameters:

  • Families of null hypotheses (family).

  • Component procedures used in the families (component.procedure).

  • Truncation parameters used in the families (gamma).

These parameters are included in the MultAdjProc object defined below. The tests to which the multiplicity adjustment will be applied are defined in the tests argument. The use of this argument is optional if all tests included in the analysis model are to be included. The argument family states that the null hypotheses will be grouped into two families:

  • Family 1: H1 and H2.

  • Family 2: H3 and H4.

It should be noted that the order of the null hypotheses corresponds to the order of the tests defined in the analysis model, unless the tests are explicitly specified in the tests argument of the MultAdjProc object.

The families will be tested sequentially and a truncated Holm procedure will be applied within each family (component.procedure). Lastly, the truncation parameter will be set to 0.8 in Family 1 and to 1 in Family 2 (gamma). The resulting parameters are included in the par argument of the MultAdjProc object and, as before, the proc argument is used to specify the multiple testing procedure (MultipleSequenceGatekeepingAdj).
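
A sketch of this object (the hypotheses are indexed by the order of the tests in the analysis model):

family = families(family1 = c(1, 2), family2 = c(3, 4))
component.procedure = families(family1 = "HolmAdj", family2 = "HolmAdj")
gamma = families(family1 = 0.8, family2 = 1)

case.study5.mult.adj = MultAdjProc(proc = "MultipleSequenceGatekeepingAdj",
                                   par = parameters(family = family,
                                                    proc = component.procedure,
                                                    gamma = gamma))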

The tests are then specified in the analysis model and the overall analysis model is defined as follows:
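
A sketch (test IDs are illustrative; the sample IDs follow the data model above):

case.study5.analysis.model = AnalysisModel() +
  case.study5.mult.adj +
  Test(id = "Placebo vs DoseL - ACR20",
       samples = samples("Placebo ACR20", "DoseL ACR20"),
       method = "PropTest") +
  Test(id = "Placebo vs DoseH - ACR20",
       samples = samples("Placebo ACR20", "DoseH ACR20"),
       method = "PropTest") +
  Test(id = "Placebo vs DoseL - HAQ-DI",
       samples = samples("DoseL HAQ-DI", "Placebo HAQ-DI"),
       method = "TTest") +
  Test(id = "Placebo vs DoseH - HAQ-DI",
       samples = samples("DoseH HAQ-DI", "Placebo HAQ-DI"),
       method = "TTest")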

Recall that a numerically lower value indicates a beneficial effect for the HAQ-DI score and, as a result, the experimental treatment arm must be defined prior to the placebo arm in the samples argument of the Test objects corresponding to the HAQ-DI tests, e.g., samples = samples("DoseL HAQ-DI", "Placebo HAQ-DI").

Define an Evaluation Model

In order to assess the probability of success in this clinical trial, a hybrid criterion based on the conjunctive criterion (both trial endpoints must be significant) and disjunctive criterion (at least one dose-placebo comparison must be significant) can be considered.

This criterion will be met if a significant effect is established at one or two doses on Endpoint 1 (ACR20) and also at one or two doses on Endpoint 2 (HAQ-DI). However, due to the hierarchical structure of the testing strategy (see Figure), this is equivalent to demonstrating a significant difference between Placebo and at least one dose with respect to Endpoint 2. The corresponding criterion is a subset disjunctive criterion based on the two Endpoint 2 tests (subset disjunctive power was briefly mentioned in Case study 2).

In addition, the sponsor may also be interested in evaluating marginal power as well as subset disjunctive power based on the Endpoint 1 tests. The latter criterion will be met if a significant difference between Placebo and at least one dose is established with respect to Endpoint 1. Additionally, as in Case study 2, the user could consider defining custom evaluation criteria. The three resulting evaluation criteria (marginal power, subset disjunctive criterion based on the Endpoint 1 tests and subset disjunctive criterion based on the Endpoint 2 tests) are included in the following evaluation model.
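
A sketch of this evaluation model (criterion IDs and labels are illustrative):

case.study5.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs DoseL - ACR20",
                          "Placebo vs DoseH - ACR20",
                          "Placebo vs DoseL - HAQ-DI",
                          "Placebo vs DoseH - HAQ-DI"),
            labels = c("Placebo vs DoseL (ACR20)",
                       "Placebo vs DoseH (ACR20)",
                       "Placebo vs DoseL (HAQ-DI)",
                       "Placebo vs DoseH (HAQ-DI)"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Disjunctive power - ACR20",
            method = "DisjunctivePower",
            tests = tests("Placebo vs DoseL - ACR20", "Placebo vs DoseH - ACR20"),
            labels = "Disjunctive power (ACR20)",
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Disjunctive power - HAQ-DI",
            method = "DisjunctivePower",
            tests = tests("Placebo vs DoseL - HAQ-DI", "Placebo vs DoseH - HAQ-DI"),
            labels = "Disjunctive power (HAQ-DI)",
            par = parameters(alpha = 0.025))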

Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

Case study 6

Summary

Case study 6 is an extension of Case study 2 where the objective of the sponsor is to compare several Multiple Testing Procedures (MTPs). The main difference is in the specification of the analysis model.

Define an Analysis Model

As in Case study 2, each dose-placebo comparison will be performed using a one-sided two-sample t-test (TTest method defined in each Test object). The same nomenclature will be used to define the hypotheses, i.e.:

  • H1: Null hypothesis of no difference between Dose L and placebo.

  • H2: Null hypothesis of no difference between Dose M and placebo.

  • H3: Null hypothesis of no difference between Dose H and placebo.

In this case study, as in Case study 2, the overall success criterion is formulated in terms of demonstrating a beneficial effect at any of the three doses, which induces an inflation of the overall Type I error rate. The sponsor is interested in comparing several Multiple Testing Procedures, namely the weighted Bonferroni, Holm and Hochberg procedures. These MTPs are defined as follows:
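
A sketch of these objects (mult.adj1 requests no multiplicity adjustment, as explained below):

mult.adj1 = MultAdjProc(proc = NA)
mult.adj2 = MultAdjProc(proc = "BonferroniAdj",
                        par = parameters(weight = c(1/4, 1/4, 1/2)))
mult.adj3 = MultAdjProc(proc = "HolmAdj",
                        par = parameters(weight = c(1/4, 1/4, 1/2)))
mult.adj4 = MultAdjProc(proc = "HochbergAdj",
                        par = parameters(weight = c(1/4, 1/4, 1/2)))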

The mult.adj1 object, which specifies that no adjustment will be used, is defined in order to observe the decrease in power induced by each MTP.

It should be noted that for each weighted procedure, a higher weight is assigned to the test of Placebo vs Dose H (1/2), and the remaining weight is equally assigned to the two other tests (i.e. 1/4 for each test). These parameters are specified in the par argument of each MTP.

The analysis model is defined as follows:
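
A sketch (the sample IDs follow Case study 2):

case.study6.analysis.model = AnalysisModel() +
  MultAdj(mult.adj1, mult.adj2, mult.adj3, mult.adj4) +
  Test(id = "Placebo vs Dose L",
       samples = samples("Placebo", "Dose L"),
       method = "TTest") +
  Test(id = "Placebo vs Dose M",
       samples = samples("Placebo", "Dose M"),
       method = "TTest") +
  Test(id = "Placebo vs Dose H",
       samples = samples("Placebo", "Dose H"),
       method = "TTest")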

For the sake of compactness, all MTPs are combined using a MultAdj object, but it is worth mentioning that each MTP could have been directly added to the AnalysisModel object using the + operator.

Generate a Simulation Report

This case study will also illustrate the process of customizing a Word-based simulation report. This can be accomplished by defining custom sections and subsections to provide a structured summary of the complex set of simulation results.

Create a Customized Simulation Report

Define a Presentation Model

Several presentation models will be used to produce customized simulation reports:

  • A report without subsections.

  • A report with subsections.

  • A report with combined sections.

First of all, a default PresentationModel object (case.study6.presentation.model.default) will be created. This object will include the common components of the report that are shared across the presentation models. The project information (Project object), sorting options in summary tables (Table object) and specification of custom labels (CustomLabel objects) are included in this object:
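
A possible specification (the project information and labels are placeholders):

case.study6.presentation.model.default = PresentationModel() +
  Project(username = "[Mediana's user]",
          title = "Case study 6",
          description = "Simulation report for Case study 6") +
  Table(by = "sample.size") +
  CustomLabel(param = "multiplicity.adjustment",
              label = c("No adjustment", "Bonferroni", "Holm", "Hochberg"))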

Report without subsections

The first simulation report will include a section for each outcome parameter set. To accomplish this, a Section object is added to the default PresentationModel object and the report is generated:
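
For example (case.study6.results is assumed to be the CSE object for this case study; the file name is illustrative):

case.study6.presentation.model1 = case.study6.presentation.model.default +
  Section(by = "outcome.parameter")

GenerateReport(presentation.model = case.study6.presentation.model1,
               cse.results = case.study6.results,
               report.filename = "Case study 6 (without subsections).docx")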

Report with subsections

The second report will include a section for each outcome parameter set and, in addition, a subsection will be created for each multiplicity adjustment procedure. The Section and Subsection objects are added to the default PresentationModel object as shown below and the report is generated:
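
A sketch, reusing the objects defined above:

case.study6.presentation.model2 = case.study6.presentation.model.default +
  Section(by = "outcome.parameter") +
  Subsection(by = "multiplicity.adjustment")

GenerateReport(presentation.model = case.study6.presentation.model2,
               cse.results = case.study6.results,
               report.filename = "Case study 6 (with subsections).docx")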

Report with combined sections

Finally, the third report will include a section for each combination of outcome parameter set and multiplicity adjustment procedure. This is accomplished by adding a Section object to the default PresentationModel object and specifying the outcome parameter and multiplicity adjustment in the section’s by argument.
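
A sketch, reusing the objects defined above:

case.study6.presentation.model3 = case.study6.presentation.model.default +
  Section(by = c("outcome.parameter", "multiplicity.adjustment"))

GenerateReport(presentation.model = case.study6.presentation.model3,
               cse.results = case.study6.results,
               report.filename = "Case study 6 (combined sections).docx")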

Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

Mediana/NAMESPACE

# Generated by roxygen2: do not edit by hand

S3method("+",AnalysisModel)
S3method("+",DataModel)
S3method("+",EvaluationModel)
S3method("+",PresentationModel)
S3method(AnalysisModel,MultAdj)
S3method(AnalysisModel,MultAdjProc)
S3method(AnalysisModel,MultAdjStrategy)
S3method(AnalysisModel,Statistic)
S3method(AnalysisModel,Test)
S3method(AnalysisModel,default)
S3method(AnalysisStack,default)
S3method(CSE,default)
S3method(DataModel,Design)
S3method(DataModel,Event)
S3method(DataModel,OutcomeDist)
S3method(DataModel,SampleSize)
S3method(DataModel,default)
S3method(DataStack,default)
S3method(EvaluationModel,Criterion)
S3method(EvaluationModel,default)
S3method(GenerateData,default)
S3method(GenerateReport,default)
S3method(MultAdj,MultAdjProc)
S3method(MultAdj,MultAdjStrategy)
S3method(MultAdj,default)
S3method(MultAdjStrategy,MultAdjProc)
S3method(MultAdjStrategy,default)
S3method(PresentationModel,CustomLabel)
S3method(PresentationModel,Project)
S3method(PresentationModel,Section)
S3method(PresentationModel,Subsection)
S3method(PresentationModel,Table)
S3method(PresentationModel,default)
S3method(summary,CSE)
export(AdjustCIs)
export(AdjustPvalues)
export(AnalysisModel)
export(AnalysisStack)
export(CSE)
export(Criterion)
export(CustomLabel)
export(DataModel)
export(DataStack)
export(Design)
export(EvaluationModel)
export(Event)
export(ExtractAnalysisStack)
export(ExtractDataStack)
export(GenerateData)
export(GenerateReport)
export(MultAdj)
export(MultAdjProc)
export(MultAdjStrategy)
export(OutcomeDist)
export(PresentationModel)
export(Project)
export(Sample)
export(SampleSize)
export(Section)
export(SimParameters)
export(Statistic)
export(Subsection)
export(Table)
export(Test)
export(families)
export(parameters)
export(samples)
export(statistics)
export(tests)
import(doParallel)
import(doRNG)
import(foreach)
importFrom(stats,poisson)

Mediana/NEWS.md

# Mediana 1.0.8

## New features

* Create an hexagon sticker for the package.

## Bug fixes

* Fix the calculation of intersection hypothesis pvalue when family weights is null for all gatekeeping procedures.
* Revise the error fraction function to avoid floating point issue
* Fix the images in the Case studies vignette
* Revise the specification of serial and parallel parameters in MixtureGatekeepingAdj (matrix instead of list)
* Revise the Outcome table generation function used for reporting

# Mediana 1.0.7

## Bug fixes

* As the `ReporteRs` R package is not available on the CRAN anymore, the report generation feature has been revised using the `officer` and `flextable` R packages. These packages are now required to use the `GenerateReport` function.

# Mediana 1.0.6

## New features

* Addition of the multinomial distribution (`MultinomialDist`, see [Analysis model](http://gpaux.github.io/Mediana/DataModel.html#OutcomeDistobject)).
* Addition of the ordinal logistic regression test (`OrdinalLogisticRegTest`, see [Analysis model](http://gpaux.github.io/Mediana/AnalysisModel.html#Testobject)).
* Addition of the Proportion statistic (`PropStat`, see [Analysis model](http://gpaux.github.io/Mediana/AnalysisModel.html#Statisticobject)).
* Addition of the Fallback procedure (`FallbackAdj`, see [Analysis model](http://gpaux.github.io/Mediana/AnalysisModel.html#MultAdjProcobject)).
* Addition of a function to get the analysis results generated in the CSE using the `AnalysisStack` function (see [Analysis stack](http://gpaux.github.io/Mediana/AnalysisStack.html)).
* Addition of the `ExtractAnalysisStack` function to extract a specific set of results in an `AnalysisStack` object (see [Analysis stack](http://gpaux.github.io/Mediana/AnalysisStack.html#ExtractAnalysisStack.html)).
* Creation of a vignette to describe the functions implementing the adjusted *p*-values (`AdjustPvalues`) and one-sided simultaneous confidence intervals (`AdjustCIs`).
* Minor revisions of the generated report
* It is now possible to use an option to specify the desirable direction of the treatment effect in a test, e.g., `larger = TRUE` means that numerically larger values are expected in the second sample compared to the first sample and `larger = FALSE` otherwise. This is an optional argument for all two-sample statistical tests to be included in the Test object. By default, if this argument is not specified, it is expected that a numerically larger value is expected in the second sample (i.e., by default `larger = TRUE`).

## Bug fixes

* Due to difficulties for several users to install the Mediana R package because of java issue, the `ReporteRs` R package is not required anymore (remove from Imports). However, to be able to generate the report, the user will require to have the `ReporteRs` R package installed.
* Minor revision to the two-sample non-inferiority test for proportions to ensure that the number of successes is not greater than the sample size

# Mediana 1.0.5

## New features

* Addition of the `AdjustPvalues` function which can be used to get adjusted p-values from a Multiple Testing Procedure. This function cannot be used within the CSE framework but it is an add-on function to compute adjusted p-values.
* Addition of the `AdjustCIs` function which can be used to get simultaneous confidence intervals from a Multiple Testing Procedure. This function cannot be used within the CSE framework but it is an add-on function to simultaneous confidence intervals.
* Creation of vignettes

## Bug fixes

* Revision of the dropout generation mechanism for time-to-event endpoints.

# Mediana 1.0.4

## New features

* Addition of the Fixed-sequence procedure (`FixedSeqAdj`, see [Analysis model](http://gpaux.github.io/Mediana/AnalysisModel.html#MultAdjProcobject)).
* Addition of the Cox method to calculate the HR, effect size and ratio of effect size for time-to-event endpoint. This can be accomplished by setting the `method` argument in the parameter list to set-up the calculation based on the Cox method. (`par = parameters(method = "Cox"`), see [Analysis model](http://gpaux.github.io/Mediana/AnalysisModel.html#Statisticobject)).
* Addition of the package version information in the report.

## Bug fixes

* Revision of one-sided p-value computation for Log-Rank test.
* Revision of the call for Statistic in the core function (not visible).
* Revision of the function to calculate the Hazard Ratio Statistic (HazardRatioStat method). By default, this calculation is now based on the log-rank statistic ((O2/E2)/(O1/E1) where O and E are Observed and Expected event in sample 2 and sample 1.
A parameter can be added using the `method` argument in the parameter list to set-up the calculation based on the Cox method (`par = parameters(method = "Cox"`), see [Analysis model](http://gpaux.github.io/Mediana/AnalysisModel.html#Statisticobject)).
* Revision of the function to calculate the effect size for time-to-event endpoint (`EffectSizeEventStat` method, based on the `HazardRatioStat` method)
* Revision of the functions to calculate the ratio of effect size for continuous (`RatioEffectSizeContStat` method), binary (`RatioEffectSizePropStat` method) and event (`RatioEffectSizeEventStat method`) endpoint.
* Revision of the function to generate the Test, Statistic, Design and result tables in the report.

# Mediana 1.0.3

## New features

* Addition of the Beta distribution (`BetaDist`, see [Data model](http://gpaux.github.io/Mediana/DataModel.html#OutcomeDistobject)).
* Addition of the Truncated exponential distribution, which could be used as enrollment distribution (`TruncatedExpoDist`, see [Data model](http://gpaux.github.io/Mediana/DataModel.html#OutcomeDistobject)).
* Addition of the Non-inferiority test for proportion (`PropTestNI`, see [Analysis model](http://gpaux.github.io/Mediana/AnalysisModel.html#Testobject)).
* Addition of the mixture-based gatekeeping procedure (`MixtureGatekeepingAdj` see [Analysis model](http://gpaux.github.io/Mediana/AnalysisModel.html#MultAdjProcobject)).
* Addition of a function to get the data generated in the CSE using the `DataStack` function (see [Data stack](http://gpaux.github.io/Mediana/DataStack.html)).
* Addition of a function to extract a specific set of data in a `DataStack` object (see [Data stack](http://gpaux.github.io/Mediana/DataStack.html#ExtractDataStack)).
* Addition of the "Evaluation Model" section in the generated report describing the criteria and their parameters (see [Simulation report](http://gpaux.github.io/Mediana/Reporting.html#Description18)).

## Bug fixes

* Revision of the generation of dropout time.
* Correction of the `NormalParamAdj` function.
* Correction of the `FisherTest` function.

Mediana/R/CustomLabel.R

######################################################################################################################
# Function: CustomLabel.
# Argument: by.
# Description: This function is used to create an object of class CustomLabel.
#' @export
CustomLabel = function(param, label) {

  # Error checks
  if (!is.character(param)) stop("CustomLabel: param must be character.")
  if (!(param %in% c("sample.size", "event", "outcome.parameter", "design.parameter", "multiplicity.adjustment"))) stop("CustomLabel: param is invalid.")
  if (!is.character(label)) stop("CustomLabel: label must be character.")

  custom.label = list(param = param, label = label)

  class(custom.label) = "CustomLabel"
  return(custom.label)
  invisible(custom.label)
}

Mediana/R/MVExpoPFSOSDist.R

######################################################################################################################
# Function: MVExpoPFSOSDist.
# Argument: List of parameters (number of observations, list(list(rate), correlation matrix).
# Description: This function is used to generate correlated exponential outcomes for PFS and OS.
# Time of PFS cannot be greater than time of OS MVExpoPFSOSDist = function(parameter) { # Error checks if (missing(parameter)) stop("Data model: MVExpoPFSOSDist distribution: List of parameters must be provided.") if (is.null(parameter[[2]]$par)) stop("Data model: MVExpoPFSOSDist distribution: Parameter list (rate) must be specified.") if (is.null(parameter[[2]]$corr)) stop("Data model: MVExpoPFSOSDist distribution: Correlation matrix must be specified.") par = parameter[[2]]$par corr = parameter[[2]]$corr # Number of endpoints m = length(par) if (m != 2) stop("Data model: MVExpoPFSOSDist distribution: Only PFS and OS must be defined (2 endpoints)") if (ncol(corr) != m) stop("Data model: MVExpoPFSOSDist distribution: The size of the hazard rate vector is different to the dimension of the correlation matrix.") if (sum(dim(corr) == c(m, m)) != 2) stop("Data model: MVExpoPFSOSDist distribution: Correlation matrix is not correctly defined.") if (det(corr) <= 0) stop("Data model: MVExpoPFSOSDist distribution: Correlation matrix must be positive definite.") if (any(corr < -1 | corr > 1)) stop("Data model: MVExpoPFSOSDist distribution: Correlation values must be comprised between -1 and 1.") # Determine the function call, either to generate distribution or to return description call = (parameter[[1]] == "description") # Generate random variables if (call == FALSE) { # Error checks n = parameter[[1]] if (n%%1 != 0) stop("Data model: MVExpoPFSOSDist distribution: Number of observations must be an integer.") if (n <= 0) stop("Data model: MVExpoPFSOSDist distribution: Number of observations must be positive.") # Generate multivariate normal variables multnorm = mvtnorm::rmvnorm(n = n, mean = rep(0, m), sigma = corr) # Store resulting multivariate variables mvmixed = matrix(0, n, m) # Convert selected components to a uniform distribution and then to exponential distribution for (i in 1:m) { uniform = stats::pnorm(multnorm[, i]) if (is.null(par[[i]]$rate)) stop("Data model: MVExpoPFSOSDist distribution: Hazard rate parameter in the exponential distribution must be specified.") # Hazard rate hazard = as.numeric(par[[i]]$rate) if (hazard <= 0) stop("Data model: MVExpoPFSOSDist distribution: Hazard rate parameter in the exponential distribution must be positive.") mvmixed[, i] = -log(uniform)/hazard } # if Time of PFS is greater than time of OS, in that case, time of PFS will be replaced by time of OS PFSsupOS = mvmixed[,1]>mvmixed[,2] mvmixed[PFSsupOS,1]=mvmixed[PFSsupOS,2] result = mvmixed } else { # Provide information about the distribution function if (call == TRUE) { # Labels of distributional parameters par.labels = list() for (i in 1:m) { par.labels[[i]] = list(rate = "rate") } result = list(list(par = par.labels, corr = "corr"),list("Multivariate Exponential for PFS and OS")) } } return(result) } # End of MVExpoPFSOSDistMediana/R/CSE.R0000644000176200001440000000063413434027610012551 0ustar liggesusers############################################################################################################################ # Function: CSE # Argument: .... # Description: This function applies the metrics specified in the evaluation model to the test results (p-values) and # summaries to the statistic results. 
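# Usage sketch (hedged; the object names below are hypothetical): CSE() is the
# generic entry point that dispatches on the class of its arguments, e.g.
# results = CSE(data.model, analysis.model, evaluation.model, sim.parameters)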
#' @export CSE = function(data, analysis, evaluation, simulation) { UseMethod("CSE") }Mediana/R/CreateDataScenarioEvent.R0000644000176200001440000000552413434027610016625 0ustar liggesusers####################################################################################################################### # Function: CreateDataScenarioEvent. # Argument: Data frame of patients and number of events. # Description: Create data stack for the current number of events. This function is used in the CreateDataStack function when the user uses the Event. CreateDataScenarioEvent = function(current.design.outcome.variables, current.events, rando.ratio) { # List of current data scenario current.data.scenario = list() current.data.scenario.index = 0 # Get the number of samples n.samples = length(current.design.outcome.variables) # Get the number of outcome n.outcomes = length(current.design.outcome.variables[[1]]) # Get the patient indicator censor of the primary outcome from all samples current.design.outcome.variables.primary = lapply(current.design.outcome.variables, function(x) !x[[1]]$data[,"patient.censor.indicator"]) # Add rows in case of unbalance randomization to bind by column maxrow = max(unlist(lapply(current.design.outcome.variables.primary, length))) current.design.outcome.variables.primary.complete = mapply(cbind,lapply(current.design.outcome.variables.primary, function(x) { length(x) = maxrow return(x) })) # Calculate the cumulative number of events for each sample according to the randomization ratio n.events.cum = mapply(function(x,y) cumsum(x)[seq(y,length(x), y)], current.design.outcome.variables.primary, as.list(rando.ratio)) index.patient = which(rowSums(n.events.cum)>=current.events)[1] # Get the number of patients required to get the current number of events in each sample index.patient = rando.ratio*index.patient # For each sample, generate the data for each outcome for the current sample size for (sample.index in 1:n.samples){ for (outcome.index in 1:n.outcomes){ # Increment the index current.data.scenario.index = current.data.scenario.index + 1 # Get the data for the current sample.size current.data = current.design.outcome.variables[[sample.index]][[outcome.index]]$data[(1:index.patient[[sample.index]]),] # Get the sample id current.id = current.design.outcome.variables[[sample.index]][[outcome.index]]$id # Get the outcome type current.outcome.type = current.design.outcome.variables[[sample.index]][[outcome.index]]$outcome.type # Add the current sample in the list current.data.scenario[[current.data.scenario.index]] = list(id = current.id, outcome.type = current.outcome.type, data = current.data ) } } # Return the object return(current.data.scenario) } # End of CreateDataScenarioEvent Mediana/R/BonferroniAdj.global.R0000644000176200001440000000133413434027610016116 0ustar liggesusers###################################################################################################################### # Function: BonferroniAdj.global. # Argument: p, Vector of p-values (1 x m) # n, Total number of testable hypotheses (in the case of modified mixture procedure) (1 x 1) # gamma, Vector of truncation parameter (1 x 1) # Description: Compute global p-value for the Bonferroni multiple testing procedure. 
The function returns the global adjusted pvalue (1 x 1) BonferroniAdj.global = function(p, n, gamma) { # Number of p-values k = length(p) if (k > 0 & n > 0) { adjp = n * min(p) # Bonferonni procedure } else adjp = 1 return(adjp) } # End of BonferroniAdj.globalMediana/R/DiffPropStat.R0000644000176200001440000000230113434027610014475 0ustar liggesusers###################################################################################################################### # Compute the difference of proportions between two samples for binary variable based on non-missing values in the combined sample DiffPropStat = function(sample.list, parameter) { # Determine the function call, either to generate the statistic or to return description call = (parameter[[1]] == "Description") if (call == FALSE | is.na(call)) { # Error checks if (length(sample.list)!=2) stop("Analysis model: Two samples must be specified in the DiffPropStat statistic.") # Merge the samples in the sample list sample1 = sample.list[[1]] # Merge the samples in the sample list sample2 = sample.list[[2]] # Select the outcome column and remove the missing values due to dropouts/incomplete observations outcome1 = sample1[, "outcome"] outcome2 = sample2[, "outcome"] prop1 = mean(stats::na.omit(outcome1)) prop2 = mean(stats::na.omit(outcome2)) result = (prop2 - prop1) } else if (call == TRUE) { result = list("Difference of proportions") } return(result) } # End of DiffPropStatMediana/R/OrdinalLogisticRegTest.R0000644000176200001440000000442313434027610016523 0ustar liggesusers###################################################################################################################### # Function: OrdinalLogisticRegTest # Argument: Data set and parameter (call type). # Description: Computes one-sided p-value based on Ordinal Logistic regression. 
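# Calling convention sketch (assumed by analogy with the other test functions in
# this package; the data frames and the "Analysis" call type are hypothetical):
# sample.list holds two data frames with an "outcome" column of ordinal responses
# coded 1, 2, ..., and the optional larger flag is passed in the parameter list, e.g.
# p = OrdinalLogisticRegTest(list(sample1, sample2), list("Analysis", list(larger = TRUE)))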
OrdinalLogisticRegTest = function(sample.list, parameter) { # Determine the function call, either to generate the p-value or to return description call = (parameter[[1]] == "Description") if (call == FALSE | is.na(call)) { # No parameters are defined if (is.na(parameter[[2]])) { larger = TRUE } else { if (!all(names(parameter[[2]]) %in% c("larger"))) stop("Analysis model: OrdinalRegTest test: this function accepts only one argument (larger)") # Parameters are defined but not the larger argument if (!is.logical(parameter[[2]]$larger)) stop("Analysis model: OrdinalRegTest test: the larger argument must be logical (TRUE or FALSE).") larger = parameter[[2]]$larger } # Sample list is assumed to include two data frames that represent two analysis samples # Outcomes in Sample 1 outcome1 = sample.list[[1]][, "outcome"] # Remove the missing values due to dropouts/incomplete observations outcome1.complete = outcome1[stats::complete.cases(outcome1)] # Outcomes in Sample 2 outcome2 = sample.list[[2]][, "outcome"] # Remove the missing values due to dropouts/incomplete observations outcome2.complete = outcome2[stats::complete.cases(outcome2)] # Data frame data.complete = data.frame(rbind(cbind(2, outcome2.complete), cbind(1, outcome1.complete))) colnames(data.complete) = c("TRT", "RESPONSE") data.complete$TRT=as.factor(data.complete$TRT) # Order the level of the response data.complete$RESPONSE=factor(data.complete$RESPONSE, levels = 1:max(data.complete$RESPONSE), ordered = TRUE) # One-sided p-value (to be checked) z = summary(MASS::polr(RESPONSE ~ TRT, data = data.complete, Hess = TRUE))$coefficients["TRT2", "t value"] result = stats::pnorm(z, lower.tail = !larger) } else if (call == TRUE) { result=list("Ordinal logistic regression test") } return(result) } # End of OrdinalRegTest Mediana/R/FixedSeqAdj.R0000644000176200001440000000134513434027610014266 0ustar liggesusers###################################################################################################################### # Function: FixedSeqAdj. # Argument: p, Vector of p-values (1 x m) # par, List of procedure parameters: vector of hypothesis weights (1 x m) matrix of transition parameters (m x m) # Description: Fixed sequence procedure. FixedSeqAdj = function(p, par) { # Determine the function call, either to generate the p-value or to return description call = (par[[1]] == "Description") if (any(call == FALSE) | any(is.na(call))) { result = cummax(p) } else if (call == TRUE) { result=list(list("Fixed-sequence procedure"),NULL) } return(result) } # End of FixedSeqAdj Mediana/R/HochbergAdj.R0000644000176200001440000000565313434027610014305 0ustar liggesusers###################################################################################################################### # Function: HochbergAdj. # Argument: p, Vector of p-values (1 x m) # par, List of procedure parameters: vector of hypothesis weights (1 x m) # Description: Hochberg multiple testing procedure. 
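# Worked sketch (hypothetical p-values): with equal weights, the weighted Hochberg
# adjustment below reduces to the classical Hochberg procedure; for raw p-values
# 0.011, 0.023 and 0.041 it should return adjusted values of roughly 0.033, 0.041 and 0.041.
# adjp = HochbergAdj(c(0.011, 0.023, 0.041), list("Analysis", list(weight = rep(1/3, 3))))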
HochbergAdj = function(p, par) {

  # Determine the function call, either to generate the p-value or to return description
  call = (par[[1]] == "Description")

  # Number of p-values
  m = length(p)

  # Extract the vector of hypothesis weights (1 x m)
  if (!any(is.na(par[[2]]))) {
    if (is.null(par[[2]]$weight)) stop("Analysis model: Hochberg procedure: Hypothesis weights must be specified.")
    w = par[[2]]$weight
  } else {
    w = rep(1/m, m)
  }

  if (any(call == FALSE) | any(is.na(call))) {

    # Error checks
    if (length(w) != m) stop("Analysis model: Hochberg procedure: Length of the weight vector must be equal to the number of hypotheses.")
    if (sum(w) != 1) stop("Analysis model: Hochberg procedure: Hypothesis weights must add up to 1.")
    if (any(w < 0)) stop("Analysis model: Hochberg procedure: Hypothesis weights must be greater than 0.")

    if (max(w) == min(w)) {
      # Index of ordered p-values
      ind <- order(p, decreasing = TRUE)
      # Adjusted p-values
      result <- pmin(1, cummin(cumsum(w[ind]) * p[ind]/w[ind]))[order(ind)]
    } else {
      # Compute the weighted incomplete Simes p-value for an intersection hypothesis
      incsimes <- function(p, u) {
        k <- length(u[u != 0 & !is.nan(u)])
        if (k > 1) {
          temp = matrix(0, 2, k)
          temp[1, ] <- p[u != 0]
          temp[2, ] <- u[u != 0]
          sort <- temp[, order(temp[1, ])]
          modu <- u[u != 0]
          modu[1] <- 0
          modu[2:k] <- sort[2, 1:k - 1]
          incsimes <- min((1 - cumsum(modu)) * sort[1, ]/sort[2, ])
        } else if (k == 1) {
          incsimes <- p[u != 0]/u[u != 0]
        } else if (k == 0) incsimes <- 1
        return(incsimes)
      }
      # End of incsimes

      # Number of intersection hypotheses
      nbint <- 2^m - 1

      # Matrix of intersection hypotheses
      int <- matrix(0, nbint, m)
      for (i in 1:m) {
        for (j in 0:(nbint - 1)) {
          k <- floor(j/2^(m - i))
          if (k/2 == floor(k/2)) int[j + 1, i] <- 1
        }
      }

      # Matrix of local p-values
      int.pval <- matrix(0, nbint, m)
      # Vector of weights for local test
      w.loc <- rep(0, m)
      # Local p-values for intersection hypotheses
      for (i in 1:nbint) {
        w.loc <- w * int[i, ]/sum(w * int[i, ])
        int.pval[i, ] <- int[i, ] * incsimes(p, w.loc)
      }
      result <- apply(int.pval, 2, max)
    }
  } else if (call == TRUE) {
    weight = paste0("Weight={", paste(round(w, 2), collapse = ","), "}")
    result = list(list("Hochberg procedure"), list(weight))
  }
  return(result)
}
# End of HochbergAdj
Mediana/R/WilcoxTest.R0000644000176200001440000000367113434027610014250 0ustar liggesusers######################################################################################################################
# Function: WilcoxTest.
# Argument: Data set and parameter (call type).
# Description: Computes a one-sided p-value based on the two-sample non-parametric Wilcoxon-Mann-Whitney test.
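# Example sketch (hypothetical data): one-sided Wilcoxon-Mann-Whitney comparison
# expecting larger values in the second sample; passing NA as the parameter list
# keeps the default larger = TRUE.
# s1 = data.frame(outcome = c(1.2, 0.8, 1.5, 0.9))
# s2 = data.frame(outcome = c(1.9, 2.1, 1.7, 2.4))
# p = WilcoxTest(list(s1, s2), list("Analysis", NA))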
WilcoxTest = function(sample.list, parameter) {

  # Determine the function call, either to generate the p-value or to return description
  call = (parameter[[1]] == "Description")

  if (call == FALSE | is.na(call)) {

    # No parameters are defined
    if (is.na(parameter[[2]])) {
      larger = TRUE
    } else {
      if (!all(names(parameter[[2]]) %in% c("larger"))) stop("Analysis model: WilcoxTest test: this function accepts only one argument (larger)")
      # Parameters are defined but not the larger argument
      if (!is.logical(parameter[[2]]$larger)) stop("Analysis model: WilcoxTest test: the larger argument must be logical (TRUE or FALSE).")
      larger = parameter[[2]]$larger
    }

    # Sample list is assumed to include two data frames that represent two analysis samples
    # Outcomes in Sample 1
    outcome1 = sample.list[[1]][, "outcome"]
    # Remove the missing values due to dropouts/incomplete observations
    outcome1.complete = outcome1[stats::complete.cases(outcome1)]
    # Outcomes in Sample 2
    outcome2 = sample.list[[2]][, "outcome"]
    # Remove the missing values due to dropouts/incomplete observations
    outcome2.complete = outcome2[stats::complete.cases(outcome2)]

    # One-sided p-value (treatment effect in sample 2 is expected to be greater than in sample 1)
    if (larger) result = stats::wilcox.test(outcome2.complete, outcome1.complete, alternative = "greater")$p.value
    else result = stats::wilcox.test(outcome2.complete, outcome1.complete, alternative = "less")$p.value

  } else if (call == TRUE) {
    result = list("Wilcoxon-Mann-Whitney test")
  }
  return(result)
}
# End of WilcoxTest
Mediana/R/FixedSeqAdj.CI.R0000644000176200001440000000450713434027610014563 0ustar liggesusers######################################################################################################################
# Function: FixedSeqAdj.CI
# Argument: est, Vector of point estimates (1 x m)
#           par, List of procedure parameters (n, sd, covprob)
# Description: Fixed-sequence multiple testing procedure.
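# Example sketch (all numbers hypothetical): simultaneous lower confidence bounds
# for two hierarchically ordered comparisons with n = 50 per arm, a common
# standard deviation of 1 for both endpoints and 97.5% simultaneous coverage.
# ci = FixedSeqAdj.CI(est = c(0.6, 0.4),
#                     par = list("Analysis", list(n = 50, sd = c(1, 1), covprob = 0.975)))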
FixedSeqAdj.CI = function(est, par) { # Number of point estimate m = length(est) # Extract the sample size if (is.null(par[[2]]$n)) stop("Fixed-sequence procedure: Sample size must be specified (n).") n = par[[2]]$n # Extract the standard deviation if (is.null(par[[2]]$sd)) stop("Fixed-sequence procedure: Standard deviation must be specified (sd).") sd = par[[2]]$sd # Extract the simultaneous coverage probability if (is.null(par[[2]]$covprob)) stop("Fixed-sequence procedure: Coverage probability must be specified (covprob).") covprob = par[[2]]$covprob # Error checks if (m != length(est)) stop("Fixed-sequence procedure: Length of the point estimate vector must be equal to the number of hypotheses.") if (m != length(sd)) stop("Fixed-sequence procedure: Length of the standard deviation vector must be equal to the number of hypotheses.") if (covprob>=1 | covprob<=0) stop("Fixed-sequence procedure: simultaneous coverage probability must be >0 and <1") # Standard errors stderror = sd*sqrt(2/n) # T-statistics associated with each test stat = est/stderror # Compute degrees of freedom nu = 2*(n-1) # Compute raw one-sided p-values rawp = 1-stats::pt(stat,nu) # Compute the adjusted p-values adjustpval = FixedSeqAdj(rawp, list("Analysis")) # Compute the simultaneous confidence interval alpha = 1-covprob ci = rep(NA,m) rejected = (adjustpval <= alpha) if(all(rejected)){ # All null hypotheses are rejected ci = min(est-stderror*stats::qnorm(1-alpha)) } else if(!any(rejected)){ # All null hypotheses are accepted ci[1] = est[1]-stderror[1]*stats::qnorm(1-alpha) } else if (any(rejected)){ # Some null hypotheses are accepted and some are rejected last_rejected = utils::tail(which(rejected), n = 1) ci[1:(last_rejected)] = 0 ci[last_rejected + 1] = est[last_rejected + 1]-stderror[last_rejected + 1]*stats::qnorm(1-alpha) } return(ci) } # End of FixedSeqAdj.CI Mediana/R/AnalysisModel.MultAdjProc.R0000644000176200001440000000115513434027610017065 0ustar liggesusers###################################################################################################################### # Function: AnalysisModel.MultAdjProc # Argument: MultAdjProc object. # Description: This function is called by default if the class of the argument is a MultAdjProc object. #' @export AnalysisModel.MultAdjProc = function(multadjproc, ...) { analysismodel = AnalysisModel() analysismodel = analysismodel + multadjproc args = list(...) if (length(args)>0) { for (i in 1:length(args)){ analysismodel = analysismodel + args[[i]] } } return(analysismodel) }Mediana/R/z+.EvaluationModel.R0000644000176200001440000000134713434027611015555 0ustar liggesusers###################################################################################################################### # Function: +.EvaluationModel. # Argument: Two objects (EvaluationModel and another object). 
# Description: This function is used to add objects to the EvaluationModel object #' @export "+.EvaluationModel" = function(evaluationmodel, object) { if (is.null(object)) return(evaluationmodel) else if (class(object) == "Criterion"){ ncriteria = length(evaluationmodel$criteria) evaluationmodel$criteria[[ncriteria+1]] = unclass(object) } else stop(paste0("Evaluation Model: Impossible to add the object of class ",class(object)," to the Evaluation Model")) return(evaluationmodel) }Mediana/R/CreateTableCriterion.R0000644000176200001440000000455013434027610016172 0ustar liggesusers############################################################################################################################ # Function: CreateTableCriterion. # Argument: analysis.strucure and label (optional). # Description: Generate a summary table of criteria for the report. CreateTableCriterion = function(evaluation.structure, label = NULL) { # Number of criterion n.criterion = length(evaluation.structure$criterion) criterion.table = matrix(nrow = n.criterion, ncol = 6) ntest = unlist(lapply(evaluation.structure$criterion, function(x) length(x$test))) nstatistic = unlist(lapply(evaluation.structure$criterion, function(x) length(x$statistics))) npar = unlist(lapply(evaluation.structure$criterion, function(x) length(unlist(x$par)[which(!is.na(unlist(x$par)))]))) for (i in 1:n.criterion) { criterion.table[i, 1] = evaluation.structure$criterion[[i]]$id criterion.table[i, 2] = evaluation.structure$criterion[[i]]$method criterion.table[i, 3] = ifelse(npar[i]>0, paste0(names(evaluation.structure$criterion[[i]]$par)," = ", lapply(evaluation.structure$criterion[[i]]$par, function(x) round(x,4)), collapse = "\n"), "") criterion.table[i, 4] = ifelse(ntest[i]>0, #paste0("{",paste0(unlist(evaluation.structure$criterion[[i]]$test), collapse = ", "),"}"), paste0(unlist(evaluation.structure$criterion[[i]]$test), collapse = "\n"), "") criterion.table[i, 5] = ifelse(nstatistic[i]>0, #paste0("{",paste0(unlist(evaluation.structure$criterion[[i]]$statistics), collapse = ", "),"}"), paste0(unlist(evaluation.structure$criterion[[i]]$statistics), collapse = "\n"), "") #criterion.table[i, 6] = paste0("{",paste0(unlist(evaluation.structure$criterion[[i]]$labels), collapse = ", "),"}") criterion.table[i, 6] = paste0(unlist(evaluation.structure$criterion[[i]]$labels), collapse = "\n") } criterion.table = as.data.frame(criterion.table) colnames(criterion.table) = c("Criterion ID", "Criterion method", "Criterion parameters", "Tests", "Statistics", "Label") return(criterion.table[-c(2)]) } # End of CreateTableCriterionMediana/R/HommelAdj.global.R0000644000176200001440000000166613434027610015244 0ustar liggesusers###################################################################################################################### # Function: HommelAdj.global. # Argument: p, Vector of p-values (1 x m) # n, Total number of testable hypotheses (in the case of modified mixture procedure) (1 x 1) # gamma, Vector of truncation parameter (1 x 1) # Description: Compute global p-value for the truncated Hommel multiple testing procedure. 
The function returns the global adjusted pvalue (1 x 1) HommelAdj.global = function(p, n, gamma) { # Number of p-values k = length(p) if (k > 0 & n > 0) { if (gamma == 0) { adjp = n * min(p) } # Bonferonni procedure else if (gamma <= 1) { # Truncated Hommel procedure seq = 1:k denom = seq * gamma/k + (1 - gamma)/n sortp = sort(p) adjp = min(sortp/denom) } } else adjp = 1 return(adjp) } # End of HommelAdj.globalMediana/R/ExpectedRejPower.R0000644000176200001440000000241413434027610015354 0ustar liggesusers############################################################################################################################ # Function: ExpectedRejPower # Argument: Test results (p-values) across multiple simulation runs (vector or matrix), statistic results (not used in this function), # criterion parameter (Type I error rate and weigth). # Description: Compute expected number of rejected hypothesis for the test results (vector of p-values or each column of the p-value matrix). ExpectedRejPower = function(test.result, statistic.result, parameter) { # Error check if (is.null(parameter$alpha)) stop("Evaluation model: WeightedPower: alpha parameter must be specified.") # Get the parameter alpha = parameter$alpha ntests = ncol(test.result) weight = rep(1/ntests,ntests) significant = (test.result <= alpha) if (is.numeric(test.result)) # Only one test is specified and no weight is applied power = mean(significant, na.rm = TRUE) if (is.matrix(test.result)) { # Weights are applied when two or more tests are specified # Check if the number of tests equals the number of weights marginal.power = colMeans(significant) power = ntests * sum(marginal.power * weight, na.rm = TRUE) } return(power) } Mediana/R/CreateEvaluationStructure.R0000644000176200001440000000650513434027610017316 0ustar liggesusers###################################################################################################################### # Function: CreateEvaluationStructure. # Argument: Evaluation model. # Description: This function is based on the old evaluation_model_extract function. It performs error checks in the evaluation model # and creates an "evaluation structure", which is an internal representation of the original evaluation model used by all other Mediana functions. 
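# Input sketch (assumed shape, mirroring the checks below; the criterion shown is
# hypothetical): evaluation.model is expected to carry a list of criteria, each
# with an id, a method naming an existing criterion function, tests and/or
# statistics, optional parameters and labels, e.g.
# evaluation.model = list(general = NULL,
#                         criteria = list(list(id = "Marginal power",
#                                              method = "MarginalPower",
#                                              tests = list("Test 1"),
#                                              statistics = NULL,
#                                              par = list(alpha = 0.025),
#                                              labels = list("Test 1"))))
# evaluation.structure = CreateEvaluationStructure(evaluation.model)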
CreateEvaluationStructure = function(evaluation.model) { # TO DO Make sure that all criteria IDs are different # General set of evaluation model parameters general = evaluation.model$general if (is.null(evaluation.model$criteria)) stop("Evaluation model: At least one criterion must be specified.") # Extract criterion-specific parameters # Number of criteria n.criteria = length(evaluation.model$criteria) # List of criteria (id, method, test list, statistic list, parameters, result label list) criterion = list() for (i in 1:n.criteria) { # Metric IDs if (is.null(evaluation.model$criteria[[i]]$id)) stop("Evaluation model: IDs must be specified for all criteria.") else id = evaluation.model$criteria[[i]]$id # Criteria if (is.null(evaluation.model$criteria[[i]]$method)) { stop("Evaluation model: Criterion method must be specified for all criteria.") } else if (!exists(evaluation.model$criteria[[i]]$method)) { stop(paste0("Evaluation model: Criterion function '", evaluation.model$criteria[[i]]$method, "' does not exist.")) } else if (!is.function(get(as.character(evaluation.model$criteria[[i]]$method), mode = "any"))) { stop(paste0("Evaluation model: Criterion function '", evaluation.model$criteria[[i]]$method, "' does not exist.")) } else { method = evaluation.model$criteria[[i]]$method } # Tests and statistics if (is.null(evaluation.model$criteria[[i]]$tests) & is.null(evaluation.model$criteria[[i]]$statistics)) stop("Evaluation model: Tests or statistics must be specified for all criteria.") if (!is.null(evaluation.model$criteria[[i]]$tests)) { tests = evaluation.model$criteria[[i]]$tests } else { tests = NULL } if (!is.null(evaluation.model$criteria[[i]]$statistics)) { statistics = evaluation.model$criteria[[i]]$statistics } else { statistics = NULL } # Parameters (optional) if (is.null(evaluation.model$criteria[[i]]$par)) { par = NA } else { par = evaluation.model$criteria[[i]]$par } # Result labels if (is.null(evaluation.model$criteria[[i]]$labels)) { stop(paste0("Evaluation model: Label must be specified for the criterion ",evaluation.model$criteria[[i]]$id,".")) } else { labels = evaluation.model$criteria[[i]]$labels } criterion[[i]] = list(id = id, method = method, tests = tests, statistics = statistics, par = par, labels = labels) } # Create the evaluation structure evaluation.structure = list(description = "evaluation.structure", criterion = criterion, general = general) return(evaluation.structure) } # End of CreateEvaluationStructureMediana/R/CreateTableStatistic.R0000644000176200001440000000345513434027610016206 0ustar liggesusers############################################################################################################################ # Function: CreateTableStatistic. # Argument: analysis.strucure and label (optional). # Description: Generate a summary table of statistic for the report. 
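# Usage sketch (hedged): analysis.structure is assumed to be the internal analysis
# structure built from an AnalysisModel, with one entry per Statistic object; the
# call below returns a data frame with the columns "Statistic ID", "Statistic type",
# "Statistic parameters" and "Samples".
# statistic.table = CreateTableStatistic(analysis.structure)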
CreateTableStatistic = function(analysis.structure, label = NULL) { # Number of statistic n.statistic = length(analysis.structure$statistic) statistic.table = matrix(nrow = n.statistic, ncol = 4) nsample = rep(0,n.statistic) for (i in 1:n.statistic) { statistic.table[i, 1] = analysis.structure$statistic[[i]]$id statistic.desc = do.call(analysis.structure$statistic[[i]]$method,list(c(),list("Description",analysis.structure$statistic[[i]]$par))) statistic.table[i, 2] = statistic.desc[[1]] if (length(statistic.desc)>1) { statistic.table[i, 3] = paste0(statistic.desc[[2]],analysis.structure$statistic[[i]]$par, collapse = "\n") } else { statistic.table[i, 3] = analysis.structure$statistic[[i]]$par } nsample[i]=length(analysis.structure$statistic[[i]]$samples) npersample=rep(0,nsample[i]) sample.id=rep("",nsample[i]) text="" for (j in 1:nsample[i]) { npersample[j]=length(analysis.structure$statistic[[i]]$samples[[j]]) for (k in 1:npersample[j]) { sample.id[j]=paste0(sample.id[j],", ", analysis.structure$statistic[[i]]$samples[[j]][[k]]) } sample.id[j]=paste0("{",sub(", ","",sample.id[j]),"}") text=paste0(text,", ",sample.id[j]) } statistic.table[i, 4] = sub(", ","",text) } statistic.table = as.data.frame(statistic.table) colnames(statistic.table) = c("Statistic ID", "Statistic type", "Statistic parameters", "Samples") return(statistic.table) } # End of CreateTableStatistic Mediana/R/PresentationModel.R0000644000176200001440000000056113434027610015572 0ustar liggesusers###################################################################################################################### # Function: PresentationModel. # Argument: .... # Description: This function is used to call the corresponding function according to the class of the argument. #' @export PresentationModel = function(...) { UseMethod("PresentationModel") }Mediana/R/PresentationModel.CustomLabel.R0000644000176200001440000000121513434027610020000 0ustar liggesusers###################################################################################################################### # Function: PresentationModel.CustomLabel # Argument: CustomLabel object. # Description: This function is called by default if the class of the argument is a CustomLabel object. #' @export PresentationModel.CustomLabel = function(customlabel, ...) { presentationmodel = PresentationModel() presentationmodel = presentationmodel + customlabel args = list(...) if (length(args)>0) { for (i in 1:length(args)){ presentationmodel = presentationmodel + args[[i]] } } return(presentationmodel) }Mediana/R/PresentationModel.default.R0000644000176200001440000000143313434027610017214 0ustar liggesusers###################################################################################################################### # Function: PresentationModel.default # Argument: Multiple objects. # Description: This function is called by default. #' @export PresentationModel.default = function(...) { args = list(...) 
if (length(args) > 0) { stop("Presentation Model doesn't know how to deal with the parameters") } else { presentationmodel = structure( list(project = list(username = "[Unknown User]", title = "[Unknown title]", description = "[No description]"), section.by = NULL, subsection.by = NULL, table.by = NULL, custom.label = NULL), class = "PresentationModel") } return(presentationmodel) }Mediana/R/MixtureGatekeepingAdj.R0000644000176200001440000002432413442015060016354 0ustar liggesusers###################################################################################################################### # Function: MixtureGatekeepingAdj # Argument: rawp, Raw p-value. # par, List of procedure parameters: vector of family (1 x m) Vector of component procedure labels ('BonferroniAdj.global' or 'HolmAdj.global' or 'HochbergAdj.global' or 'HommelAdj.global') (1 x nfam) Vector of truncation parameters for component procedures used in individual families (1 x nfam) # Description: Computation of adjusted p-values for gatekeeping procedures based on the mixture methods (ref Dmitrienko et al. (2011)) MixtureGatekeepingAdj = function(rawp, par) { # Determine the function call, either to generate the p-value or to return description call = (par[[1]] == "Description") if (any(call == FALSE) | any(is.na(call))) { # Error check if (is.null(par[[2]]$family)) stop("Analysis model: Mixture-based gatekeeping procedure: Hypothesis families must be specified.") if (is.null(par[[2]]$proc)) stop("Analysis model: Mixture-based gatekeeping procedure: Procedures must be specified.") if (is.null(par[[2]]$gamma)) stop("Analysis model: Mixture-based gatekeeping procedure: Gamma must be specified.") if (is.null(par[[2]]$parallel)) stop("Analysis model: Mixture-based gatekeeping procedure: Parallel restriction set must be specified.") if (is.null(par[[2]]$serial)) stop("Analysis model: Mixture-based gatekeeping procedure: Serial restriction set must be specified.") # Number of p-values nhyp = length(rawp) # Extract the vector of family (1 x m) family = par[[2]]$family # Number of families in the multiplicity problem nfam = length(family) # Extract the matrix of parallel resriction set (matrix (m x m)) parallel = par[[2]]$parallel # Extract the matrix of serial resriction set (matrix (m x m)) serial = par[[2]]$serial # Number of null hypotheses per family nperfam = lapply(family, function(x) length(x)) # Extract the vector of procedures (1 x m) proc = paste(unlist(par[[2]]$proc), ".global", sep = "") # Extract the vector of truncation parameters (1 x m) gamma = unlist(par[[2]]$gamma) # Simple error checks if (nhyp != length(unlist(family))) stop("Mixture-based gatekeeping adjustment: Length of the p-value vector must be equal to the number of hypothesis.") if (length(proc) != nfam) stop("Mixture-based gatekeeping adjustment: Length of the procedure vector must be equal to the number of families.") else { for (i in 1:nfam) { if (proc[i] %in% c("BonferroniAdj.global", "HolmAdj.global", "HochbergAdj.global", "HommelAdj.global") == FALSE) stop("Mixture-based gatekeeping adjustment: Only Bonferroni (BonferroniAdj), Holm (HolmAdj), Hochberg (HochbergAdj) and Hommel (HommelAdj) component procedures are supported.") } } if (length(gamma) != nfam) stop("Mixture-based gatekeeping adjustment: Length of the gamma vector must be equal to the number of families.") else { for (i in 1:nfam) { if (gamma[i] < 0 | gamma[i] > 1) stop("Mixture-based gatekeeping adjustment: Gamma must be between 0 (included) and 1 (included).") else if (proc[i] == 
"bonferroni.global" & gamma[i] != 0) stop("Mixture-based gatekeeping adjustment: Gamma must be set to 0 for the global Bonferroni procedure.") } } if(!is.matrix(parallel)) stop("Mixture-based gatekeeping adjustment: A matrix must be used to specify the parallel restriction set") if(!is.matrix(serial)) stop("Mixture-based gatekeeping adjustment: A matrix must be used to specify the serial restriction set") if (nhyp != ncol(parallel)) stop("Mixture-based gatekeeping adjustment: Number of columns of the parallel restriction set must be equal to the number of hypothesis.") if (nhyp != nrow(parallel)) stop("Mixture-based gatekeeping adjustment: Number of rows of the parallel restriction set must be equal to the number of hypothesis.") if (nhyp != ncol(serial)) stop("Mixture-based gatekeeping adjustment: Number of columns of the serial restriction set must be equal to the number of hypothesis.") if (nhyp != nrow(serial)) stop("Mixture-based gatekeeping adjustment: Number of rows of the serial restriction set must be equal to the number of hypothesis.") # Number of intersection hypotheses in the closed family nint = 2^nhyp - 1 # Construct the intersection index sets (int_orig) before the logical restrictions are applied. Each row is a vector of binary indicators (1 if the hypothesis is # included in the original index set and 0 otherwise) int_orig = matrix(0, nint, nhyp) serial_index = matrix(1, nint, nhyp) parallel_index = matrix(1, nint, nhyp) testable_index = matrix(1, nint, nhyp) int_rest = matrix(0, nint, nhyp) fam_rest = matrix(1, nint, nhyp) for (i in 1:nhyp) { for (j in 0:(nint - 1)) { k = floor(j/2^(nhyp - i)) if (k/2 == floor(k/2)) int_orig[j + 1, i] = 1 } # Serial index indicates for each row if the hypothesis is testebale after having applied the serial restriction serial_index[,i] = apply(int_orig, 1, function(x) all((x * serial[i,])==0)) # Parallel index indicates for each row if the hypothesis is testebale after having applied the parallel restriction parallel_index[,i] = apply(int_orig, 1, function(x) ifelse(any(parallel[i,]==1), any((x * parallel[i,])[which(parallel[i,]==1)]==0),1)) # Testable index: if serial or parallel indicates that the hypothesis is testable testable_index[,i] = mapply(x = serial_index[,i], y = parallel_index[,i], function(x,y) x==TRUE & y == TRUE) # Construct the intersection index sets (int_rest) and family index sets (fam_rest) after the logical restrictions are applied. 
# Each row is a vector of binary indicators (1 if the hypothesis is included in the restricted index set and 0 otherwise) int_rest[,i] = int_orig[,i] * testable_index[,i] fam_rest[,i] = fam_rest[,i] * testable_index[,i] } # Number of null hypotheses from each family included in each intersection before the logical restrictions are applied korig = do.call(cbind, lapply(family, function(x) apply(as.matrix(int_orig[, x]), 1, sum))) # Number of null hypotheses from each family included in the current intersection after the logical restrictions are applied krest = do.call(cbind, lapply(family, function(x) apply(as.matrix(int_rest[, x]), 1, sum))) # Number of null hypotheses from each family after the logical restrictions are applied nrest = do.call(cbind, lapply(family, function(x) apply(as.matrix(fam_rest[, x]), 1, sum))) # Vector of intersection p-values pint = rep(1, nint) # Matrix of component p-values within each intersection pcomp = matrix(0, nint, nfam) # Matrix of family weights within each intersection c = matrix(0, nint, nfam) # P-value for each hypothesis within each intersection p = matrix(0, nint, nhyp) # Compute the intersection p-value for each intersection hypothesis for (i in 1:nint) { # Compute component p-values for (j in 1:nfam) { # Consider non-empty restricted index sets if (krest[i, j] > 0) { # Restricted index set in the current family int = int_rest[i, family[[j]]] # Set of p-values in the current family pv = rawp[family[[j]]] # Select raw p-values included in the restricted index set pselected = pv[int == 1] # Total number of hypotheses used in the computation of the component p-value # Use the following line for modified mixture method # tot = nrest[i, j] # Use the following line for standard mixture method tot = nperfam[[j]] pcomp[i, j] = do.call(proc[j], list(pselected, tot, gamma[j])) } else if (krest[i, j] == 0) pcomp[i, j] = 1 } # Compute family weights c[i, 1] = 1 for (j in 2:nfam) { # Use the following line for modified mixture method # c[i, j] = c[i, j - 1] * (1 - errorfrac(krest[i, j - 1], nrest[i, j - 1], gamma[j - 1])) # Use the following line for standard mixture method c[i, j] = c[i, j - 1] * (1 - errorfrac(korig[i, j - 1], nperfam[[j - 1]], gamma[j - 1])) } # Compute the intersection p-value for the current intersection hypothesis pint[i] = pmin(1, min(pcomp[i, ]/c[i, ])) # Compute the p-value for each hypothesis within the current intersection p[i, ] = int_orig[i, ] * pint[i] } # Compute adjusted p-values adjustedp = apply(p, 2, max) result = adjustedp } else if (call == TRUE) { family = par[[2]]$family nfam = length(family) proc = unlist(par[[2]]$proc) gamma = unlist(par[[2]]$gamma) serial = par[[2]]$serial parallel = par[[2]]$parallel test.id=unlist(par[[3]]) proc.par = data.frame(nrow = nfam, ncol = 4) for (i in 1:nfam){ proc.par[i,1] = i proc.par[i,2] = paste0("{",paste(test.id[family[[i]]], collapse = ", "),"}") proc.par[i,3] = proc[i] proc.par[i,4] = gamma[i] } colnames(proc.par) = c("Family", "Hypotheses set", "Component procedure", "Truncation parameter") nhyp = length(test.id) hyp.par = data.frame(nrow = nhyp, ncol = 4) index = 0 for (i in 1:nfam){ for (j in 1:length(family[[i]])){ index = index + 1 hyp.par[index,1] = i hyp.par[index,2] = test.id[family[[i]][j]] hyp.par[index,3] = ifelse(sum(parallel[family[[i]][j],])>0,paste0("{",paste(test.id[which(parallel[family[[i]][j],]==1)], collapse = ", "),"}"),"") hyp.par[index,4] = ifelse(sum(serial[family[[i]][j],])>0,paste0("{",paste(test.id[which(serial[family[[i]][j],]==1)], collapse = ", 
"),"}"),"") } } colnames(hyp.par) = c("Family", "Tests", "Parallel rejection set", "Serial rejection set") result=list(list("Mixture-based gatekeeping"),list(proc.par, hyp.par)) } return(result) } # End of MultipleSequenceGatekeepingAdj Mediana/R/Subsection.R0000644000176200001440000000127213434027610014254 0ustar liggesusers###################################################################################################################### # Function: Subsection. # Argument: by. # Description: This function is used to create an object of class SubSection. #' @export Subsection = function(by) { # Error checks if (!is.character(by)) stop("Subsection: by must be character.") if (!any(by %in% c("sample.size", "event", "outcome.parameter", "design.parameter", "multiplicity.adjustment"))) stop("Subsection: the variables included in by are invalid.") subsection.report = list(by = by) class(subsection.report) = "Subsection" return(subsection.report) invisible(subsection.report) }Mediana/R/BinomDist.R0000644000176200001440000000277513434027610014037 0ustar liggesusers###################################################################################################################### # Function: BinomDist . # Argument: List of parameters (number of observations, proportion/probability of success). # Description: This function is used to generate binomial outcomes (0/1). BinomDist = function(parameter) { # Error checks if (missing(parameter)) stop("Data model: BinomDist distribution: List of parameters must be provided.") if (is.null(parameter[[2]]$prop)) stop("Data model: BinomDist distribution: Proportion must be specified.") prop = parameter[[2]]$prop if (prop < 0 | prop > 1) stop("Data model: BinomDist distribution: Proportion must be between 0 and 1.") # Determine the function call, either to generate distribution or to return description call = (parameter[[1]] == "description") # Generate random variables if (call == FALSE) { # Error checks n = parameter[[1]] if (n%%1 != 0) stop("Data model: BinomDist distribution: Number of observations must be an integer.") if (n <= 0) stop("Data model: BinomDist distribution: Number of observations must be positive.") result = stats::rbinom(n = n, size = 1, prob = prop) } else { # Provide information about the distribution function if (call == TRUE) { # Labels of distributional parameters result = list(list(prop = "prop"),list("Binomial")) } } return(result) } #End of BinomDistMediana/R/MaxStat.R0000644000176200001440000000157413434027610013524 0ustar liggesusers###################################################################################################################### # Compute the min based on non-missing values in the combined sample MaxStat = function(sample.list, parameter) { # Determine the function call, either to generate the statistic or to return description call = (parameter[[1]] == "Description") if (call == FALSE | is.na(call)) { # Error checks if (length(sample.list)!=1) stop("Analysis model : Only one sample must be specified in the MaxStat statistic.") sample = sample.list[[1]] # Select the outcome column and remove the missing values due to dropouts/incomplete observations outcome = sample[, "outcome"] result = max(stats::na.omit(outcome)) } else if (call == TRUE) { result = list("Maximum") } return(result) } # End of MaxStatMediana/R/RatioEffectSizeEventStat.R0000644000176200001440000000262113434027610017021 0ustar 
liggesusers######################################################################################################################
# Compute the ratio of effect sizes for HR (time-to-event) based on non-missing values in the combined sample
RatioEffectSizeEventStat = function(sample.list, parameter) {

  # Determine the function call, either to generate the statistic or to return description
  call = (parameter[[1]] == "Description")

  if (call == FALSE | is.na(call)) {
    # Error checks
    if (length(sample.list) != 4) stop("Analysis model: Four samples must be specified in the RatioEffectSizeEventStat statistic.")

    if (is.na(parameter[[2]])) method = "Log-Rank"
    else {
      if (!(parameter[[2]]$method %in% c("Log-Rank", "Cox"))) stop("Analysis model: HazardRatioStat statistic: the method must be Log-Rank or Cox.")
      method = parameter[[2]]$method
    }

    result1 = EffectSizeEventStat(list(sample.list[[1]], sample.list[[2]]), parameter)
    result2 = EffectSizeEventStat(list(sample.list[[3]], sample.list[[4]]), parameter)

    # Calculate the ratio of effect sizes
    result = result1 / result2

  } else if (call == TRUE) {
    if (is.na(parameter[[2]])) result = list("Ratio of effect size (event)")
    else {
      result = list("Ratio of effect size (event)", "method = ")
    }
  }
  return(result)
}
# End of RatioEffectSizeEventStat
Mediana/R/Design.R0000644000176200001440000000433713434027610013354 0ustar liggesusers######################################################################################################################
# Function: Design.
# Argument: enroll.period, enroll.dist, enroll.dist.par, followup.period, study.duration, dropout.dist, dropout.dist.par
# Description: This function is used to create an object of class Design.
#' @export
Design = function(enroll.period = NULL, enroll.dist = NULL, enroll.dist.par = NULL, followup.period = NULL, study.duration = NULL, dropout.dist = NULL, dropout.dist.par = NULL) {

  # Error checks
  if (!is.null(enroll.period) & !is.numeric(enroll.period)) stop("Design: enrollment period must be numeric.")
  if (!is.null(enroll.dist) & !is.character(enroll.dist)) stop("Design: enrollment distribution must be character.")
  if (!is.null(enroll.dist.par) & !is.list(enroll.dist.par)) stop("Design: enrollment distribution parameters must be provided in a list.")
  if (!is.null(followup.period) & !is.numeric(followup.period)) stop("Design: follow-up period must be numeric.")
  if (!is.null(study.duration) & !is.numeric(study.duration)) stop("Design: study duration must be numeric.")
  if (!is.null(dropout.dist) & !is.character(dropout.dist)) stop("Design: dropout distribution must be character.")
  if (!is.null(dropout.dist.par) & !is.list(dropout.dist.par)) stop("Design: dropout distribution parameters must be provided in a list.")
  if (is.null(followup.period) & is.null(study.duration)) stop("Design: follow-up period or study duration must be defined")
  if (!is.null(followup.period) & !is.null(study.duration)) stop("Design: either follow-up period or study duration must be defined")
  if (is.null(enroll.dist) & !is.null(dropout.dist)) stop("Design: Dropout parameters cannot be specified without enrollment parameters.")

  design = list(enroll.period = enroll.period,
                enroll.dist = enroll.dist,
                enroll.dist.par = enroll.dist.par,
                followup.period = followup.period,
                study.duration = study.duration,
                dropout.dist = dropout.dist,
                dropout.dist.par = dropout.dist.par)
  class(design) = "Design"
  return(design)
  invisible(design)
}
Mediana/R/AnalysisModel.R0000644000176200001440000000054513434027610014704 0ustar
liggesusers###################################################################################################################### # Function: AnalysisModel. # Argument: .... # Description: This function is used to call the corresponding function according to the class of the argument. #' @export AnalysisModel = function(...) { UseMethod("AnalysisModel") }Mediana/R/z+.AnalysisModel.R0000644000176200001440000000317213434027611015227 0ustar liggesusers###################################################################################################################### # Function: +.AnalysisModel. # Argument: Two objects (AnalysisModel and another object). # Description: This function is used to add objects to the AnalysisModel object #' @export "+.AnalysisModel" = function(analysismodel, object) { if (is.null(object)) return(analysismodel) else if (class(object) == "Test"){ ntest = length(analysismodel$tests) analysismodel$tests[[ntest+1]] = unclass(object) } else if (class(object) == "Statistic"){ nstatistic = length(analysismodel$statistics) analysismodel$statistics[[nstatistic+1]] = unclass(object) } else if (class(object) == "Interim"){ analysismodel$general$interim$interim.analysis = unclass(object) } else if (class(object) == "MultAdjProc"){ nmultadj = length(analysismodel$general$mult.adjust) analysismodel$general$mult.adjust[[nmultadj + 1]] = list(unclass(object)) } else if (class(object) == "MultAdjStrategy"){ nmultadj = length(analysismodel$general$mult.adjust) analysismodel$general$mult.adjust[[nmultadj + 1]] = list(unclass(object)) } else if (class(object) == "MultAdj"){ nmultadj = length(analysismodel$general$mult.adjust) if (length(object)>1) analysismodel$general$mult.adjust = c(analysismodel$general$mult.adjust, unclass(object)) else analysismodel$general$mult.adjust[[nmultadj + 1]] = unclass(object) } else stop(paste0("Analysis Model: Impossible to add the object of class ",class(object)," to the Analysis Model")) return(analysismodel) } Mediana/R/GenerateData.default.R0000644000176200001440000000725113434027610016110 0ustar liggesusers############################################################################################################################ # Function: GenerateData # Argument: .... 
# Description: This function generates data according to the data model
#' @export
GenerateData.default = function(data.model, sim.parameters) {

  # Check the class of the data.model and sim.parameters arguments
  if (!(class(data.model) == ("DataModel"))) stop("GenerateData: a DataModel object must be specified in the data.model argument")
  if (!(class(sim.parameters) == c("SimParameters"))) stop("GenerateData: a SimParameters object must be specified in the sim.parameters argument")

  # Simulation parameters
  # Number of simulation runs
  if (is.null(sim.parameters$n.sims)) stop("GenerateData: The number of simulation runs must be provided (n.sims)")
  n.sims = sim.parameters$n.sims
  if (!is.numeric(n.sims)) stop("GenerateData: Number of simulation runs must be an integer.")
  if (length(n.sims) > 1) stop("GenerateData: Number of simulation runs: Only one value must be specified.")
  if (n.sims%%1 != 0) stop("GenerateData: Number of simulation runs must be an integer.")
  if (n.sims <= 0) stop("GenerateData: Number of simulation runs must be positive.")

  # Seed
  if (is.null(sim.parameters$seed)) stop("The seed must be provided (seed)")
  seed = sim.parameters$seed
  if (!is.numeric(seed)) stop("Seed must be an integer.")
  if (length(seed) > 1) stop("Seed: Only one value must be specified.")
  if (nchar(as.character(seed)) > 10) stop("Length of the seed must not exceed 10 digits.")

  if (!is.null(sim.parameters$proc.load)) {
    proc.load = sim.parameters$proc.load
    if (is.numeric(proc.load)) {
      if (length(proc.load) > 1) stop("Number of cores: Only one value must be specified.")
      if (proc.load%%1 != 0) stop("Number of cores must be an integer.")
      if (proc.load <= 0) stop("Number of cores must be positive.")
      n.cores = proc.load
    } else if (is.character(proc.load)) {
      n.cores = switch(proc.load,
                       low = {1},
                       med = {parallel::detectCores()/2},
                       high = {parallel::detectCores() - 1},
                       full = {parallel::detectCores()},
                       {stop("Processor load not valid")})
    }
  } else n.cores = 1

  sim.parameters = list(n.sims = n.sims, seed = seed, proc.load = n.cores)

  # Use proc.load to generate the clusters
  cluster.mediana = parallel::makeCluster(getOption("cluster.mediana.cores", sim.parameters$proc.load))

  # To make this reproducible, the same seed is used for the cluster RNG stream
  set.seed(seed)
  parallel::clusterSetRNGStream(cluster.mediana, seed)

  # Export all functions in the global environment to each node
  parallel::clusterExport(cluster.mediana, ls(envir = .GlobalEnv))
  doParallel::registerDoParallel(cluster.mediana)

  # Simulation index initialization
  sim.index = 0

  # Generate the data
  data.stack.temp = foreach::foreach(sim.index = 1:sim.parameters$n.sims, .packages = (.packages())) %dorng% {
    data = CreateDataStack(data.model = data.model, n.sims = 1)
  }

  # Stop the cluster
  parallel::stopCluster(cluster.mediana)
  # closeAllConnections()

  data.stack = list()
  data.stack$description = "data.stack"
  data.stack$data.set = lapply(data.stack.temp, function(x) x$data.set[[1]])
  data.stack$data.scenario.grid = data.stack.temp[[1]]$data.scenario.grid
  data.stack$data.structure = data.stack.temp[[1]]$data.structure
  data.stack$sim.parameters = sim.parameters

  class(data.stack) = "DataStack"
  return(data.stack)
}
Mediana/R/BroadClaimPower.R0000644000176200001440000000256613434027610015157 0ustar liggesusers############################################################################################################################
# Function: BroadClaimPower
# Argument: Test results (p-values) across multiple simulation runs (vector or matrix), statistic results,
#           criterion parameter (Type I error rate and
Influence cutoff). # Description: Compute probability of broad claim (new treatment is effective in the overall population without substantial effect in the subgroup of interest) BroadClaimPower = function(test.result, statistic.result, parameter) { # Error check if (is.null(parameter$alpha)) stop("Evaluation model: BroadClaimPower: alpha parameter must be specified.") if (is.null(parameter$cutoff_influence)) stop("Evaluation model: BroadClaimPower: cutoff_influence parameter must be specified.") if (is.null(parameter$cutoff_interaction)) stop("Evaluation model: BroadClaimPower: cutoff_interaction parameter must be specified.") alpha = parameter$alpha cutoff_influence = parameter$cutoff_influence cutoff_interaction = parameter$cutoff_interaction significant = ((test.result[,1] <= alpha & test.result[,2] <= alpha & statistic.result[,1] >= cutoff_influence & statistic.result[,2] < cutoff_interaction) | (test.result[,1] <= alpha & test.result[,2] > alpha)) power = mean(significant) return(power) } # End of BroadClaimPower Mediana/R/EventCountStat.R0000644000176200001440000000247713434027610015074 0ustar liggesusers###################################################################################################################### # Compute the number of events based on non-missing values in the combined sample EventCountStat = function(sample.list, parameter) { # Determine the function call, either to generate the statistic or to return description call = (parameter[[1]] == "Description") if (call == FALSE | is.na(call)) { # Error checks if (length(sample.list) == 0) stop("Analysis model: One sample must be specified in the EventCountStat statistic.") # Merge the samples in the sample list sample1 = do.call(rbind, sample.list) # Select the outcome column and remove the missing values due to dropouts/incomplete observations outcome1 = sample1[, "outcome"] # Remove the missing values due to dropouts/incomplete observations outcome1.complete = outcome1[stats::complete.cases(outcome1)] # Observed events in Sample 1 (negation of censoring indicators) event1 = !sample1[, "patient.censor.indicator"] event1.complete = event1[stats::complete.cases(outcome1)] # Number of events in Sample 1 result = sum(event1.complete) } else if (call == TRUE) { result = list("Number of Events") } return(result) } # End of EventCountStatMediana/R/CreateTableSampleSize.R0000644000176200001440000000444613434027610016314 0ustar liggesusers############################################################################################################################ # Function: CreateTableSampleSize . # Argument: data.strucure and label (optional). # Description: Generate a summary table of sample size for the report. 
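# Usage sketch (hedged; the labels are hypothetical): data.structure is assumed to
# be the internal data structure holding either a sample.size.set matrix (one row
# per sample size set, one column per sample) or an event.set matrix, e.g.
# sample.size.table = CreateTableSampleSize(data.structure,
#                                           label = list("Low", "Medium", "High"))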
CreateTableSampleSize = function(data.structure, label = NULL) { # Number of sample ID n.id <- length(data.structure$id) id.label = c(unlist(lapply(lapply(data.structure$id, unlist), paste0, collapse = ", "))) if (!any(is.na(data.structure$sample.size.set))){ # Number of sample size n.sample.size = nrow(data.structure$sample.size.set) # Label if (is.null(label)) label = paste0("Sample size ", 1:n.sample.size) else label = unlist(label) if (length(label) != n.sample.size) stop("Summary: Number of the sample size labels must be equal to the number of sample size sets.") # Summary table sample.size.table <- matrix(nrow = n.id*n.sample.size, ncol = 4) ind <-1 for (i in 1:n.sample.size) { for (j in 1:n.id) { sample.size.table[ind, 1] = i sample.size.table[ind, 2] = label[i] sample.size.table[ind, 3] = id.label[j] sample.size.table[ind, 4] = data.structure$sample.size.set[i,j] ind <- ind+1 } } sample.size.table = as.data.frame(sample.size.table) colnames(sample.size.table) = c("sample.size","Sample size set", "Sample", "Size") } else if (!any(is.na(data.structure$event.set))){ # Number of sample size n.events = nrow(data.structure$event.set) # Label if (is.null(label)) label = paste0("Event ", 1:n.events) else label = unlist(label) if (length(label) != n.events) stop("Summary: Number of the events labels must be equal to the number of events sets.") # Summary table sample.size.table <- matrix(nrow = n.events, ncol = 3) ind <-1 for (i in 1:n.events) { sample.size.table[i, 1] = i sample.size.table[i, 2] = label[i] sample.size.table[i, 3] = data.structure$event[i,1] } sample.size.table = as.data.frame(sample.size.table) colnames(sample.size.table) = c("sample.size","Event set", "Total number of events") } return(sample.size.table) } # End of CreateTableSampleSizeMediana/R/DataModel.SampleSize.R0000644000176200001440000000110213434027610016033 0ustar liggesusers###################################################################################################################### # Function: DataModel.SampleSize # Argument: SampleSize object. # Description: This function is called by default if the class of the argument is an SampleSize object. #' @export DataModel.SampleSize = function(sample.size, ...) { datamodel = DataModel() datamodel = datamodel + sample.size args = list(...) if (length(args)>0) { for (i in 1:length(args)){ datamodel = datamodel + args[[i]] } } return(datamodel) }Mediana/R/NormalParamDist.R0000644000176200001440000000120613434027610015170 0ustar liggesusers###################################################################################################################### # Function: NormalParamDist. # Argument: c, Common critical value # w, Vector of hypothesis weights (1 x m) # corr, Correlation matrix (m x m) # Description: Multivariate normal distribution function used in the parametric multiple testing procedure based on a multivariate normal distribution NormalParamDist = function(c, w, corr) { m = dim(corr)[1] prob = mvtnorm::pmvnorm(lower = rep(-Inf, m), upper = c/(w * m), mean=rep(0, m), corr = corr) return(1 - prob[1]) } # End of NormalParamDistMediana/R/summary.CSE.R0000644000176200001440000000041113434027611014237 0ustar liggesusers############################################################################################################################ # Function: summary.CSE # Argument: x, a CSE object. #' @export summary.CSE = function(object,...) 
{
  object$simulation.results
}
Mediana/R/HolmAdj.R0000644000176200001440000000312213434027610013450 0ustar liggesusers######################################################################################################################
# Function: HolmAdj.
# Argument: p, Vector of p-values (1 x m)
#           par, List of procedure parameters: vector of hypothesis weights (1 x m)
# Description: Holm multiple testing procedure.

HolmAdj = function(p, par) {

  # Determine the function call, either to generate the p-value or to return description
  call = (par[[1]] == "Description")

  # Number of p-values
  m = length(p)

  # Extract the vector of hypothesis weights (1 x m)
  if (!any(is.na(par[[2]]))) {
    if (is.null(par[[2]]$weight)) stop("Analysis model: Holm procedure: Hypothesis weights must be specified.")
    w = par[[2]]$weight
  } else {
    w = rep(1/m, m)
  }

  if (any(call == FALSE) | any(is.na(call))) {

    # Error checks
    if (length(w) != m) stop("Analysis model: Holm procedure: Length of the weight vector must be equal to the number of hypotheses.")
    if (sum(w) != 1) stop("Analysis model: Holm procedure: Hypothesis weights must add up to 1.")
    if (any(w < 0)) stop("Analysis model: Holm procedure: Hypothesis weights must be non-negative.")

    # Indices of the ordered weighted p-values
    ind = order(p/w)

    # Adjusted p-values
    adjpvalue = pmin(1, cummax(c(1 - cumsum(c(0, w[ind])))[1:m] * p[ind]/w[ind]), na.rm = TRUE)[order(ind)]
    result = adjpvalue

  } else if (call == TRUE) {
    weight = paste0("Weight={", paste(round(w, 2), collapse = ","), "}")
    result = list(list("Holm procedure"), list(weight))
  }

  return(result)
}
# End of HolmAdj
Mediana/R/Test.R0000644000176200001440000000204313434027610013052 0ustar liggesusers######################################################################################################################
# Function: Test.
# Argument: Test ID, Statistical method, Samples and Parameters.
# Description: This function is used to create an object of class Test.
#' @export
Test = function(id, method, samples, par = NULL) {

  # Error checks
  if (!is.character(id)) stop("Test: ID must be character.")
  if (!is.character(method)) stop("Test: statistical method must be character.")
  if (!is.list(samples)) stop("Test: samples must be wrapped in a list.")
  if (all(lapply(samples, is.list) == FALSE) & any(lapply(samples, is.character) == FALSE)) stop("Test: samples must be character.")
  if (all(lapply(samples, is.list) == TRUE) & (!is.character(unlist(samples)))) stop("Test: samples must be character.")
  if (!is.null(par) & !is.list(par)) stop("Test: par must be wrapped in a list.")

  test = list(id = id, method = method, samples = samples, par = par)
  class(test) = "Test"
  return(test)
  invisible(test)
}
Mediana/R/Event.R0000644000176200001440000000211213434027610013211 0ustar liggesusers######################################################################################################################
# Function: Event.
# Argument: A list or vector of numeric.
# Description: This function is used to create an object of class Event.
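# A minimal usage sketch (illustrative values): an event-driven design targeting
# 300 events with 2:1 randomization between two samples:
#   Event(n.events = 300, rando.ratio = c(2, 1))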
#' @export
Event = function(n.events, rando.ratio = NULL) {

  # Error checks
  if (any(!is.numeric(unlist(n.events)))) stop("Event: number of events must be numeric.")
  if (any(unlist(n.events) %% 1 != 0)) stop("Event: number of events must be integer.")
  if (any(unlist(n.events) <= 0)) stop("Event: number of events must be strictly positive.")

  if (!is.null(rando.ratio)){
    if (any(!is.numeric(unlist(rando.ratio)))) stop("Event: randomization ratio must be numeric.")
    if (any(unlist(rando.ratio) %% 1 != 0)) stop("Event: randomization ratio must be integer.")
    if (any(unlist(rando.ratio) <= 0)) stop("Event: randomization ratio must be strictly positive.")
  }

  event = list(n.events = unlist(n.events), rando.ratio = unlist(rando.ratio))
  class(event) = "Event"
  return(event)
  invisible(event)
}
Mediana/R/DataModel.R0000644000176200001440000000053113434027610013765 0ustar liggesusers######################################################################################################################
# Function: DataModel.
# Argument: ....
# Description: This function is used to call the corresponding function according to the class of the argument.
#' @export
DataModel = function(...) {
  UseMethod("DataModel")
}
Mediana/R/PropStat.R0000644000176200001440000000161413434027610013712 0ustar liggesusers######################################################################################################################
# Compute the proportion based on non-missing values in the combined sample

PropStat = function(sample.list, parameter) {

  # Determine the function call, either to generate the statistic or to return description
  call = (parameter[[1]] == "Description")

  if (call == FALSE | is.na(call)) {

    # Error checks
    if (length(sample.list) != 1) stop("Analysis model: Only one sample must be specified in the PropStat statistic.")

    sample = sample.list[[1]]

    # Select the outcome column and remove the missing values due to dropouts/incomplete observations
    outcome = sample[, "outcome"]
    result = mean(stats::na.omit(outcome))

  } else if (call == TRUE) {
    result = list("Proportion")
  }

  return(result)
}
# End of PropStat
Mediana/R/DiffMeanStat.R0000644000176200001440000000227313434027610014445 0ustar liggesusers######################################################################################################################
# Compute the difference of means between two samples for continuous variable based on non-missing values in the combined sample

DiffMeanStat = function(sample.list, parameter) {

  # Determine the function call, either to generate the statistic or to return description
  call = (parameter[[1]] == "Description")

  if (call == FALSE | is.na(call)) {

    # Error checks
    if (length(sample.list) != 2) stop("Analysis model: Two samples must be specified in the DiffMeanStat statistic.")

    # Select the first sample in the sample list
    sample1 = sample.list[[1]]

    # Select the second sample in the sample list
    sample2 = sample.list[[2]]

    # Select the outcome columns and remove the missing values due to dropouts/incomplete observations
    outcome1 = sample1[, "outcome"]
    outcome2 = sample2[, "outcome"]
    mean1 = mean(stats::na.omit(outcome1))
    mean2 = mean(stats::na.omit(outcome2))
    result = (mean1 - mean2)

  } else if (call == TRUE) {
    result = list("Difference of means")
  }

  return(result)
}
# End of DiffMeanStat
Mediana/R/PerformAnalysis.R0000644000176200001440000005405313434027610015261 0ustar liggesusers##############################################################################################################################################
# Function: PerformAnalysis.
# Argument: Data model (or data stack) and analysis model.
# Description: This function carries out the statistical tests and computes the descriptive statistics specified in
# the analysis model using the specified data model (or data stack generated by the user).
# The number of simulations (n.sims) is used only if a data model is specified. If a data stack is specified,
# the number of simulations is obtained from this data stack.
#' @import doRNG
#' @import doParallel
#' @import foreach
PerformAnalysis = function(data, analysis.model, sim.parameters) {

  # Check if a data stack was specified and, if a data model is specified, call the CreateDataStructure function
  if (class(data) == "DataStack") {
    if (data$description == "data.stack") {
      call.CreateDataStructure = FALSE
      data.stack = data
      n.sims = data.stack$sim.parameters$n.sims
      seed = data.stack$sim.parameters$seed
      data.structure = data.stack$data.structure
    } else {
      stop("The data object is not recognized.")
    }
  } else {
    call.CreateDataStructure = TRUE
    data.model = data
    # Create a dummy data.stack
    data.stack = CreateDataStack(data.model, 1)
    data.structure = CreateDataStructure(data.model)
  }

  # Simulation parameters
  # Number of simulation runs
  if (is.null(sim.parameters$n.sims)) stop("The number of simulation runs must be provided (n.sims)")

  if (call.CreateDataStructure == TRUE){
    n.sims = sim.parameters$n.sims
  } else {
    # The number of simulation runs was already extracted from the data stack above
    warning("The number of simulation runs from the sim.parameters was ignored as a data stack was defined.")
  }

  if (!is.numeric(n.sims)) stop("Number of simulation runs must be an integer.")
  if (length(n.sims) > 1) stop("Number of simulation runs: Only one value must be specified.")
  if (n.sims %% 1 != 0) stop("Number of simulation runs must be an integer.")
  if (n.sims <= 0) stop("Number of simulation runs must be positive.")

  # Seed
  if (is.null(sim.parameters$seed)) stop("The seed must be provided (seed)")

  if (call.CreateDataStructure == TRUE){
    seed = sim.parameters$seed
  } else {
    # The seed was already extracted from the data stack above
    warning("The seed from the sim.parameters was ignored as a data stack was defined.")
  }

  if (!is.numeric(seed)) stop("Seed must be an integer.")
  if (length(seed) > 1) stop("Seed: Only one value must be specified.")
  if (nchar(as.character(seed)) > 10) stop("Seed must not exceed 10 digits.")

  if (!is.null(sim.parameters$proc.load)){
    proc.load = sim.parameters$proc.load
    if (is.numeric(proc.load)){
      if (length(proc.load) > 1) stop("Number of cores: Only one value must be specified.")
      if (proc.load %% 1 != 0) stop("Number of cores must be an integer.")
      if (proc.load <= 0) stop("Number of cores must be positive.")
      n.cores = proc.load
    } else if (is.character(proc.load)){
      n.cores = switch(proc.load,
                       low = {1},
                       med = {parallel::detectCores()/2},
                       high = {parallel::detectCores()-1},
                       full = {parallel::detectCores()},
                       {stop("Processor load not valid")})
    }
  } else n.cores = 1

  sim.parameters = list(n.sims = n.sims, seed = seed, proc.load = n.cores)

  # Perform error checks for the analysis model and create an internal analysis structure
  analysis.structure = CreateAnalysisStructure(analysis.model)

  # Check if the samples referenced in the analysis model are actually specified in the data model
  # List of sample IDs
  sample.id = unlist(data.structure$id)

  # Number of tests specified in the analysis model
  n.tests = length(analysis.structure$test)

  if (n.tests > 0) {
    # Test IDs
    test.id = rep(" ", n.tests)
    for (test.index in 1:n.tests) {
test.id[test.index] = analysis.structure$test[[test.index]]$id # Number of samples in the current test n.test.samples = length(analysis.structure$test[[test.index]]$samples) # When processing samples specified for individual tests, it is important to remember that # a hierarchical structure can be used, i.e., samples can first be merged and then passed to a specific test for (i in 1:n.test.samples) { # Number of subsamples in the current sample n.subsamples = length(analysis.structure$test[[test.index]]$samples[[i]]) if (n.subsamples == 1) { if (!(analysis.structure$test[[test.index]]$samples[[i]] %in% sample.id)) stop(paste0("Analysis model: Sample '", analysis.structure$test[[test.index]]$samples[[i]], "' is not defined in the data model.")) } else { # Multiple subsamples for (j in 1:n.subsamples) { if (!(analysis.structure$test[[test.index]]$samples[[i]][[j]] %in% sample.id)) stop(paste0("Analysis model: Sample '", analysis.structure$test[[test.index]]$samples[[i]][[j]], "' is not defined in the data model.")) } } } } } # Number of statistics specified in the analysis model n.statistics = length(analysis.structure$statistic) # Statistic IDs statistic.id = rep(" ", n.statistics) if (n.statistics > 0) { for (statistic.index in 1:n.statistics) { statistic.id[statistic.index] = analysis.structure$statistic[[statistic.index]]$id # Number of samples in the current statistic n.statistic.samples = length(analysis.structure$statistic[[statistic.index]]$samples) for (i in 1:n.statistic.samples) { # Number of subsamples in the current sample n.subsamples = length(analysis.structure$statistic[[statistic.index]]$samples[[i]]) if (n.subsamples == 1) { if (!(analysis.structure$statistic[[statistic.index]]$samples[[i]] %in% sample.id)) stop(paste0("Analysis model: Sample '", analysis.structure$statistic[[statistic.index]]$samples[[i]], "' is not defined in the data model.")) } else { # Multiple subsamples for (j in 1:n.subsamples) { if (!(analysis.structure$statistic[[statistic.index]]$samples[[i]][[j]] %in% sample.id)) stop(paste0("Analysis model: Sample '", analysis.structure$statistic[[statistic.index]]$samples[[i]][[j]], "' is not defined in the data model.")) } } } } } # Information on the analysis scenario factors # Number of multiplicity adjustment sets if (!is.null(analysis.structure$mult.adjust)) { n.mult.adjust = length(analysis.structure$mult.adjust) } else { n.mult.adjust = 1 } # Number of analysis points (total number of interim and final analyses) if (!is.null(analysis.structure$interim.analysis)) { n.analysis.points = length(analysis.structure$interim.analysis$interim.looks$fraction) } else { # No interim analyses n.analysis.points = 1 } # Create the analysis stack (list of the analysis sets produced by the test and statistic functions, # each element in this list contains the results generated in a single simulation run) analysis.set = list() # Number of data scenarios n.data.scenarios = dim(data.stack$data.scenario.grid)[1] data.scenario.grid = data.stack$data.scenario.grid # Create a grid of the data and analysis scenario factors (outcome parameter, sample size, # design parameter, multiplicity adjustment) scenario.grid = matrix(0, n.data.scenarios * n.mult.adjust, 2) index = 1 for (i in 1:n.data.scenarios) { for (j in 1:n.mult.adjust) { scenario.grid[index, 1] = i scenario.grid[index, 2] = j index = index + 1 } } # Number of data and analysis scenarios n.scenarios = dim(scenario.grid)[1] # Number of analysis samples in each data scenario n.analysis.samples = 
length(data.stack$data.set[[1]]$data.scenario[[1]]$sample) # Simulation parameters # Use proc.load to generate the clusters cluster.mediana = parallel::makeCluster(getOption("cluster.mediana.cores", sim.parameters$proc.load)) # To make this reproducible I used the same number as the seed set.seed(seed) parallel::clusterSetRNGStream(cluster.mediana, seed) #Export all functions in the global environment to each node parallel::clusterExport(cluster.mediana,ls(envir=.GlobalEnv)) doParallel::registerDoParallel(cluster.mediana) # Simulation index initialisation sim.index=0 # Loop over simulation runs result.analysis.scenario=foreach::foreach(sim.index=1:sim.parameters$n.sims, .packages=(.packages())) %dorng% { # Select the current data set within the data stack if (!call.CreateDataStructure){ current.data.set = data.stack$data.set[[sim.index]] } else { current.data.stack = CreateDataStack(data.model, 1) current.data.set = current.data.stack$data.set[[1]] } # Matrix of results (p-values) produced by the tests test.results = matrix(0, n.tests, n.analysis.points) # Matrix of results produced by the statistics statistic.results = matrix(0, n.statistics, n.analysis.points) # Create the analysis scenario list (one element for each unique combination of the data scenario factors) result.data.scenario = list() # Loop over the data scenario factors (outcome parameter, sample size, and design parameter) for (scenario.index in 1:n.data.scenarios) { # Current data scenario current.data.scenario = current.data.set$data.scenario[[scenario.index]] # Current sample size set current.sample.size.set = data.stack$data.scenario.grid[scenario.index, "sample.size"] # Vector of sample sizes across the data samples in the current sample size set if (!any(is.na(data.stack$data.structure$sample.size.set))) current.sample.sizes = data.stack$data.structure$sample.size.set[current.sample.size.set, ] # Loop over interim analyses for (analysis.point.index in 1:n.analysis.points) { # Create a data slice for the current interim look if interim analyses are specified in the analysis model if (!is.null(analysis.structure$interim.analysis)) { sample.list = analysis.structure$interim.analysis$interim.looks$sample parameter = analysis.structure$interim.analysis$interim.looks$parameter fraction = analysis.structure$interim.analysis$interim.looks$fraction[[analysis.point.index]] # Compute the total sample size in the sample list n.sample.list = length(sample.list) total.sample.size = 0 # Number of samples n.samples = length(sample.id) for (k in 1:n.samples) { for (l in 1:n.sample.list) { if(sample.list[[l]] == sample.id[k]) total.sample.size = total.sample.size + current.sample.sizes[k] } } data.slice = CreateDataSlice(current.data.scenario, sample.list, parameter, round(total.sample.size * fraction)) } else { # No interim analyses are specified in the analysis model -- simply use the current data scenario data.slice = current.data.scenario } # Loop over the tests specified in the analysis model to compute statistic results # if tests are specified in the analysis model if (n.tests > 0) { # Loop over the tests specified in the analysis model to compute test results (p-values) for (test.index in 1:n.tests) { # Current test current.test = analysis.structure$test[[test.index]] # Number of analysis samples specified in the current test n.samples = length(current.test$samples) # Extract the data frames for the analysis samples specified in the current test sample.list = list() # Extract the data frames for the analysis samples specified in 
the current test for (sample.index in 1:n.samples) { # Number of subsamples within the current analysis sample n.subsamples = length(current.test$samples[[sample.index]]) if (n.subsamples == 1) { # If there is only one subsamples, no merging is required, simply select the right analysis sample sample.flag.num = match(current.test$samples[[sample.index]],sample.id) sample.list[[sample.index]] = data.slice$sample[[sample.flag.num]]$data } else { # If there are two or more subsamples, these subsamples must be merged first to create analysis samples # that are passed to the statistic function subsample.flag.num = match(current.test$samples[[sample.index]],sample.id) selected.subsamples = lapply(as.list(subsample.flag.num), function(x) data.slice$sample[[x]]$data) # Merge the subsamples sample.list[[sample.index]] = do.call(rbind, selected.subsamples) } } # Compute the test results (p-values) by calling the function for the current test with the test parameters test.results[test.index, analysis.point.index] = do.call(current.test$method, list(sample.list, list("PerformAnalysis",current.test$par))) } # End of the loop over the tests } # End of the if n.tests>0 # Loop over the statistics specified in the analysis model to compute statistic results # if statistics are specified in the analysis model if (n.statistics > 0) { for (statistic.index in 1:n.statistics) { # Current statistic current.statistic = analysis.structure$statistic[[statistic.index]] # Number of analysis samples specified in the current statistic n.samples = length(current.statistic$samples) # Extract the data frames for the analysis samples specified in the current statistic sample.list = list() for (sample.index in 1:n.samples) { # Number of subsamples within the current analysis sample n.subsamples = length(current.statistic$samples[[sample.index]]) if (n.subsamples == 1) { # If there is only one subsamples, no merging is required, simply select the right analysis sample sample.flag.num = match(current.statistic$samples[[sample.index]],sample.id) sample.list[[sample.index]] = data.slice$sample[[sample.flag.num]]$data } else { # If there are two or more subsamples, these subsamples must be merged first to create analysis samples # that are passed to the statistic function subsample.flag.num = match(current.statistic$samples[[sample.index]],sample.id) selected.subsamples = lapply(as.list(subsample.flag.num), function(x) data.slice$sample[[x]]$data) # Merge the subsamples sample.list[[sample.index]] = do.call(rbind, selected.subsamples) } } # Compute the statistic results by calling the function for the current statistic with the statistic parameters statistic.results[statistic.index, analysis.point.index] = do.call(current.statistic$method, list(sample.list, list("PerformAnalysis",current.statistic$par))) } # End of the loop over the statistics } # End of the if n.statistics>0 } # Loop over interim analyses # Assign test names if (n.tests > 0) { test.results = as.data.frame(test.results) rownames(test.results) = test.id if (n.analysis.points == 1) { colnames(test.results) = "Analysis" } else { names = rep("", n.analysis.points) for (j in 1:n.analysis.points) names[j] = paste0("Analysis ", j) colnames(test.results) = names } } else { # No tests are specified in the analysis model test.results = NA } # Assign statistic names if (n.statistics > 0) { statistic.results = as.data.frame(statistic.results) rownames(statistic.results) = statistic.id if (n.analysis.points == 1) { colnames(statistic.results) = "Analysis" } else { names = 
rep("", n.analysis.points) for (j in 1:n.analysis.points) names[j] = paste0("Analysis ", j) colnames(statistic.results) = names } } else { # No statistics are specified in the analysis model statistic.results = NA } result = list(tests = test.results, statistic = statistic.results) result.data.scenario[[scenario.index]] = list(result = result) } # Loop over the data scenario factors # Loop for each data scenario for (data.scenario.index in 1:n.data.scenarios) { # If at least one multiplicity adjustment has been specified loop over the analysis scenario factors (multiplicity adjustment) if (!is.null(analysis.structure$mult.adjust)) { # Create the analysis scenario list (one element for each unique combination of the data and analysis scenario factors) result.data.scenario[[data.scenario.index]]$result$tests.adjust = list() # Loop for each analysis.scenarios for (scenario.index in 1:n.mult.adjust) { # Matrix of results (p-values) produced by the tests test.results.adj = matrix(0, n.tests, n.analysis.points) # Get the current multiplicity adjustment procedure current.mult.adjust = analysis.structure$mult.adjust[[scenario.index]] # Get the unadjusted pvalues for the current data scenarios current.pvalues = result.data.scenario[[data.scenario.index]]$result$tests # Loop for each analysis point for (analysis.point.index in 1:n.analysis.points) { # Number of multiplicity adjustment procedure within the multiplicity adjustment scenarios n.mult.adjust.sc = length(current.mult.adjust) # Loop for each multiplicity adjustment procedure within the multiplicity adjustment scenarios for (mult.adjust.sc in 1:n.mult.adjust.sc) { # Apply the multiple testing procedure specified in the current multiplicity adjustment set # to the tests specified in this set # Extract the p-values for the tests specified in the current multiplicity adjustment set pvalues.flag.num = match(current.mult.adjust[[mult.adjust.sc]]$tests, test.id) selected.pvalues = current.pvalues[pvalues.flag.num, analysis.point.index] if (!is.na(current.mult.adjust[[mult.adjust.sc]]$proc)) { test.results.adj[pvalues.flag.num, analysis.point.index] = do.call(current.mult.adjust[[mult.adjust.sc]]$proc, list(selected.pvalues, list("Analysis", current.mult.adjust[[mult.adjust.sc]]$par))) } else { # If no multiplicity procedure is defined, there is no adjustment test.results.adj[pvalues.flag.num, analysis.point.index] = selected.pvalues } } # End Loop for each multiplicity adjustment procedure within the multiplicity adjustment scenario } # End Loop for each analysis point # Assign test names if (n.tests > 0) { test.results.adj = as.data.frame(test.results.adj) rownames(test.results.adj) = test.id if (n.analysis.points == 1) { colnames(test.results.adj) = "Analysis" } else { names = rep("", n.analysis.points) for (j in 1:n.analysis.points) names[j] = paste0("Analysis ", j) colnames(test.results.adj) = names } } else { # No tests are specified in the analysis model test.results.adj = NA } result.data.scenario[[data.scenario.index]]$result$tests.adjust$analysis.scenario[[scenario.index]] = test.results.adj } # End Loop for each analysis.scenarios } # End if analysis.structure else { result.data.scenario[[data.scenario.index]]$result$tests.adjust$analysis.scenario[[1]] = result.data.scenario[[data.scenario.index]]$result$tests } } # End loop for each data scenario result.analysis.scenario = result.data.scenario return(result.analysis.scenario) } # End of the loop over the simulations # Stop the cluster parallel::stopCluster(cluster.mediana) 
#closeAllConnections() # Define the analysis scenario grid (unique combinations of the data and analysis scenario factors) analysis.scenario.grid = as.data.frame(matrix(0, n.data.scenarios * n.mult.adjust, 4)) d = data.stack$data.scenario.grid analysis.scenario.grid[, 1:3] = d[scenario.grid[,1], ] analysis.scenario.grid[, 4] = scenario.grid[,2] colnames(analysis.scenario.grid) = c("design.parameter", "outcome.parameter", "sample.size", "multiplicity.adjustment") # Create the analysis stack analysis.stack = list(description = "analysis.stack", analysis.set = result.analysis.scenario, analysis.scenario.grid = analysis.scenario.grid, data.structure = data.structure, analysis.structure = analysis.structure, sim.parameters = sim.parameters) class(analysis.stack) = "AnalysisStack" return(analysis.stack) } # End of PerformAnalysis Mediana/R/StepDownDunnettAdj.CI.R0000644000176200001440000000442013434027610016152 0ustar liggesusers###################################################################################################################### # Function: StepDownDunnettAdj.CI # Argument: p, Vector of p-values (1 x m) # par, List of procedure parameters: vector of hypothesis weights (1 x m) # Description: Dunnett multiple testing procedure. StepDownDunnettAdj.CI = function(est, par) { # Number of point estimate m = length(est) # Extract the sample size if (is.null(par[[2]]$n)) stop("Step-down Dunnett procedure: Sample size must be specified (n).") n = par[[2]]$n # Extract the standard deviation if (is.null(par[[2]]$sd)) stop("Step-down Dunnett procedure: Standard deviation must be specified (sd).") sd = par[[2]]$sd # Extract the simultaneous coverage probability if (is.null(par[[2]]$covprob)) stop("Step-down Dunnett procedure: Coverage probability must be specified (covprob).") covprob = par[[2]]$covprob # Error checks if (m != length(est)) stop("Step-down Dunnett procedure: Length of the point estimate vector must be equal to the number of hypotheses.") if (m != length(sd)) stop("Step-down Dunnett procedure: Length of the standard deviation vector must be equal to the number of hypotheses.") if (covprob>=1 | covprob<=0) stop("Step-down Dunnett procedure: simultaneous coverage probability must be >0 and <1") # Standard errors stderror = sd*sqrt(2/n) # T-statistics associated with each test stat = est/stderror # Compute degrees of freedom of the test statistic nu = 2*(n-1) # Compute raw one-sided p-values rawp = 1-stats::pt(stat,nu) # Compute the adjusted p-values adjustpval = StepDownDunnettAdj(rawp, list("Analysis", list(n = n))) # Alpha alpha = 1-covprob # Compute the degree of freedom for the Step-down Dunnett procedure nu_dunnett = (m+1)*(n-1) ci = rep(0,m) rejected = (adjustpval <= alpha) if (all(rejected)){ # All null hypotheses are rejected # Critical value critical_value = stats::qt(1-alpha, nu_dunnett) ci <- pmax(0, est-critical_value*stderror) } else { critical_value = qdunnett(1-alpha,nu, m-sum(rejected)) ci[!rejected] = est[!rejected] - critical_value*stderror[!rejected] } return(ci) } # End of StepDownDunnettAdj.CI Mediana/R/argmin.R0000644000176200001440000000165613434027610013421 0ustar liggesusers###################################################################################################################### # Function: argmin. # Argument: p, Vector of p-values (1 x m) # w, Vector of hypothesis weights (1 x m) # processed, Vector of binary indicators (1 x m) [1 if processed and 0 otherwise]. # Description: Hidden function used in the Chain function. 
# Find the index of the smallest weighted p-value among the non-processed null hypotheses with a positive weight
# (index = 0 if the smallest weighted p-value does not exist) in a chain procedure

argmin = function(p, w, processed) {

  index = 0
  m = length(p)

  for (i in 1:m) {
    if (w[i] > 0 & processed[i] == 0) {
      if (index == 0) {
        pmin = p[i]/w[i]
        index = i
      }
      if (index > 0 & p[i]/w[i] < pmin) {
        pmin = p[i]/w[i]
        index = i
      }
    }
  }
  return(index)
}
# End of argmin
Mediana/R/MultAdj.MultAdjStrategy.R0000644000176200001440000000101013434027610016556 0ustar liggesusers######################################################################################################################
# Function: MultAdj.MultAdjStrategy.
# Argument: MultAdjStrategy object
# Description: This function is used to create an object of class MultAdjStrategy.
#' @export
MultAdj.MultAdjStrategy = function(...) {
  multadj = lapply(list(...), function(x) {if (class(x) == "MultAdjProc") list(unclass(x)) else unclass(x)})
  class(multadj) = "MultAdj"
  return(multadj)
  invisible(multadj)
}
Mediana/R/BetaDist.R0000644000176200001440000000312313463627503013643 0ustar liggesusers######################################################################################################################
# Function: BetaDist.
# Argument: List of parameters (number of observations, a, b).
# Description: This function is used to generate beta distributed outcomes.

BetaDist = function(parameter) {

  # Error checks
  if (missing(parameter)) stop("Data model: BetaDist distribution: List of parameters must be provided.")
  if (is.null(parameter[[2]]$a)) stop("Data model: BetaDist distribution: Parameter a must be specified.")
  if (is.null(parameter[[2]]$b)) stop("Data model: BetaDist distribution: Parameter b must be specified.")

  a = parameter[[2]]$a
  b = parameter[[2]]$b
  if (a <= 0 | b <= 0) stop("Data model: BetaDist distribution: Parameters a and b must be positive.")

  # Determine the function call, either to generate distribution or to return description
  call = (parameter[[1]] == "description")

  # Generate random variables
  if (call == FALSE) {
    # Error checks
    n = parameter[[1]]
    if (n %% 1 != 0) stop("Data model: BetaDist distribution: Number of observations must be an integer.")
    if (n <= 0) stop("Data model: BetaDist distribution: Number of observations must be positive.")
    result = stats::rbeta(n = n, shape1 = a, shape2 = b)
  } else {
    # Provide information about the distribution function
    if (call == TRUE) {
      # Labels of distributional parameters
      result = list(list(a = "a", b = "b"), list("Beta"))
    }
  }
  return(result)
}
# End of BetaDist
Mediana/R/CreateSummaryTable.R0000644000176200001440000000212713434027610015667 0ustar liggesusers######################################################################################################################
# Function: CreateSummaryTable.
# Argument: Results returned by the CSE function.
# Description: This function is used to create a summary table with all results

CreateSummaryTable = function(evaluation.result){

  nscenario = length(evaluation.result)
  table.evaluation.result = list()

  for (i in 1:nscenario){
    ncriterion = length(evaluation.result[[i]]$criterion)
    table.list = list()
    for (j in 1:ncriterion){
      scenario = i
      id = evaluation.result[[i]]$criterion[[j]]$id
      res = format(round(evaluation.result[[i]]$criterion[[j]]$result, digits = 4), digits = 4, nsmall = 4)
      rownames(res) = colnames(res) = NULL
      test = rownames(evaluation.result[[i]]$criterion[[j]]$result)
      table.list[[j]] = data.frame(scenario = scenario, id = rep(id, nrow(res)), test = test, result = res)
    }
    table.evaluation.result[[i]] = do.call(rbind, table.list)
  }
  return(do.call(rbind, table.evaluation.result))
}
# End of CreateSummaryTable
Mediana/R/is.PresentationModel.R0000644000176200001440000000052213434027611016202 0ustar liggesusers######################################################################################################################
# Function: is.PresentationModel.
# Argument: an object.
# Description: Return if the object is of class PresentationModel

is.PresentationModel = function(arg){
  return(any(class(arg) == "PresentationModel"))
}
Mediana/R/HolmAdj.global.R0000644000176200001440000000207213434027610014712 0ustar liggesusers######################################################################################################################
# Function: HolmAdj.global.
# Argument: p, Vector of p-values (1 x m)
#           n, Total number of testable hypotheses (in the case of modified mixture procedure) (1 x 1)
#           gamma, Vector of truncation parameter (1 x 1)
# Description: Compute global p-value for the truncated Holm multiple testing procedure. The function returns the global adjusted pvalue (1 x 1)

HolmAdj.global = function(p, n, gamma) {

  # Number of p-values
  k = length(p)

  if (k > 0 & n > 0) {
    if (gamma == 0) {
      # Bonferroni procedure
      adjp = n * min(p)
    } else if (gamma <= 1) {
      # Truncated Holm procedure
      # Indices of the ordered p-values
      ind = order(p)
      # Denominator (1 x m)
      seq = seq_vector(k)
      denom = gamma/(k - seq + 1) + (1 - gamma)/n
      # Adjusted p-values
      sortp = sort(p)
      adjp = min(cummax(sortp/denom)[order(ind)])
    }
  } else adjp = 1

  return(adjp)
}
# End of HolmAdj.global
Mediana/R/Statistic.R0000644000176200001440000000170213434027610014103 0ustar liggesusers######################################################################################################################
# Function: Statistic.
# Argument: Statistic ID, Statistical method, Samples and Parameters.
# Description: This function is used to create an object of class Statistic.
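# A minimal usage sketch (the sample ID is hypothetical and must match a sample
# defined in the data model):
#   Statistic(id = "Median Treatment", method = "MedianStat", samples = list("Treatment"))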
#' @export
Statistic = function(id, method, samples, par = NULL) {

  # Error checks
  if (!is.character(id)) stop("Statistic: ID must be character.")
  if (!is.character(method)) stop("Statistic: statistical method must be character.")
  if (!is.list(samples)) stop("Statistic: samples must be wrapped in a list.")
  if (any(lapply(samples, is.character) == FALSE)) stop("Statistic: samples must be character.")
  if (!is.null(par) & !is.list(par)) stop("Statistic: par must be wrapped in a list.")

  statistic = list(id = id, method = method, samples = samples, par = par)
  class(statistic) = "Statistic"
  return(statistic)
  invisible(statistic)
}
Mediana/R/DataModel.default.R0000644000176200001440000000201413434027610015410 0ustar liggesusers######################################################################################################################
# Function: DataModel.default
# Argument: Multiple character strings.
# Description: This function is called by default if the class of the argument is neither an Outcome,
# nor a SampleSize object.
#' @export
DataModel.default = function(...) {

  args = list(...)
  if (length(args) > 0) {
    stop("Data Model doesn't know how to deal with the parameters")
  } else {
    datamodel = structure(
      list(general = list(outcome.dist = NULL,
                          outcome.type = NULL,
                          sample.size = NULL,
                          event = NULL,
                          rando.ratio = NULL,
                          design = NULL),
           samples = NULL),
      class = "DataModel")
  }
  return(datamodel)
}
Mediana/R/MedianStat.R0000644000176200001440000000162013434027610014164 0ustar liggesusers######################################################################################################################
# Compute the median based on non-missing values in the combined sample

MedianStat = function(sample.list, parameter) {

  # Determine the function call, either to generate the statistic or to return description
  call = (parameter[[1]] == "Description")

  if (call == FALSE | is.na(call)) {

    # Error checks
    if (length(sample.list) != 1) stop("Analysis model: Only one sample must be specified in the MedianStat statistic.")

    sample = sample.list[[1]]

    # Select the outcome column and remove the missing values due to dropouts/incomplete observations
    outcome = sample[, "outcome"]
    result = stats::median(stats::na.omit(outcome))

  } else if (call == TRUE) {
    result = list("Median")
  }

  return(result)
}
# End of MedianStat
Mediana/R/MultAdjStrategy.R0000644000176200001440000000055313434027610015222 0ustar liggesusers######################################################################################################################
# Function: MultAdjStrategy.
# Argument: ....
# Description: This function is used to call the corresponding function according to the class of the argument.
#' @export
MultAdjStrategy = function(...) {
  UseMethod("MultAdjStrategy")
}
Mediana/R/AdjustCIs.R0000644000176200001440000000164113434027610013767 0ustar liggesusers##############################################################################################################################################
# Function: AdjustCIs
# Argument: est (vector) and proc and par (list of parameters).
# Description: This function returns simultaneous confidence limits computed according to the multiple testing procedure specified in the proc argument
#' @export
AdjustCIs = function(est, proc, par = NA){

  # Check if the multiplicity adjustment procedure is specified, check if it exists
  if (!exists(paste0(proc, ".CI"))) {
    stop(paste0("AdjustCIs: Simultaneous confidence intervals for '", proc, "' does not exist."))
  } else if (!is.function(get(as.character(paste0(proc, ".CI")), mode = "any"))) {
    stop(paste0("AdjustCIs: Simultaneous confidence intervals for '", proc, "' does not exist."))
  }

  result = do.call(paste0(proc, ".CI"), list(est, list("Analysis", par)))
  return(result)
}
Mediana/R/BonferroniAdj.R0000644000176200001440000000277213434027610014666 0ustar liggesusers######################################################################################################################
# Function: BonferroniAdj.
# Argument: p, Vector of p-values (1 x m)
#           par, List of procedure parameters: vector of hypothesis weights (1 x m)
# Description: Bonferroni multiple testing procedure.

BonferroniAdj = function(p, par) {

  # Determine the function call, either to generate the p-value or to return description
  call = (par[[1]] == "Description")

  # Number of p-values
  m = length(p)

  # Extract the vector of hypothesis weights (1 x m)
  if (!any(is.na(par[[2]]))) {
    if (is.null(par[[2]]$weight)) stop("Analysis model: Bonferroni procedure: Hypothesis weights must be specified.")
    w = par[[2]]$weight
  } else {
    w = rep(1/m, m)
  }

  # Error checks
  if (length(w) != m) stop("Analysis model: Bonferroni procedure: Length of the weight vector must be equal to the number of hypotheses.")
  if (sum(w) != 1) stop("Analysis model: Bonferroni procedure: Hypothesis weights must add up to 1.")
  if (any(w < 0)) stop("Analysis model: Bonferroni procedure: Hypothesis weights must be non-negative.")

  if (any(call == FALSE) | any(is.na(call))) {
    # Adjusted p-values
    adjpvalue = pmin(1, p/w)
    result = adjpvalue
  } else if (call == TRUE) {
    weight = paste0("Weight={", paste(round(w, 2), collapse = ","), "}")
    result = list(list("Bonferroni procedure"), list(weight))
  }

  return(result)
}
# End of BonferroniAdj
Mediana/R/MedianSumm.R0000644000176200001440000000104213434027610014170 0ustar liggesusers############################################################################################################################
# Function: MedianSumm.
# Argument: Descriptive statistics across multiple simulation runs (vector or matrix), method parameters (not used in this function).
# Description: Compute median for the vector of statistics or in each column of the matrix.

MedianSumm = function(test.result, statistic.result, parameter) {
  result = apply(statistic.result, 2, stats::median)
  return(result)
}
# End of MedianSumm
Mediana/R/CreateDataScenarioSampleSize.R0000644000176200001440000000401113434027610017610 0ustar liggesusers#######################################################################################################################
# Function: CreateDataScenarioSampleSize.
# Argument: Data frame of patients and sample size.
# Description: Create data stack for the current sample size. This function is used in the CreateDataStack function when sample sizes are specified via a SampleSize object.
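# Internal helper called from CreateDataStack. A schematic call (arguments are
# illustrative): CreateDataScenarioSampleSize(design.outcome.variables, c(120, 120)),
# where c(120, 120) holds the per-sample sizes of the current sample size set.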
CreateDataScenarioSampleSize = function(current.design.outcome.variables, current.sample.size) {

  # List of current data scenario
  current.data.scenario = list()
  current.data.scenario.index = 0

  # Get the number of samples
  n.samples = length(current.design.outcome.variables)

  # Get the number of outcomes
  n.outcomes = length(current.design.outcome.variables[[1]])

  # For each sample, generate the data for each outcome for the current sample size
  for (sample.index in 1:n.samples){
    for (outcome.index in 1:n.outcomes){

      # Increment the index
      current.data.scenario.index = current.data.scenario.index + 1

      # Get the current sample size for the current sample
      current.sample.size.sample = as.numeric(current.sample.size[sample.index])

      # Get the data for the current sample size
      current.data = current.design.outcome.variables[[sample.index]][[outcome.index]]$data[(1:current.sample.size.sample),]

      # Get the sample id
      current.id = current.design.outcome.variables[[sample.index]][[outcome.index]]$id

      # Get the outcome type
      current.outcome.type = current.design.outcome.variables[[sample.index]][[outcome.index]]$outcome.type

      # Add the current sample in the list
      current.data.scenario[[current.data.scenario.index]] = list(id = current.id,
                                                                  outcome.type = current.outcome.type,
                                                                  data = current.data)
    }
  }

  # Return the object
  return(current.data.scenario)
}
# End of CreateDataScenarioSampleSize
Mediana/R/GenerateReport.R0000644000176200001440000000153013434027610015061 0ustar liggesusers######################################################################################################################
# Function: GenerateReport.
# Argument: Results returned by the CSE function and presentation model and Word-document title and Word-template.
# Description: This function is used to generate a Word-based report summarizing the results.
#' @export
GenerateReport = function(presentation.model = NULL, cse.results, report.filename, report.template = NULL){

  if (!requireNamespace("officer", quietly = TRUE)) {
    stop("officer R package is needed to generate the report. Please install it.", call. = FALSE)
  }
  if (!requireNamespace("flextable", quietly = TRUE)) {
    stop("flextable R package is needed to generate the report. Please install it.", call. = FALSE)
  }

  UseMethod("GenerateReport")
}
Mediana/R/MinStat.R0000644000176200001440000000157613434027610013524 0ustar liggesusers######################################################################################################################
# Compute the min based on non-missing values in the combined sample

MinStat = function(sample.list, parameter) {

  # Determine the function call, either to generate the statistic or to return description
  call = (parameter[[1]] == "Description")

  if (call == FALSE | is.na(call)) {

    # Error checks
    if (length(sample.list) != 1) stop("Analysis model: Only one sample must be specified in the MinStat statistic.")

    sample = sample.list[[1]]

    # Select the outcome column and remove the missing values due to dropouts/incomplete observations
    outcome = sample[, "outcome"]
    result = min(stats::na.omit(outcome))

  } else if (call == TRUE) {
    result = list("Minimum")
  }

  return(result)
}
# End of MinStat
Mediana/R/Sample.R0000644000176200001440000000204613434027610013357 0ustar liggesusers######################################################################################################################
# Function: Sample.
# Argument: Sample ID, Outcome Parameters, Sample Size.
# Description: This function is used to create an object of class Sample.
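# A minimal usage sketch (values are illustrative; parameters() is the package's
# list-building helper):
#   Sample(id = "Placebo", outcome.par = parameters(parameters(mean = 0, sd = 1)))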
#' @export
Sample = function(id, outcome.par, sample.size = NULL) {

  # Error checks
  if (!is.character(unlist(id))) stop("Sample: sample ID must be character.")
  if (!is.list(outcome.par)) stop("Sample: outcome parameters must be provided in a list.")

  if (!is.null(sample.size)){
    # Error checks
    if (any(!is.numeric(unlist(sample.size)))) stop("Sample: sample size must be numeric.")
    if (any(unlist(sample.size) %% 1 != 0)) stop("Sample: sample size must be integer.")
    if (any(unlist(sample.size) <= 0)) stop("Sample: sample size must be strictly positive.")
  }

  sample = list(id = id, outcome.par = outcome.par, sample.size = sample.size)
  class(sample) = "Sample"
  return(sample)
  invisible(sample)
}
Mediana/R/SdStat.R0000644000176200001440000000157113434027610013342 0ustar liggesusers######################################################################################################################
# Compute the sd based on non-missing values in the combined sample

SdStat = function(sample.list, parameter) {

  # Determine the function call, either to generate the statistic or to return description
  call = (parameter[[1]] == "Description")

  if (call == FALSE | is.na(call)) {

    # Error checks
    if (length(sample.list) != 1) stop("Analysis model: Only one sample must be specified in the SdStat statistic.")

    sample = sample.list[[1]]

    # Select the outcome column and remove the missing values due to dropouts/incomplete observations
    outcome = sample[, "outcome"]
    result = stats::sd(stats::na.omit(outcome))

  } else if (call == TRUE) {
    result = list("SD")
  }

  return(result)
}
# End of SdStat
Mediana/R/CreateAnalysisStructure.R0000644000176200001440000001631113434027610016766 0ustar liggesusers######################################################################################################################
# Function: CreateAnalysisStructure.
# Argument: Analysis model.
# Description: This function is based on the old analysis_model_extract function. It performs error checks in the analysis model
# and creates an "analysis structure", which is an internal representation of the original analysis model used by all other Mediana functions.
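# A schematic call (illustrative): given an analysis model built with, e.g.,
# AnalysisModel() + Test(...) + Statistic(...), the internal representation is
# obtained with CreateAnalysisStructure(analysis.model).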
CreateAnalysisStructure = function(analysis.model) { # Check the general set if (is.null(analysis.model$tests) & is.null(analysis.model$statistics)) stop("Analysis model: At least one test or statistic must be specified.") # General set of analysis model parameters # Extract interim analysis parameters if (!is.null(analysis.model$general$interim.analysis)) { interim.looks = analysis.model$general$interim.analysis$interim.looks if (!(interim.looks$parameter %in% c("sample.size", "event", "time"))) stop("Analysis model: Parameter in the interim analysis specifications must be sample.size, event or time.") interim.analysis = list(interim.looks = interim.looks) } else { interim.analysis = NULL } # Extract test-specific parameters if (!is.null(analysis.model$tests)) { # Number of tests in the analysis model n.tests = length(analysis.model$tests) # List of tests (id, statistical method, sample list, parameters) test = list() for (i in 1:n.tests) { # Test IDs if (is.null(analysis.model$tests[[i]]$id)) stop("Analysis model: IDs must be specified for all tests.") else id = analysis.model$tests[[i]]$id # List of samples if (is.null(analysis.model$tests[[i]]$samples)) stop("Analysis model: Samples must be specified for all tests.") else samples = analysis.model$tests[[i]]$samples # Statistical method method = analysis.model$test[[i]]$method if (!exists(method)) { stop(paste0("Analysis model: Statistical method function '", method, "' does not exist.")) } else if (!is.function(get(as.character(method), mode = "any"))) { stop(paste0("Analysis model: Statistical method function '", method, "' does not exist.")) } # Test parameters (optional) if (is.null(analysis.model$tests[[i]]$par)) par = NA else par = analysis.model$tests[[i]]$par test[[i]] = list(id = id, method = method, samples = samples, par = par) } # Check if id is uniquely defined if (any(table(unlist(lapply(test,function(list) list$id)))>1)) stop("Analysis model: Tests IDs must be uniquely defined.") } else { # No tests are specified test = NULL } # Extract statistic-specific parameters if (!is.null(analysis.model$statistics)) { # Number of statistics in the analysis model n.statistics = length(analysis.model$statistics) # List of statistics (id, statistical method, sample list, parameters) statistic = list() for (i in 1:n.statistics) { # Statistic IDs if (is.null(analysis.model$statistic[[i]]$id)) stop("Analysis model: IDs must be specified for all statistics.") else id = analysis.model$statistic[[i]]$id # List of samples if (is.null(analysis.model$statistic[[i]]$samples)) stop("Analysis model: Samples must be specified for all statistics.") else samples = analysis.model$statistic[[i]]$samples # Statistical method method = analysis.model$statistic[[i]]$method if (!exists(method)) { stop(paste0("Analysis model: Statistical method function '", method, "' does not exist.")) } else if (!is.function(get(as.character(method), mode = "any"))) { stop(paste0("Analysis model: Statistical method function '", method, "' does not exist.")) } if (is.null(analysis.model$statistic[[i]]$par)) par = NA else par = analysis.model$statistic[[i]]$par statistic[[i]] = list(id = id, method = method, samples = samples, par = par) } # Check if id is uniquely defined if (any(table(unlist(lapply(statistic,function(list) list$id)))>1)) stop("Analysis model: Statistic IDs must be uniquely defined.") } else { # No statistics are specified statistic = NULL } # Extract parameters of multiplicity adjustment methods # List of multiplicity adjustments (procedure, parameters, 
tests) mult.adjust = list(list()) # Number of multiplicity adjustment methods if (is.null(analysis.model$general$mult.adjust)) { # No multiplicity adjustment is specified mult.adjust = NULL } else { n.mult.adjust = length(analysis.model$general$mult.adjust) for (i in 1:n.mult.adjust) { mult.adjust.temp = list() # Number of multiplicity adjustments within each mult.adj scenario n.mult.adj.sc=length(analysis.model$general$mult.adjust[[i]]) for (j in 1:n.mult.adj.sc){ proc = analysis.model$general$mult.adjust[[i]][[j]]$proc if (is.na(proc) | is.null(analysis.model$general$mult.adjust[[i]][[j]]$par)) par = NA else par = analysis.model$general$mult.adjust[[i]][[j]]$par if (is.null(analysis.model$general$mult.adjust[[i]][[j]]$tests)) { tests = lapply(test, function(list) list$id) } else { tests = analysis.model$general$mult.adjust[[i]][[j]]$tests } # If the multiplicity adjustment procedure is specified, check if it exists if (!is.na(proc)) { if (!exists(proc)) { stop(paste0("Analysis model: Multiplicity adjustment procedure function '", proc, "' does not exist.")) } else if (!is.function(get(as.character(proc), mode = "any"))) { stop(paste0("Analysis model: Multiplicity adjustment procedure function '", proc, "' does not exist.")) } } # Check if tests defined in the multiplicity adjustment exist (defined in the test list) temp_list = lapply(lapply(tests,function(l1,l2) l1 %in% l2, lapply(test, function(list) list$id)), function(l) any(l == FALSE)) if (!is.na(proc) & any(temp_list == TRUE)) stop(paste0("Analysis model: Multiplicity adjustment procedure test has not been specified in the test-specific model.")) mult.adjust.temp[[j]] = list(proc = proc, par = par, tests = tests) } mult.adjust[[i]] = mult.adjust.temp # Check if tests defined in multiplicity adjustment is defined in one and only one multiplicity adjustment if (any(table(unlist(lapply(mult.adjust[[i]],function(list) list$tests)))>1)) stop(paste0("Analysis model: Multiplicity adjustment procedure test has been specified in more than one multiplicity adjustment.")) } } # Create the analysis structure analysis.structure = list(description = "analysis.structure", test = test, statistic = statistic, mult.adjust = mult.adjust, interim.analysis = interim.analysis) return(analysis.structure) } # End of CreateAnalysisStructure Mediana/R/CDFDunnett.R0000644000176200001440000000160013434027610014067 0ustar liggesusers###################################################################################################################### # Function: CDFDunnett # Argument: stat, Test statistic (1 x 1) # df, Number of degrees of freedom (1 x 1) # m, Number of comparisons (1 x 1) # Description: Cumulative distribution function of the Dunnett distribution in one-sided # multiplicity problems with a balanced one-way layout and # equally weighted null hypotheses CDFDunnett = function(stat, df, m) { # Correlation matrix corr = matrix(0.5, m, m) for (i in 1:m) corr[i, i] = 1 p = mvtnorm::pmvt( lower = rep(-Inf, m), upper = rep(stat, m), delta = rep(0, m), df = df, corr = corr, algorithm = mvtnorm::GenzBretz( maxpts = 25000, abseps = 0.00001, releps = 0 ) )[1] return(p) } Mediana/R/families.R0000644000176200001440000000073713434027611013735 0ustar liggesusers# Function: families # Argument: Multiple character strings. # Description: This function is used mostly for the user's convenience. It simply creates a list of character strings and # can be used in the specification of parameters for gatekeeping procedures. #' @export families = function(...) 
{ args = list(...) nargs = length(args) if (nargs <= 0) stop("Families function: At least one family must be specified.") return(args) invisible(args) }Mediana/R/NormalDist.R0000644000176200001440000000301413434027610014206 0ustar liggesusers# Function: NormalDist. # Argument: List of parameters (number of observations, list(mean, standard deviation)). # Description: This function is used to generate normal outcomes. NormalDist = function(parameter) { # Error checks if (missing(parameter)) stop("Data model: NormalDist distribution: List of parameters must be provided.") if (is.null(parameter[[2]]$mean)) stop("Data model: NormalDist distribution: Mean must be specified.") if (is.null(parameter[[2]]$sd)) stop("Data model: NormalDist distribution: SD must be specified.") mean = parameter[[2]]$mean sd = parameter[[2]]$sd if (sd <= 0) stop("Data model: NormalDist distribution: Standard deviations in the normal distribution must be positive.") # Determine the function call, either to generate distribution or to return description call = (parameter[[1]] == "description") # Generate random variables if (call == FALSE) { # Error checks n = parameter[[1]] if (n%%1 != 0) stop("Data model: NormalDist distribution: Number of observations must be an integer.") if (n <= 0) stop("Data model: NormalDist distribution: Number of observations must be positive.") result = stats::rnorm(n = n, mean = mean, sd = sd) } else { # Provide information about the distribution function if (call == TRUE) { # Labels of distributional parameters result = list(list(mean = "mean", sd = "SD"),list("Normal")) } } return(result) } #End of NormalDistMediana/R/GLMPoissonTest.R0000644000176200001440000000432513434027610014772 0ustar liggesusers###################################################################################################################### # Function: GLMPoissonTest . # Argument: Data set and parameter (call type). # Description: Computes one-sided p-value based on Poisson regression. 
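# A minimal usage sketch (toy data; the "outcome" column name follows the data
# model conventions used throughout the package):
#   s1 = data.frame(outcome = stats::rpois(50, 1.0))
#   s2 = data.frame(outcome = stats::rpois(50, 1.5))
#   GLMPoissonTest(list(s1, s2), list("PerformAnalysis", list(larger = TRUE)))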
#' @importFrom stats poisson GLMPoissonTest = function(sample.list, parameter) { # Determine the function call, either to generate the p-value or to return description call = (parameter[[1]] == "Description") if (call == FALSE | is.na(call)) { # No parameters are defined if (is.na(parameter[[2]])) { larger = TRUE } else { if (!all(names(parameter[[2]]) %in% c("larger"))) stop("Analysis model: GLMPoissonTest test: this function accepts only one argument (larger)") # Parameters are defined but not the larger argument if (!is.logical(parameter[[2]]$larger)) stop("Analysis model: GLMPoissonTest test: the larger argument must be logical (TRUE or FALSE).") larger = parameter[[2]]$larger } # Sample list is assumed to include two data frames that represent two analysis samples # Outcomes in Sample 1 outcome1 = sample.list[[1]][, "outcome"] # Remove the missing values due to dropouts/incomplete observations outcome1.complete = outcome1[stats::complete.cases(outcome1)] # Outcomes in Sample 2 outcome2 = sample.list[[2]][, "outcome"] # Remove the missing values due to dropouts/incomplete observations outcome2.complete = outcome2[stats::complete.cases(outcome2)] # Data frame data.complete = data.frame(rbind(cbind(2, outcome2.complete), cbind(1, outcome1.complete))) colnames(data.complete) = c("TRT", "RESPONSE") data.complete$TRT=as.factor(data.complete$TRT) # One-sided p-value (to be checked) # result = summary(stats::glm(RESPONSE ~ TRT, data = data.complete, family = poisson))$coefficients["TRT2", "Pr(>|z|)"]/2 z = summary(stats::glm(RESPONSE ~ TRT, data = data.complete, family = poisson))$coefficients["TRT2", "z value"] result = stats::pnorm(z, lower.tail = !larger) } else if (call == TRUE) { result=list("Poisson regression test") } return(result) } # End of GLMPoissonTest Mediana/R/CreateDataSlice.R0000644000176200001440000001523013434027610015112 0ustar liggesusers####################################################################################################################### # Function: CreateDataSlice. # Argument: Data scenario (results of a single simulation run for a single combination of data scenario factores), # list of analysis samples for defining a data slice, slice criterion and slice value. # Description: Creates a subset of the original data set (known as a slice) based on the specified parameter. This function is useful for # implementing interim looks and requires that the enrollment parameters should be specified in the data model. # If parameter = "sample.size", the time point for defining the slice is determined by computing the time when X patients # in the combined samples from the sample list complete the trial (patients who dropped out of the trial or who are censored # are not counted as completers). # If parameter = "event", the time point for defining the slice is determined by computing the time when X events occurs # in the combined samples specified in the sample list. # If parameter = "time", the data slice includes the patients who complete the trial by the specified time cutoff # (patients who dropped out of the trial or who are censored are not counted as completers). # After the time cutoff is determined, the data slice is then created by keeping only the # patients who complete the trial prior to the time cutoff. The patients not included in the data slice are # assumed to have dropped out of the trial (if the outcome type if "standard") or have been censored (if the outcome type if "event"). # X is specified by the value argument. 
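# A schematic call (sample IDs are illustrative and must match the data model),
# taking an interim look once 150 events have accrued in the combined samples:
#   CreateDataSlice(data.scenario, list("Placebo", "Treatment"), "event", 150)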
CreateDataSlice = function(data.scenario, sample.list, parameter, value) { # Number of analysis samples within the data scenario n.analysis.samples = length(data.scenario$sample) # Create the data slice data.slice = data.scenario # Select the samples from the sample list within the current data scenario selected.samples = list() index = 1 for (i in 1:n.analysis.samples) { if (data.scenario$sample[[i]]$id %in% sample.list) { selected.samples[[index]] = data.scenario$sample[[i]]$data index = index + 1 } } # Merge the selected analysis samples selected.analysis.sample = do.call(rbind, selected.samples) # Determine the time cutoff depending on the parameter specified if (parameter == "sample.size") { # Remove patients who dropped out of the trial or who are censored (they are not counted as completers) non.missing.outcome = !(is.na(selected.analysis.sample[, "outcome"])) completers = selected.analysis.sample[non.missing.outcome, ] # Check if there are any completers if (dim(completers)[1] > 0) { # Sort by the patient end time completers = completers[order(completers[, "patient.end.time"]), ] # Total number of completers total.sample.size = dim(completers)[1] # Find the time cutoff corresponding to the specified sample size in the selected sample # Truncate the VALUE argument if greater than the total number of completers time.cutoff = completers[min(value, total.sample.size), "patient.end.time"] } else { # No completers time.cutoff = 0 } } if (parameter == "event") { # Remove patients who dropped out of the trial or who are censored (they are not counted as events) non.missing.outcome = !(is.na(selected.analysis.sample[, "outcome"])) & (selected.analysis.sample[, "patient.censor.indicator"] == 0) events = selected.analysis.sample[non.missing.outcome, ] # Check if there are any events if (dim(events)[1] > 0) { # Sort by the patient end time (this is when the events occurred) events = events[order(events[, "patient.end.time"]), ] # Total number of events total.event.count = dim(events)[1] # Find the time cutoff corresponding to the specified event count in the selected sample # Truncate the VALUE argument if greater than the total event count time.cutoff = events[min(value, total.event.count), "patient.end.time"] } else { # No events time.cutoff = 0 } } if (parameter == "time") { # Time cutoff is directly specified time.cutoff = value } # Create the data slice by applying the time cutoff to all analysis samples # Loop over the analysis samples for (i in 1:n.analysis.samples) { sliced.analysis.sample = data.scenario$sample[[i]]$data if (data.scenario$sample[[i]]$outcome.type == "event") { # Apply slicing rules for event-type outcomes # If the patient end time is greater than the time cutoff, the patient censor indicator is set to TRUE sliced.analysis.sample[, "patient.censor.indicator"] = (sliced.analysis.sample[, "patient.end.time"] > time.cutoff) # If the patient end time or dropout time is greater than the time cutoff, the patient end time is set to the time cutoff sliced.analysis.sample[, "patient.end.time"] = pmin(time.cutoff, sliced.analysis.sample[, "patient.end.time"]) sliced.analysis.sample[, "patient.dropout.time"] = pmin(time.cutoff, sliced.analysis.sample[, "patient.dropout.time"]) # Outcome is truncated at the time cutoff for censored observations sliced.analysis.sample[, "outcome"] = (time.cutoff - sliced.analysis.sample[, "patient.start.time"]) * sliced.analysis.sample[, "patient.censor.indicator"] + sliced.analysis.sample[, "outcome"] * (1 - sliced.analysis.sample[, "patient.censor.indicator"]) # If the patient outcome is negative, the outcome is set to NA (patient is enrolled after the time cutoff) sliced.analysis.sample[sliced.analysis.sample[, "outcome"] < 0, "outcome"] = NA } else { # Apply slicing rules for standard outcomes (binary and continuous outcome variables) # If the patient end time is greater than the time cutoff, the patient is considered a dropout and the outcome is set to NA sliced.analysis.sample[sliced.analysis.sample[, "patient.end.time"] > time.cutoff, "outcome"] = NA # If the patient end time or dropout time is greater than the time cutoff, the patient end time is set to the time cutoff sliced.analysis.sample[, "patient.end.time"] = pmin(time.cutoff, sliced.analysis.sample[, "patient.end.time"]) sliced.analysis.sample[, "patient.dropout.time"] = pmin(time.cutoff, sliced.analysis.sample[, "patient.dropout.time"]) } # Put the sliced data sample in the data slice data.slice$sample[[i]]$data = sliced.analysis.sample } # Loop over the analysis samples return(data.slice) } # End of CreateDataSliceMediana/R/MulinomialDist.R0000644000176200001440000000475613434027610015076 0ustar liggesusers# Function: MultinomialDist # Argument: List of parameters (number of observations, list(probabilities)). # Description: This function is used to generate multinomial (possibly ordered) outcomes. MultinomialDist = function(parameter) { # Determine the function call, either to generate distribution or to return the distribution's description call = (parameter[[1]] == "description") # Generate random variables if (call == FALSE) { # The number of observations to generate n = parameter[[1]] ############################################################## # Distribution-specific component # Get the distribution's parameters (stored in the parameter[[2]] list) prob = parameter[[2]]$prob ############################################################## # Error checks (other checks could be added by the user if needed) if (n%%1 != 0) stop("Data model: MultinomialDist distribution: Number of observations must be an integer.") if (n <= 0) stop("Data model: MultinomialDist distribution: Number of observations must be positive.") # Error checks on prob if (any(prob < 0 | prob > 1)) stop("Data model: MultinomialDist distribution: Probabilities must be between 0 and 1.") # Compare with a small tolerance to avoid spurious failures caused by floating-point rounding if (abs(sum(prob) - 1) > 1e-8) stop("Data model: MultinomialDist distribution: The sum of probabilities must be equal to 1.") ############################################################## # Distribution-specific component # Observations are generated by sampling category indices 1..length(prob) with the specified probabilities and assigned to the "result" object #result = fundist(n = n, parameter1 = parameter1, parameter2 = parameter2) #result = which((rmultinom(n, size = 1, prob = prob)==1), arr.ind = TRUE)[,'row'] result = sample.int(length(prob), n, replace = TRUE, prob = prob) ############################################################## } else { # Provide information about the distribution function if (call == TRUE) { ############################################################## # Distribution-specific component # The labels of the distributional parameters and the distribution's label must be stored in the "result" list # result = list(list(parameter1 = "parameter1", parameter2 = "parameter2"), # list("Template")) result = list(list(prob = "prob"), list("Multinomial")) ############################################################## } } return(result) } Mediana/R/seq_vector.R0000644000176200001440000000040513434027611014306 0ustar 
liggesusers############################################################################################################################ # Function: seq_vector. # Argument: Number. # Description: Generate simple number sequence. seq_vector = function(n) { 1:n }Mediana/R/z+.PresentationModel.R0000644000176200001440000000241713434027611016120 0ustar liggesusers###################################################################################################################### # Function: +.PresentationModel. # Argument: Two objects (PresentationModel and another object). # Description: This function is used to add objects to the PresentationModel object #' @export "+.PresentationModel" = function(presentationmodel, object) { if (is.null(object)) return(presentationmodel) else if (class(object) == "Project"){ presentationmodel$project$username = object$username presentationmodel$project$title = object$title presentationmodel$project$description = object$description } else if (class(object) == "Section"){ presentationmodel$section.by = unclass(object) } else if (class(object) == "Subsection"){ presentationmodel$subsection.by = unclass(object) } else if (class(object) == "Table"){ presentationmodel$table.by = unclass(object) } else if (class(object) == "CustomLabel"){ ncustomlabel = length(presentationmodel$custom.label) presentationmodel$custom.label[[ncustomlabel+1]] = unclass(object) } else stop(paste0("Presentation Model: Impossible to add the object of class ",class(object)," to the Presentation Model")) return(presentationmodel) }Mediana/R/AnalysisModel.MultAdjStrategy.R0000644000176200001440000000120513434027610017760 0ustar liggesusers###################################################################################################################### # Function: AnalysisModel.MultAdjStrategy # Argument: MultAdjStrategy object. # Description: This function is called by default if the class of the argument is a MultAdjStrategy object. #' @export AnalysisModel.MultAdjStrategy = function(multadjstrategy, ...) { analysismodel = AnalysisModel() analysismodel = analysismodel + multadjstrategy args = list(...) 
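# Add any remaining arguments (additional analysis model components) to the model one by one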
if (length(args)>0) { for (i in 1:length(args)){ analysismodel = analysismodel + args[[i]] } } return(analysismodel) }Mediana/R/EffectSizeEventStat.R0000644000176200001440000000522613434027610016026 0ustar liggesusers###################################################################################################################### # Compute the log hazard ratio based on non-missing values in the combined sample EffectSizeEventStat = function(sample.list, parameter) { # Determine the function call, either to generate the statistic or to return description call = (parameter[[1]] == "Description") if (call == FALSE | is.na(call)) { # Error checks if (length(sample.list)!=2) stop("Analysis model: Two samples must be specified in the EffectSizeEventStat statistic.") if (is.na(parameter[[2]])) method = "Log-Rank" else { if (!(parameter[[2]]$method %in% c("Log-Rank", "Cox"))) stop("Analysis model: EffectSizeEventStat statistic: the method must be Log-Rank or Cox.") method = parameter[[2]]$method } # Outcomes in Sample 1 outcome1 = sample.list[[1]][, "outcome"] # Remove the missing values due to dropouts/incomplete observations outcome1.complete = outcome1[stats::complete.cases(outcome1)] # Observed events in Sample 1 (negation of censoring indicators) event1 = !sample.list[[1]][, "patient.censor.indicator"] event1.complete = event1[stats::complete.cases(outcome1)] # Sample size in Sample 1 n1 = length(outcome1.complete) # Outcomes in Sample 2 outcome2 = sample.list[[2]][, "outcome"] # Remove the missing values due to dropouts/incomplete observations outcome2.complete = outcome2[stats::complete.cases(outcome2)] # Observed events in Sample 2 (negation of censoring indicators) event2 = !sample.list[[2]][, "patient.censor.indicator"] event2.complete = event2[stats::complete.cases(outcome2)] # Sample size in Sample 2 n2 = length(outcome2.complete) # Create combined samples of outcomes, censoring indicators (all events are observed) and treatment indicators outcome = c(outcome1.complete, outcome2.complete) event = c(event1.complete, event2.complete) treatment = c(rep(0, n1), rep(1, n2)) # Compute the log hazard ratio if (method == "Log-Rank"){ surv.test = survival::survdiff(survival::Surv(outcome, event) ~ treatment) result = -log((surv.test$obs[2]/surv.test$exp[2])/(surv.test$obs[1]/surv.test$exp[1])) } else if (method == "Cox"){ result = -log(summary(survival::coxph(survival::Surv(outcome, event) ~ treatment))$coef[,"exp(coef)"]) } } else if (call == TRUE) { if (is.na(parameter[[2]])) result = list("Effect size (event)") else { result = list("Effect size (event)", paste0("method = ", parameter[[2]]$method)) } } return(result) } # End of EffectSizeEventStat Mediana/R/EffectSizeContStat.R0000644000176200001440000000240213434027610015641 0ustar liggesusers###################################################################################################################### # Compute the effect size for continuous outcomes based on non-missing values in the combined sample EffectSizeContStat = function(sample.list, parameter) { # Determine the function call, either to generate the statistic or to return description call = (parameter[[1]] == "Description") if (call == FALSE | is.na(call)) { # Error checks if (length(sample.list)!=2) stop("Analysis model: Two samples must be specified in the EffectSizeContStat statistic.") # First sample in the sample list sample1 = sample.list[[1]] # Second sample in the sample list sample2 = sample.list[[2]] # Select the outcome column and remove the missing values due to dropouts/incomplete observations outcome1 = 
sample1[, "outcome"] outcome2 = sample2[, "outcome"] mean1 = mean(stats::na.omit(outcome1)) mean2 = mean(stats::na.omit(outcome2)) sdcom = stats::sd(c(stats::na.omit(outcome1),stats::na.omit(outcome2))) result = (mean2 - mean1) / sdcom } else if (call == TRUE) { result = list("Effect size (continuous)") } return(result) } # End of EffectSizeContStatMediana/R/mergeOutcomeParameter.R0000644000176200001440000000121613434027611016431 0ustar liggesusers###################################################################################################################### # Function: mergeOutcomeParameter . # Argument: two lists. # Description: This function is used to merge two lists mergeOutcomeParameter <- function (first, second) { stopifnot(is.list(first), is.list(second)) firstnames <- names(first) for (v in names(second)) { first[[v]] <- if (v %in% firstnames && is.list(first[[v]]) && is.list(second[[v]])) appendList(first[[v]], second[[v]]) else paste0(first[[v]],' = ', lapply(second[[v]], function(x) round(x,3))) } paste0(first, collapse = ", ") }Mediana/R/CSE.default.R0000644000176200001440000003373513434027610014204 0ustar liggesusers############################################################################################################################ # Function: CSE.default # Argument: Data model (or data stack), analysis model (or analysis stack) and evaluation model. # Description: This function applies the metrics specified in the evaluation model to the test results (p-values) and # summaries to the statistic results. #' @export CSE.default = function(data, analysis, evaluation, simulation) { # Error check if (!(class(data) %in% c("DataModel", "DataStack"))) stop("CSE: a DataModel object must be specified in the data argument") if (!(class(analysis) %in% c("AnalysisModel", "AnalysisStack"))) stop("CSE: an AnalysisModel object must be specified in the analysis argument") if (!(class(evaluation) %in% c("EvaluationModel"))) stop("CSE: an EvaluationModel object must be specified in the evaluation argument") if (!(class(simulation) %in% c("SimParameters"))) stop("CSE: a SimParameters object must be specified in the simulation argument") # Start time start.time = Sys.time() # Perform error checks for the evaluation model and create an internal evaluation structure # (in the first position in order to be sure that if any error is made, the simulation won't run) evaluation.structure = CreateEvaluationStructure(evaluation) # Case 1: Data model and Analysis model if (class(data) == "DataModel" & class(analysis) == "AnalysisModel"){ data.model = data analysis.model = analysis # Data structure data.structure = CreateDataStructure(data.model) # Create the analysis stack from the specified data and analysis models analysis.stack = PerformAnalysis(data.model, analysis.model, sim.parameters = simulation) # Analysis structure analysis.structure = analysis.stack$analysis.structure # Simulation parameters sim.parameters = analysis.stack$sim.parameters } # Case 2: Data stack and Analysis model if (class(data) == "DataStack" & class(analysis) == "AnalysisModel"){ data.stack = data analysis.model = analysis # Data structure data.structure = data.stack$data.structure # Create the analysis stack from the specified data and analysis models analysis.stack = PerformAnalysis(data.stack, analysis.model, sim.parameters = simulation) # Analysis structure analysis.structure = analysis.stack$analysis.structure # Simulation parameters sim.parameters = analysis.stack$sim.parameters } # Case 3: Data stack and 
Analysis stack if (class(data) == "DataStack" & class(analysis) == "AnalysisStack"){ data.stack = data analysis.stack = analysis # Data structure data.structure = data.stack$data.structure # Analysis structure analysis.structure = analysis.stack$analysis.structure # Simulation parameters if (!is.null(simulation)) warning("The simulation parameters (simulation) defined in the CSE function will be ignored as an analysis stack has been defined.") sim.parameters = analysis.stack$sim.parameters } # Get the number of simulation runs n.sims = sim.parameters$n.sims # Extract the analysis scenario grid and compute the number of analysis scenarios in this analysis stack analysis.scenario.grid = analysis.stack$analysis.scenario.grid n.analysis.scenarios = dim(analysis.scenario.grid)[1] # Number of outcome parameter sets, sample size sets and design parameter sets n.outcome.parameter.sets = length(levels(as.factor(analysis.scenario.grid$outcome.parameter))) n.design.parameter.sets = length(levels(as.factor(analysis.scenario.grid$design.parameter))) n.sample.size.sets = length(levels(as.factor(analysis.scenario.grid$sample.size))) # Number of data scenario n.data.scenarios = n.outcome.parameter.sets*n.sample.size.sets*n.design.parameter.sets # Number of multiplicity adjustment procedure n.mult.adj = length(levels(as.factor(analysis.scenario.grid$multiplicity.adjustment))) # Number of criteria specified in the evaluation model n.criteria = length(evaluation.structure$criterion) # Criterion IDs criterion.id = rep(" ", n.criteria) # Check if the tests and statistics referenced in the evaluation model are actually defined in the analysis model # Number of tests specified in the analysis model n.tests = length(analysis.structure$test) # Number of statistics specified in the analysis model n.statistics = length(analysis.structure$statistic) if(is.null(analysis.structure$test)) { # Test IDs test.id = " " } else { # Test IDs test.id = rep(" ", n.tests) for (test.index in 1:n.tests) { test.id[test.index] = analysis.structure$test[[test.index]]$id } } if(is.null(analysis.structure$statistic)) { # Statistic IDs statistic.id = " " } else { # Statistic IDs statistic.id = rep(" ", n.statistics) for (statistic.index in 1:n.statistics) { statistic.id[statistic.index] = analysis.structure$statistic[[statistic.index]]$id } } for (criterion.index in 1:n.criteria) { current.criterion = evaluation.structure$criterion[[criterion.index]] criterion.id[criterion.index] = current.criterion$id # Number of tests specified within the current criterion n.criterion.tests = length(current.criterion$tests) # Number of statistics specified within the current criterion n.criterion.statistics = length(current.criterion$statistics) if (n.criterion.tests > 0) { for (i in 1:n.criterion.tests) { if (!(current.criterion$tests[[i]] %in% test.id)) stop(paste0("Evaluation model: Test '", current.criterion$tests[[i]], "' is not defined in the analysis model.")) } } if (n.criterion.statistics > 0) { for (i in 1:n.criterion.statistics) { if (!(current.criterion$statistics[[i]] %in% statistic.id)) stop(paste0("Evaluation model: Statistic '", current.criterion$statistics[[i]], "' is not defined in the analysis model.")) } } } # Number of analysis points (total number of interim and final analyses) if (!is.null(analysis.structure$interim.analysis)) { n.analysis.points = length(analysis.structure$interim.analysis$interim.looks$fraction) } else { # No interim analyses n.analysis.points = 1 } # Create the evaluation stack (list of evaluation results for 
each analysis scenario in the analysis stack) #evaluation.set = list() # List of analysis scenarios analysis.scenario = list() analysis.scenario.index = 0 # Loop over the data scenarios for (data.scenario.index in 1:n.data.scenarios) { # Loop over the multiplicity adjustment for (mult.adj.index in 1:n.mult.adj) { analysis.scenario.index = analysis.scenario.index +1 # List of criteria criterion = list() # Loop over the criteria for (criterion.index in 1:n.criteria) { # Current metric current.criterion = evaluation.structure$criterion[[criterion.index]] # Number of tests specified in the current criterion n.criterion.tests = length(current.criterion$tests) # Number of statistics specified in the current criterion n.criterion.statistics = length(current.criterion$statistics) # Create a matrix of test results (p-values) across the simulation runs for the current criterion and analysis scenario if (n.criterion.tests > 0) { test.result.matrix = matrix(0, n.sims, n.criterion.tests) # Create a template for selecting the test results (p-values) test.result.flag = lapply(analysis.structure$test, function(x) any(current.criterion$tests == x$id)) test.result.flag.num = match(current.criterion$tests,test.id) } else { test.result.matrix = NA test.result.flag = NA test.result.flag.num = NA } # Create a matrix of statistic results across the simulation runs for the current criterion and analysis scenario if (n.criterion.statistics > 0) { statistic.result.matrix = matrix(0, n.sims, n.criterion.statistics) # Create a template for selecting the statistic results statistic.result.flag = lapply(analysis.structure$statistic, function(x) any(current.criterion$statistics == x$id)) statistic.result.flag.num = match(current.criterion$statistics ,statistic.id) } else { statistic.result.matrix = NA statistic.result.flag = NA statistic.result.flag.num = NA } # Loop over the analysis points for(analysis.point.index in 1:n.analysis.points){ # Loop over the simulation runs for (sim.index in 1:n.sims) { # Current analysis scenario current.analysis.scenario = analysis.stack$analysis.set[[sim.index]][[data.scenario.index]]$result$tests.adjust$analysis.scenario[[mult.adj.index]] # Extract the tests specified in the current criterion if (n.criterion.tests > 0) { #test.result.matrix[sim.index, ] = current.analysis.scenario[test.result.flag, analysis.point.index] test.result.matrix[sim.index, ] = current.analysis.scenario[test.result.flag.num, analysis.point.index] } # Extract the statistics specified in the current criterion if (n.criterion.statistics > 0) { #statistic.result.matrix[sim.index, ] = analysis.stack$analysis.set[[sim.index]][[data.scenario.index]]$result$statistic[statistic.result.flag, analysis.point.index] statistic.result.matrix[sim.index, ] = analysis.stack$analysis.set[[sim.index]][[data.scenario.index]]$result$statistic[statistic.result.flag.num, analysis.point.index] } } # Loop over the simulation runs # Apply the method specified in the current metric with metric parameters single.result = as.matrix(do.call(current.criterion$method, list(test.result.matrix, statistic.result.matrix, current.criterion$par))) if (n.analysis.points == 1) { # Only one analysis point (final analysis) is specified evaluation.results = single.result } else { # Two or more analysis points (interim and final analyses) are specified if (analysis.point.index == 1) { evaluation.results = single.result } else { evaluation.results = cbind(evaluation.results, single.result) } } } # Loop over the analysis points evaluation.results = 
as.data.frame(evaluation.results) # Assign labels rownames(evaluation.results) = unlist(current.criterion$labels) if (n.analysis.points == 1) { colnames(evaluation.results) = "Analysis" } else { names = rep("", n.analysis.points) for (j in 1:n.analysis.points) names[j] = paste0("Analysis ", j) colnames(evaluation.results) = names } criterion[[criterion.index]] = list(id = criterion.id[[criterion.index]], result = evaluation.results) } # Loop over the criteria analysis.scenario[[analysis.scenario.index]] = list(criterion = criterion) } # Loop over the multiplicity adjustment } # Loop over the data scenarios #evaluation.set = list(analysis.scenario = analysis.scenario) # Create a single data frame with simulation results simulation.results = data.frame(sample.size = numeric(), outcome.parameter = numeric(), design.parameter = numeric(), multiplicity.adjustment = numeric(), criterion = character(), test.statistic = character(), result = numeric(), stringsAsFactors = FALSE) row.index = 1 n.analysis.scenarios = length(analysis.scenario) for (scenario.index in 1:(n.data.scenarios*n.mult.adj)) { current.analysis.scenario = analysis.scenario[[scenario.index]] n.criteria = length(current.analysis.scenario$criterion) current.analysis.scenario.grid = analysis.scenario.grid[scenario.index, ] for (j in 1:n.criteria) { n.rows = dim(current.analysis.scenario$criterion[[j]]$result)[1] for (k in 1:n.rows) { simulation.results[row.index, 1] = current.analysis.scenario.grid[1, 3] simulation.results[row.index, 2] = current.analysis.scenario.grid[1, 2] simulation.results[row.index, 3] = current.analysis.scenario.grid[1, 1] simulation.results[row.index, 4] = current.analysis.scenario.grid[1, 4] simulation.results[row.index, 5] = current.analysis.scenario$criterion[[j]]$id simulation.results[row.index, 6] = rownames(current.analysis.scenario$criterion[[j]]$result)[k] simulation.results[row.index, 7] = current.analysis.scenario$criterion[[j]]$result[k, 1] row.index = row.index + 1 } } } end.time = Sys.time() timestamp = list(start.time = start.time, end.time = end.time, duration = difftime(end.time,start.time, units = "auto")) # Create the evaluation stack evaluation.stack = list(#description = "CSE", simulation.results = simulation.results, #evaluation.set = evaluation.set, analysis.scenario.grid = analysis.scenario.grid, data.structure = data.structure, analysis.structure = analysis.structure, evaluation.structure = evaluation.structure, sim.parameters = sim.parameters, #env.information = env.information, timestamp = timestamp ) class(evaluation.stack) = "CSE" return(evaluation.stack) } # End of CSE Mediana/R/AnalysisModel.Statistic.R0000644000176200001440000000114113434027610016643 0ustar liggesusers###################################################################################################################### # Function: AnalysisModel.Statistic # Argument: Statistic object. # Description: This function is called by default if the class of the argument is a Statistic object. #' @export AnalysisModel.Statistic = function(statistic, ...) { analysismodel = AnalysisModel() analysismodel = analysismodel + statistic args = list(...) if (length(args)>0) { for (i in 1:length(args)){ analysismodel = analysismodel + args[[i]] } } return(analysismodel) }Mediana/R/statistics.R0000644000176200001440000000073513434027611014334 0ustar liggesusers# Function: statistics # Argument: Multiple character strings. # Description: This function is used mostly for the user's convenience. 
It simply creates a list of character strings and # can be used in cases where multiple statistics need to be specified. #' @export statistics = function(...) { args = list(...) nargs = length(args) if (nargs <= 0) stop("Statistics function: At least one statistic must be specified.") return(args) }Mediana/R/CreateTableStructure.R0000644000176200001440000002701613434027610016236 0ustar liggesusers###################################################################################################################### # Function: CreateTableStructure. # Argument: Results returned by the CSE function and presentation model. # Description: This function is used to create the tables for each section/subsection CreateTableStructure = function(results = NULL, presentation.model = NULL, custom.label.sample.size = NULL, custom.label.design.parameter = NULL, custom.label.outcome.parameter = NULL, custom.label.multiplicity.adjustment = NULL){ # TO DO: Add checks on parameters # Get analysis scenario grid and add a scenario number to the dataframe analysis.scenario.grid = results$analysis.scenario.grid analysis.scenario.grid$scenario = as.numeric(rownames(results$analysis.scenario.grid)) analysis.scenario.grid$all = 1 # Apply the labels to the scenario grid analysis.scenario.grid.label = analysis.scenario.grid analysis.scenario.grid.label$design.parameter.label = as.factor(analysis.scenario.grid.label$design.parameter) analysis.scenario.grid.label$outcome.parameter.label = as.factor(analysis.scenario.grid.label$outcome.parameter) analysis.scenario.grid.label$sample.size.label = as.factor(analysis.scenario.grid.label$sample.size) analysis.scenario.grid.label$multiplicity.adjustment.label = as.factor(analysis.scenario.grid.label$multiplicity.adjustment) levels(analysis.scenario.grid.label$design.parameter.label) = custom.label.design.parameter$label levels(analysis.scenario.grid.label$outcome.parameter.label) = custom.label.outcome.parameter$label levels(analysis.scenario.grid.label$sample.size.label) = custom.label.sample.size$label levels(analysis.scenario.grid.label$multiplicity.adjustment.label) = custom.label.multiplicity.adjustment$label analysis.scenario.grid.label$all.label = as.factor(analysis.scenario.grid.label$all) # Create the summary table with all results #summary.table = CreateSummaryTable(results$evaluation.set$analysis.scenario) summary.table = merge(results$simulation.results,analysis.scenario.grid.label, by = c("sample.size", "outcome.parameter", "design.parameter", "multiplicity.adjustment")) summary.table$result = format(round(summary.table$result, digits = 4), digits = 4, nsmall = 4) # Check if Sample Size or event has been used to set the column names sample.size = (!any(is.na(results$data.structure$sample.size.set))) event = (!any(is.na(results$data.structure$event.set))) # Get the "by" section.by = presentation.model$section.by$by if (is.null(section.by)) { section.by = "all" custom.label.all = list(label = "Results", custom = FALSE) } subsection.by = presentation.model$subsection.by$by if (any(section.by %in% subsection.by)) stop("PresentationModel: the parameters must be defined either in the Section or in the Subsection object, but not in both") table.by = presentation.model$table.by$by if (any(section.by %in% table.by)) stop("PresentationModel: the parameters must be defined either in the Section or in the Table object, but not in both") if (any(subsection.by %in% table.by)) stop("PresentationModel: the parameters must be defined either in the Subsection or in the 
Table object, but not in both") # If the user used event, any "event" entry in the "by" parameters needs to be changed to "sample.size" if (event) { if (!is.null(section.by)) section.by = gsub("event","sample.size",section.by) if (!is.null(subsection.by)) subsection.by = gsub("event","sample.size",subsection.by) if (!is.null(table.by)) table.by = gsub("event","sample.size",table.by) } # Create a list with scenario numbers for all sections # This list will get the number of scenarios defined by the user for each parameter section.par = list() if (!is.null(section.by)){ for (i in 1:length(section.by)){ section.par[[i]] = levels(analysis.scenario.grid.label[,paste0(section.by[i],".label")]) } } else section.par = NULL # Create the combination of scenarios for each section section.create = rev(expand.grid(rev(section.par))) colnames(section.create) = section.by section.create$section = 1:nrow(section.create) # Create the title for the section section.by.label = sapply(gsub("."," ",section.by,fixed = TRUE),capwords) if(get(paste0("custom.label.",section.by)[1])$custom) { title = paste0(section.by.label[1]," (",section.create[,1],")") } else { title = paste0(section.by.label[1]," ",1:max(analysis.scenario.grid[,section.by[[1]][1]])) } if (length(section.by)>1) { for (i in 2:length(section.by)){ if(get(paste0("custom.label.",section.by)[i])$custom) { title = paste0(title," and ",section.by.label[i]," (",section.create[,i],")") } else { title = paste0(title," and ",section.by.label[i]," ",1:max(analysis.scenario.grid[,section.by[[i]][1]])) } } } section.create$title.section = title if (any(section.by == "all")) section.create$title.section = "Results" # Create a list with scenario numbers for all subsections subsection.create = NULL if (!is.null(subsection.by)){ # This list will get the number of scenarios defined by the user for each parameter subsection.par = list() for (i in 1:length(subsection.by)){ subsection.par[[i]] = levels(analysis.scenario.grid.label[,paste0(subsection.by[i],".label")]) } # Create the combination of scenarios for each subsection subsection.create = rev(expand.grid(rev(subsection.par))) colnames(subsection.create) = subsection.by subsection.create$subsection = 1:nrow(subsection.create) # Create the title for the subsection subsection.by.label = sapply(gsub("."," ",subsection.by,fixed = TRUE),capwords) if(get(paste0("custom.label.",subsection.by)[1])$custom) { title = paste0(subsection.by.label[1]," (",subsection.create[,1],")") } else { title = paste0(subsection.by.label[1]," ",1:max(analysis.scenario.grid[,subsection.by[[1]][1]])) } if (length(subsection.by)>1) { for (i in 2:length(subsection.by)){ if(get(paste0("custom.label.",subsection.by)[i])$custom) { title = paste0(title," and ",subsection.by.label[i]," (",subsection.create[,i],")") } else { title = paste0(title," and ",subsection.by.label[i]," ",1:max(analysis.scenario.grid[,subsection.by[[i]][1]])) } } } subsection.create$title.subsection = title } # Create the list used to order the tables # If the user did not define any parameter to sort the table, the parameters not defined in the section or subsection will be used to sort the table by default table.by=c(table.by, colnames(analysis.scenario.grid.label[which(!(colnames(analysis.scenario.grid) %in% c(section.by, subsection.by, table.by, "scenario")))])) # If no design or no multiplicity adjustment have been defined, we can delete them from the table.by if (is.null(results$analysis.structure$mult.adjust)) table.by=table.by[which(table.by!="multiplicity.adjustment")] if 
(is.null(results$data.structure$design.parameter.set)) table.by=table.by[which(table.by!="design.parameter")] if (any(section.by != "all")) table.by=table.by[which(table.by!="all")] table.create = NULL if (length(table.by)>0){ # This list will get the number of scenarios defined by the user for each parameters table.par = list() for (i in 1:length(table.by)){ table.par[[i]] = as.numeric(levels(as.factor(analysis.scenario.grid.label[,table.by[i]]))) } # Create the combination of scenario for each table table.create = rev(expand.grid(rev(table.par))) colnames(table.create) = table.by } # Create report structure if(!is.null(subsection.create)){ report.structure = rev(merge(subsection.create,section.create,by=NULL,suffixes = c(".subsection",".section"))) } else report.structure = section.create # Get the scenario number for each section/subsection report.structure.scenario = list() for (i in 1:nrow(report.structure)){ report.structure.temp = as.data.frame(report.structure[i,]) colnames(report.structure.temp) = paste0(colnames(report.structure),".label") report.structure.scenario[[i]] = as.vector(merge(analysis.scenario.grid.label,report.structure.temp)$scenario) } # Create a list containing the table to report under each section/subsection report.structure.scenario.summary.table = list() for (i in 1:nrow(report.structure)){ report.structure.scenario.summary.table[[i]] = summary.table[which(summary.table$scenario %in% report.structure.scenario[[i]]),c(table.by,"criterion","test.statistic","result")] colnames(report.structure.scenario.summary.table[[i]])=c(sapply(gsub("."," ",table.by,fixed = TRUE),capwords),"Criterion","Test/Statistic","Result") rownames(report.structure.scenario.summary.table[[i]])=NULL } # Order the table as requested by the user if (length(table.by)>0){ table.by.label = sapply(gsub("."," ",table.by,fixed = TRUE),capwords) data.order = as.data.frame(report.structure.scenario.summary.table[[1]][,table.by.label]) colnames(data.order) = table.by order.table = order(as.numeric(apply(data.order, 1, paste, collapse = ""))) report.structure.scenario.summary.table.order = lapply(report.structure.scenario.summary.table,function(x) x[order.table,]) } else report.structure.scenario.summary.table.order = report.structure.scenario.summary.table # Add the labels report.structure.scenario.summary.table.order = lapply(report.structure.scenario.summary.table.order, function(x) { if ("Design Parameter" %in% colnames(x)) { x[,"Design Parameter"] = as.factor(x[,"Design Parameter"]) levels(x[,"Design Parameter"]) = custom.label.design.parameter$label } if ("Outcome Parameter" %in% colnames(x)) { x[,"Outcome Parameter"] = as.factor(x[,"Outcome Parameter"]) levels(x[,"Outcome Parameter"]) = custom.label.outcome.parameter$label } if ("Sample Size" %in% colnames(x)) { x[,"Sample Size"] = as.factor(x[,"Sample Size"]) levels(x[,"Sample Size"]) = custom.label.sample.size$label } if ("Multiplicity Adjustment" %in% colnames(x)) { x[,"Multiplicity Adjustment"] = as.factor(x[,"Multiplicity Adjustment"]) levels(x[,"Multiplicity Adjustment"]) = custom.label.multiplicity.adjustment$label } return(x) }) # Change the Sample Size column name if Event has been used ChangeColNames = function(x) { colnames(x)[colnames(x)=="Sample Size"] <- "Event Set" x } if (event) report.structure.scenario.summary.table.order = lapply(report.structure.scenario.summary.table.order, ChangeColNames) # Create the object to return, i.e. 
a list with the parameters of the section/subsection and the table res = list() for (i in 1:nrow(report.structure)){ report.structure.temp = as.data.frame(report.structure[i,]) colnames(report.structure.temp) = colnames(report.structure) res[[i]] = list(section = list(number = report.structure.temp$section, title = report.structure.temp$title.section), subsection = list(number = report.structure.temp$subsection, title = report.structure.temp$title.subsection), parameter = as.character(report.structure.temp[,c(section.by, subsection.by)]), results = report.structure.scenario.summary.table.order[[i]]) } # Return the list of results return(list(section = section.create, subsection = subsection.create, table.structure = res)) } # End of CreateTableStructure Mediana/R/samples.R0000644000176200001440000000071713434027611013606 0ustar liggesusers# Function: samples # Argument: Multiple character strings. # Description: This function is used mostly for the user's convenience. It simply creates a list of character strings and # can be used in cases where multiple samples need to be specified. #' @export samples = function(...) { args = list(...) nargs = length(args) if (nargs <= 0) stop("Samples function: At least one sample must be specified.") return(args) }Mediana/R/PoissonDist.R0000644000176200001440000000272113434027610014414 0ustar liggesusers###################################################################################################################### # Function: PoissonDist. # Argument: List of parameters (number of observations, mean). # Description: This function is used to generate Poisson outcomes. PoissonDist = function(parameter) { # Error checks if (missing(parameter)) stop("Data model: PoissonDist distribution: List of parameters must be provided.") if (is.null(parameter[[2]]$lambda)) stop("Data model: PoissonDist distribution: Lambda must be specified.") lambda = parameter[[2]]$lambda # Parameters check if (lambda <= 0) stop("Data model: PoissonDist distribution: Lambda must be positive.") # Determine the function call, either to generate distribution or to return description call = (parameter[[1]] == "description") # Generate random variables if (call == FALSE) { n = parameter[[1]] if (n%%1 != 0) stop("Data model: PoissonDist distribution: Number of observations must be an integer.") if (n <= 0) stop("Data model: PoissonDist distribution: Number of observations must be positive.") result = stats::rpois(n = n, lambda = lambda) } else { # Provide information about the distribution function if (call == TRUE) { # Labels of distributional parameters result = list(list(lambda = "lambda"), list("Poisson")) } } return(result) } # End of PoissonDistMediana/R/MultAdjStrategy.default.R0000644000176200001440000000064213434027610016644 0ustar liggesusers###################################################################################################################### # Function: MultAdjStrategy.default. # Argument: MultAdjProc object # Description: This function is used to create an object of class MultAdjStrategy. #' @export MultAdjStrategy.default = function(...) { stop("MultAdjStrategy: this function only accepts objects of class MultAdjProc") }Mediana/R/MultAdj.R0000644000176200001440000000052313434027610013474 0ustar liggesusers###################################################################################################################### # Function: MultAdj. # Argument: .... 
# Description: This function is used to call the corresponding function according to the class of the argument. #' @export MultAdj = function(...) { UseMethod("MultAdj") }Mediana/R/RestrictedClaimPower.R0000644000176200001440000000215313434027610016230 0ustar liggesusers############################################################################################################################ # Function: RestrictedClaimPower # Argument: Test results (p-values) across multiple simulation runs (vector or matrix), statistic results, # criterion parameter (Type I error rate and Influence cutoff). # Description: Compute probability of restricted claim (new treatment is effective in the target population only) RestrictedClaimPower = function(test.result, statistic.result, parameter) { # Error check if (is.null(parameter$alpha)) stop("Evaluation model: RestrictedClaimPower: alpha parameter must be specified.") if (is.null(parameter$cutoff_influence)) stop("Evaluation model: RestrictedClaimPower: cutoff parameter must be specified.") alpha = parameter$alpha cutoff_influence = parameter$cutoff_influence significant = ((test.result[,1] > alpha & test.result[,2] <= alpha) | (test.result[,1] <= alpha & test.result[,2] <= alpha & statistic.result[,1] < cutoff_influence)) power = mean(significant) return(power) } # End of RestrictedClaimPower Mediana/R/EvaluationModel.Criterion.R0000644000176200001440000000116013434027610017157 0ustar liggesusers###################################################################################################################### # Function: EvaluationModel.Criterion # Argument: Criterion object. # Description: This function is called by default if the class of the argument is an Criterion object. #' @export EvaluationModel.Criterion = function(criterion, ...) { evaluationmodel = EvaluationModel() evaluationmodel = evaluationmodel + criterion args = list(...) if (length(args)>0) { for (i in 1:length(args)){ evaluationmodel = evaluationmodel + args[[i]] } } return(evaluationmodel) }Mediana/R/GLMNegBinomTest.R0000644000176200001440000000425713434027610015042 0ustar liggesusers###################################################################################################################### # Function: GLMNegBinomTest. # Argument: Data set and parameter (call type). # Description: Computes one-sided p-value based on Negative-Binomial regression. 
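# Illustrative sketch (hypothetical; mirrors the computation in the function body below and
# assumes the MASS package is installed):
#   fit = MASS::glm.nb(RESPONSE ~ TRT, data = data.complete)
#   z = summary(fit)$coefficients["TRT2", "z value"]
#   p = stats::pnorm(z, lower.tail = FALSE)  # one-sided p-value when larger = TRUE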
GLMNegBinomTest = function(sample.list, parameter) { # Determine the function call, either to generate the p-value or to return description call = (parameter[[1]] == "Description") if (call == FALSE | is.na(call)) { # No parameters are defined if (is.na(parameter[[2]])) { larger = TRUE } else { if (!all(names(parameter[[2]]) %in% c("larger"))) stop("Analysis model: GLMNegBinomTest test: this function accepts only one argument (larger)") # Parameters are defined but not the larger argument if (!is.logical(parameter[[2]]$larger)) stop("Analysis model: GLMNegBinomTest test: the larger argument must be logical (TRUE or FALSE).") larger = parameter[[2]]$larger } # Sample list is assumed to include two data frames that represent two analysis samples # Outcomes in Sample 1 outcome1 = sample.list[[1]][, "outcome"] # Remove the missing values due to dropouts/incomplete observations outcome1.complete = outcome1[stats::complete.cases(outcome1)] # Outcomes in Sample 2 outcome2 = sample.list[[2]][, "outcome"] # Remove the missing values due to dropouts/incomplete observations outcome2.complete = outcome2[stats::complete.cases(outcome2)] # Data frame data.complete = data.frame(rbind(cbind(2, outcome2.complete), cbind(1, outcome1.complete))) colnames(data.complete) = c("TRT", "RESPONSE") data.complete$TRT=as.factor(data.complete$TRT) # One-sided p-value (to be checked) # result = summary(MASS::glm.nb(RESPONSE ~ TRT, data = data.complete))$coefficients["TRT2", "Pr(>|z|)"]/2 z = summary(MASS::glm.nb(RESPONSE ~ TRT, data = data.complete))$coefficients["TRT2", "z value"] result = stats::pnorm(z, lower.tail = !larger) } else if (call == TRUE) { result=list("Negative-binomial regression test") } return(result) } # End of GLMNegBinomTest Mediana/R/HazardRatioCoxStat.R0000644000176200001440000000410113434027610015646 0ustar liggesusers###################################################################################################################### # Compute the hazard ratio based on non-missing values in the combined sample HazardRatioCoxStat = function(sample.list, parameter) { # Determine the function call, either to generate the statistic or to return description call = (parameter[[1]] == "Description") if (call == FALSE | is.na(call)) { # Error checks if (length(sample.list)!=2) stop("Analysis model: Two samples must be specified in the HazardRatioCoxStat statistic.") # Outcomes in Sample 1 outcome1 = sample.list[[1]][, "outcome"] # Remove the missing values due to dropouts/incomplete observations outcome1.complete = outcome1[stats::complete.cases(outcome1)] # Observed events in Sample 1 (negation of censoring indicators) event1 = !sample.list[[1]][, "patient.censor.indicator"] event1.complete = event1[stats::complete.cases(outcome1)] # Sample size in Sample 1 n1 = length(outcome1.complete) # Outcomes in Sample 2 outcome2 = sample.list[[2]][, "outcome"] # Remove the missing values due to dropouts/incomplete observations outcome2.complete = outcome2[stats::complete.cases(outcome2)] # Observed events in Sample 2 (negation of censoring indicators) event2 = !sample.list[[2]][, "patient.censor.indicator"] event2.complete = event2[stats::complete.cases(outcome2)] # Sample size in Sample 2 n2 = length(outcome2.complete) # Create combined samples of outcomes, censoring indicators (all events are observed) and treatment indicators outcome = c(outcome1.complete, outcome2.complete) event = c(event1.complete, event2.complete) treatment = c(rep(0, n1), rep(1, n2)) # Get the HR from the Cox method result = 
summary(survival::coxph(survival::Surv(outcome, event) ~ treatment))$coef[,"exp(coef)"] } else if (call == TRUE) { result = list("Hazard Ratio (Cox)") } return(result) } # End of HazardRatioCoxStatMediana/R/NormalParamAdj.R0000644000176200001440000000546713434027610014775 0ustar liggesusers###################################################################################################################### # Function: NormalParamAdj. # Argument: p, Vector of p-values (1 x m) # par, List of procedure parameters: vector of hypothesis weights (1 x m) and correlation matrix (m x m) # Description: Parametric multiple testing procedure based on a multivariate normal distribution NormalParamAdj = function(p, par) { # Determine the function call, either to generate the p-value or to return description call = (par[[1]] == "Description") if (any(call == FALSE) | any(is.na(call))) { # Number of p-values p = unlist(p) m = length(p) # Extract the vector of hypothesis weights (1 x m) and correlation matrix (m x m) # If no weights are provided, the hypotheses are assumed to be equally weighted if (is.null(par[[2]]$weight)) { w = rep(1/m, m) } else { w = unlist(par[[2]]$weight) } if (is.null(par[[2]]$corr)) stop("Analysis model: Parametric multiple testing procedure: Correlation matrix must be specified.") corr = par[[2]]$corr # Error checks if (length(w) != m) stop("Analysis model: Parametric multiple testing procedure: Length of the weight vector must be equal to the number of hypotheses.") # Compare with a small tolerance to avoid spurious failures caused by floating-point rounding if (abs(sum(w) - 1) > 1e-8) stop("Analysis model: Parametric multiple testing procedure: Hypothesis weights must add up to 1.") if (any(w < 0)) stop("Analysis model: Parametric multiple testing procedure: Hypothesis weights must be non-negative.") if (sum(dim(corr) == c(m, m)) != 2) stop("Analysis model: Parametric multiple testing procedure: Correlation matrix is not correctly defined.") if (det(corr) <= 0) stop("Analysis model: Parametric multiple testing procedure: Correlation matrix must be positive definite.") # Compute test statistics based on a normal distribution stat = stats::qnorm(1 - p) # Adjusted p-values computed using a multivariate normal distribution function adjpvalue = sapply(stat, NormalParamDist, w, corr) result = adjpvalue } else if (call == TRUE) { # Number of p-values p = unlist(p) m = length(p) if (is.null(par[[2]]$weight)) { w = rep(1/m, m) } else { w = unlist(par[[2]]$weight) } if (is.null(par[[2]]$corr)) stop("Analysis model: Parametric multiple testing procedure: Correlation matrix must be specified.") corr = par[[2]]$corr weight = paste0("Hypothesis weights={", paste(round(w, 3), collapse = ","),"}") corr = paste0("Correlation matrix={", paste(as.vector(t(corr)), collapse = ","),"}") result = list(list("Normal parametric multiple testing procedure"), list(weight, corr)) } return(result) } # End of NormalParamAdjMediana/R/MultipleSequenceGatekeepingAdj.R0000644000176200001440000001707313434027610020213 0ustar liggesusers###################################################################################################################### # Function: MultipleSequenceGatekeepingAdj. # Argument: rawp, Raw p-value. 
# par, List of procedure parameters: vector of families (1 x m); vector of component procedure labels ('BonferroniAdj.global' or 'HolmAdj.global' or 'HochbergAdj.global' or 'HommelAdj.global') (1 x nfam); vector of truncation parameters for component procedures used in individual families (1 x nfam) # Description: Computation of adjusted p-values for gatekeeping procedures based on the modified mixture methods (ref Dmitrienko et al. (2014)) MultipleSequenceGatekeepingAdj = function(rawp, par) { # Determine the function call, either to generate the p-value or to return description call = (par[[1]] == "Description") if (any(call == FALSE) | any(is.na(call))) { # Error check if (is.null(par[[2]]$family)) stop("Analysis model: Multiple sequence gatekeeping procedure: Hypothesis families must be specified.") if (is.null(par[[2]]$proc)) stop("Analysis model: Multiple sequence gatekeeping procedure: Procedures must be specified.") if (is.null(par[[2]]$gamma)) stop("Analysis model: Multiple sequence gatekeeping procedure: Gamma must be specified.") # Number of p-values nhyp = length(rawp) # Extract the list of hypothesis families family = par[[2]]$family # Number of families in the multiplicity problem nfam = length(family) # Number of null hypotheses per family nperfam = nhyp/nfam # Extract the vector of procedures (1 x nfam) proc = paste(unlist(par[[2]]$proc), ".global", sep = "") # Extract the vector of truncation parameters (1 x nfam) gamma = unlist(par[[2]]$gamma) # Simple error checks if (nhyp != length(unlist(family))) stop("Multiple-sequence gatekeeping adjustment: Length of the p-value vector must be equal to the number of hypotheses.") if (length(proc) != nfam) stop("Multiple-sequence gatekeeping adjustment: Length of the procedure vector must be equal to the number of families.") else { for (i in 1:nfam) { if (proc[i] %in% c("BonferroniAdj.global", "HolmAdj.global", "HochbergAdj.global", "HommelAdj.global") == FALSE) stop("Multiple-sequence gatekeeping adjustment: Only Bonferroni (BonferroniAdj), Holm (HolmAdj), Hochberg (HochbergAdj) and Hommel (HommelAdj) component procedures are supported.") } } if (length(gamma) != nfam) stop("Multiple-sequence gatekeeping adjustment: Length of the gamma vector must be equal to the number of families.") else { for (i in 1:nfam) { if (gamma[i] < 0 | gamma[i] > 1) stop("Multiple-sequence gatekeeping adjustment: Gamma must be between 0 (included) and 1 (included).") else if (proc[i] == "BonferroniAdj.global" & gamma[i] != 0) stop("Multiple-sequence gatekeeping adjustment: Gamma must be set to 0 for the global Bonferroni procedure.") } } # Number of intersection hypotheses in the closed family nint = 2^nhyp - 1 # Construct the intersection index sets (int_orig) before the logical restrictions are applied. Each row is a vector of binary indicators (1 if the hypothesis is # included in the original index set and 0 otherwise) int_orig = matrix(0, nint, nhyp) for (i in 1:nhyp) { for (j in 0:(nint - 1)) { k = floor(j/2^(nhyp - i)) if (k/2 == floor(k/2)) int_orig[j + 1, i] = 1 } } # Construct the intersection index sets (int_rest) and family index sets (fam_rest) after the logical restrictions are applied. 
# Each row is a vector of binary indicators (1 if the hypothesis is included in the restricted index set and 0 otherwise)
int_rest = int_orig
fam_rest = matrix(1, nint, nhyp)
for (i in 1:nint) {
  for (j in 1:(nfam - 1)) {
    for (k in 1:nperfam) {
      # Index of the current null hypothesis in Family j
      m = (j - 1) * nperfam + k
      # If this null hypothesis is included in the intersection hypothesis, all dependent null hypotheses must be removed from the intersection hypothesis
      if (int_orig[i, m] == 1) {
        for (l in 1:(nfam - j)) {
          int_rest[i, m + l * nperfam] = 0
          fam_rest[i, m + l * nperfam] = 0
        }
      }
    }
  }
}
# Number of null hypotheses from each family included in each intersection before the logical restrictions are applied
korig = matrix(0, nint, nfam)
# Number of null hypotheses from each family included in each intersection after the logical restrictions are applied
krest = matrix(0, nint, nfam)
# Number of null hypotheses in each family after the logical restrictions are applied
nrest = matrix(0, nint, nfam)
# Compute korig, krest and nrest
for (j in 1:nfam) {
  # Index vector in the current family
  # index = which(family == j)
  index = family[[j]]
  korig[, j] = apply(as.matrix(int_orig[, index]), 1, sum)
  krest[, j] = apply(as.matrix(int_rest[, index]), 1, sum)
  nrest[, j] = apply(as.matrix(fam_rest[, index]), 1, sum)
}
# Vector of intersection p-values
pint = rep(1, nint)
# Matrix of component p-values within each intersection
pcomp = matrix(0, nint, nfam)
# Matrix of family weights within each intersection
c = matrix(0, nint, nfam)
# P-value for each hypothesis within each intersection
p = matrix(0, nint, nhyp)
# Compute the intersection p-value for each intersection hypothesis
for (i in 1:nint) {
  # Compute component p-values
  for (j in 1:nfam) {
    # Consider non-empty restricted index sets
    if (krest[i, j] > 0) {
      # Restricted index set in the current family
      int = int_rest[i, family[[j]]]
      # Set of p-values in the current family
      pv = rawp[family[[j]]]
      # Select raw p-values included in the restricted index set
      pselected = pv[int == 1]
      # Total number of hypotheses used in the computation of the component p-value
      tot = nrest[i, j]
      pcomp[i, j] = do.call(proc[j], list(pselected, tot, gamma[j]))
    } else if (krest[i, j] == 0) pcomp[i, j] = 1
  }
  # Compute family weights
  c[i, 1] = 1
  for (j in 2:nfam) {
    c[i, j] = c[i, j - 1] * (1 - errorfrac(krest[i, j - 1], nrest[i, j - 1], gamma[j - 1]))
  }
  # Compute the intersection p-value for the current intersection hypothesis
  pint[i] = pmin(1, min(ifelse(c[i, ] > 0, pcomp[i, ] / c[i, ], NA), na.rm = TRUE))
  # Compute the p-value for each hypothesis within the current intersection
  p[i, ] = int_orig[i, ] * pint[i]
}
# Compute adjusted p-values
adjustedp = apply(p, 2, max)
result = adjustedp
} else if (call == TRUE) {
  family = par[[2]]$family
  nfam = length(family)
  proc = unlist(par[[2]]$proc)
  gamma = unlist(par[[2]]$gamma)
  test.id = unlist(par[[3]])
  proc.par = data.frame(nrow = nfam, ncol = 4)
  for (i in 1:nfam) {
    proc.par[i, 1] = i
    proc.par[i, 2] = paste0("{", paste(test.id[family[[i]]], collapse = ", "), "}")
    proc.par[i, 3] = proc[i]
    proc.par[i, 4] = gamma[i]
  }
  colnames(proc.par) = c("Family", "Tests", "Component procedure", "Truncation parameter")
  result = list(list("Multiple-sequence gatekeeping"), list(proc.par))
}
return(result)
} # End of MultipleSequenceGatekeepingAdj
Mediana/R/capwords.R0000644000176200001440000000106013434027610013753 0ustar liggesusers######################################################################################################################
# Function: capwords.
# Argument: String.
# Description: This function is used to capitalize the first letter of every word.
capwords <- function(s, strict = FALSE) {
  cap <- function(s) paste(toupper(substring(s, 1, 1)),
                           {s <- substring(s, 2); if(strict) tolower(s) else s},
                           sep = "", collapse = " ")
  sapply(strsplit(s, split = " "), cap, USE.NAMES = !is.null(names(s)))
}
Mediana/R/CreateDataStack.R0000644000176200001440000003220513434027610015121 0ustar liggesusers#######################################################################################################################
# Function: CreateDataStack.
# Argument: Data model and number of simulations.
# Description: Generates a data stack, which is a collection of individual data sets (one data set per simulation run).
CreateDataStack = function(data.model, n.sims, seed = NULL) {

  # Perform error checks for the data model and create an internal data structure
  data.structure = CreateDataStructure(data.model)

  # Check the seed if defined (the seed should be defined only when the user generates the data stack directly)
  if (!is.null(seed)) {
    if (!is.numeric(seed)) stop("Seed must be an integer.")
    if (length(seed) > 1) stop("Seed: Only one value must be specified.")
    if (nchar(as.character(seed)) > 10) stop("Seed must have at most 10 digits.")
  }

  # Create short names for data model parameters
  outcome.dist = data.structure$outcome$outcome.dist
  outcome.type = data.structure$outcome$outcome.type
  outcome.dist.dim = data.structure$outcome$outcome.dist.dim
  data.sample.id = data.structure$id
  data.size = data.structure$sample.size.set
  data.event = data.structure$event.set
  rando.ratio = data.structure$rando.ratio
  data.design = data.structure$design.parameter.set
  data.outcome = data.structure$outcome.parameter.set

  # Number of outcome parameter sets, sample size sets and design parameter sets
  n.outcome.parameter.sets = length(data.structure$outcome.parameter.set)
  if (!is.null(data.structure$design.parameter.set)) {
    n.design.parameter.sets = length(data.structure$design.parameter.set)
  } else {
    n.design.parameter.sets = 1
  }

  # Determine whether sample sizes or events were specified and, for each data sample,
  # compute the maximum sample size (or maximum number of events) across the sets
  sample.size = any(!is.na(data.size))
  event = any(!is.na(data.event))
  if (sample.size) {
    n.sample.size.event.sets = dim(data.structure$sample.size.set)[1]
    max.sample.size = apply(data.size, 2, max)
  } else if (event) {
    n.sample.size.event.sets = dim(data.structure$event.set)[1]
    max.event = apply(data.event, 2, max)
  }

  # Number of data samples specified in the data model
  n.data.samples = length(data.sample.id)

  # Create the data stack, which is represented by a list of data sets (one data set for each simulation run)
  data.set = list()

  # Create a grid of the data scenario factors (outcome parameter, sample size and design parameter)
  data.scenario.grid = expand.grid(design.parameter.set = 1:n.design.parameter.sets,
                                   outcome.parameter.set = 1:n.outcome.parameter.sets,
                                   sample.size.set = 1:n.sample.size.event.sets)
  colnames(data.scenario.grid) = c("design.parameter", "outcome.parameter", "sample.size")

  # Number of data scenarios (number of unique combinations of the data scenario factors)
  n.data.scenarios = dim(data.scenario.grid)[1]

  # Create a grid of the outcome and design scenario factors (outcome parameter and design parameter)
  data.design.outcome.grid = expand.grid(design.parameter.set = 1:n.design.parameter.sets,
                                         outcome.parameter.set = 1:n.outcome.parameter.sets)
  colnames(data.design.outcome.grid) = c("design.parameter", "outcome.parameter")

  # Number of design and outcome scenarios (number of unique combinations of the design and outcome scenario factors)
  n.design.outcome.scenarios = dim(data.design.outcome.grid)[1]

  # Set the seed
  if (!is.null(seed)) set.seed(seed)

  # Loop over the simulations
  for (sim.index in 1:n.sims) {

    # If sample sizes are used (fixed number of patients in each sample)
    if (sample.size) {

      design.outcome.variables = vector(n.design.outcome.scenarios, mode = "list")

      # Loop over the design and outcome grid
      for (design.outcome.index in 1:n.design.outcome.scenarios) {

        # Get the current design index and parameters
        current.design.index = data.design.outcome.grid[design.outcome.index, "design.parameter"]
        current.design.parameter = data.design[[current.design.index]]

        # Get the outcome index and parameters
        current.outcome.index = data.design.outcome.grid[design.outcome.index, "outcome.parameter"]
        current.outcome.parameter = data.outcome[[current.outcome.index]]

        # Initialize the data frame list
        df = vector(n.data.samples, mode = "list")

        # Loop over the data samples
        for (data.sample.index in 1:n.data.samples) {

          # Maximum sample size across the sample size sets for the current data sample
          current.max.sample.size = max.sample.size[data.sample.index]

          # Outcome parameter for the current data sample
          current.outcome = list(dist = outcome.dist, par = current.outcome.parameter[[data.sample.index]], type = outcome.type)

          # Get the current sample id
          current.sample.id = unlist(data.sample.id[[data.sample.index]])

          # Generate the data for the current design and outcome parameters
          df[[data.sample.index]] = GeneratePatients(current.design.parameter, current.outcome, current.sample.id, current.max.sample.size)

        } # Loop over the data samples

        design.outcome.variables[[design.outcome.index]] = list(design.parameter = current.design.index,
                                                                outcome.parameter = current.outcome.index,
                                                                sample = df)

      } # Loop over the design and outcome grid

      # Create the data scenario list (one element for each unique combination of the data scenario factors)
      data.scenario = list()

      # Loop over the data scenarios
      for (data.scenario.index in 1:n.data.scenarios) {

        design.index = data.scenario.grid[data.scenario.index, 1]
        outcome.index = data.scenario.grid[data.scenario.index, 2]
        sample.size.index = data.scenario.grid[data.scenario.index, 3]

        # Get the design.outcome variables corresponding to the current data scenario
        current.design.outcome.index = sapply(design.outcome.variables, function(x) x$design.parameter == design.index & x$outcome.parameter == outcome.index)
        current.design.outcome.variables = design.outcome.variables[current.design.outcome.index][[1]]$sample

        # Get the sample size
        current.sample.size = data.size[sample.size.index, ]

        # Generate the data for the current data scenario
        data.scenario[[data.scenario.index]] = list(sample = CreateDataScenarioSampleSize(current.design.outcome.variables, current.sample.size))

      }

    } else if (event) {

      # If events are used, generate data until the required number of events for the first outcome is reached
      design.outcome.variables = vector(n.design.outcome.scenarios, mode = "list")

      # Loop over the design and outcome grid
      for (design.outcome.index in 1:n.design.outcome.scenarios) {

        # Get the current design index and parameters
        current.design.index = data.design.outcome.grid[design.outcome.index, "design.parameter"]
        current.design.parameter = data.design[[current.design.index]]

        # Get the outcome index and parameters
        current.outcome.index = data.design.outcome.grid[design.outcome.index, "outcome.parameter"]
        current.outcome.parameter = data.outcome[[current.outcome.index]]

        # Initialize the data frame list
        df = vector(n.data.samples, mode = "list")

        # Initialize the temporary data frame list
        df.temp = vector(n.data.samples, mode = "list")

        # Initialize the number of observed events
        n.observed.events = 0

        # Loop over the data samples to generate a first batch of data, sized according to the
        # maximum number of events required and allocated by the randomization ratio
        for (data.sample.index in 1:n.data.samples) {

          # Outcome parameter for the current data sample
          current.outcome = list(dist = outcome.dist, par = current.outcome.parameter[[data.sample.index]], type = outcome.type)

          # Get the current sample id
          current.sample.id = unlist(data.sample.id[[data.sample.index]])

          # Generate the data for the current design and outcome parameters
          df.temp[[data.sample.index]] = GeneratePatients(current.design.parameter, current.outcome, current.sample.id, rando.ratio[data.sample.index] * ceiling(max.event / sum(rando.ratio)))

          # Merge the previously generated data with the temporary data
          if (!is.null(df[[data.sample.index]])) {
            data.temp = as.data.frame(mapply(rbind, lapply(df[[data.sample.index]], function(x) as.data.frame(x$data)), lapply(df.temp[[data.sample.index]], function(x) as.data.frame(x$data)), SIMPLIFY = FALSE))
            row.names(data.temp) = NULL
            df[[data.sample.index]] = lapply(df[[data.sample.index]], function(x) {return(list(id = x$id, outcome.type = x$outcome.type, data = as.matrix(data.temp)))})
          } else {
            df[[data.sample.index]] = df.temp[[data.sample.index]]
          }

        } # Loop over the data samples

        # Get the number of events observed across all samples for the primary endpoint
        n.observed.events = sum(unlist(lapply(df, function(x) {return(!x[[1]]$data[, "patient.censor.indicator"])})))

        # Loop until the maximum number of events required is observed
        while (n.observed.events < max.event) {

          # Loop over the data samples
          for (data.sample.index in 1:n.data.samples) {

            # Outcome parameter for the current data sample
            current.outcome = list(dist = outcome.dist, par = current.outcome.parameter[[data.sample.index]], type = outcome.type)

            # Get the current sample id
            current.sample.id = unlist(data.sample.id[[data.sample.index]])

            # Generate the data for the current design and outcome parameters
            df.temp[[data.sample.index]] = GeneratePatients(current.design.parameter, current.outcome, current.sample.id, rando.ratio[data.sample.index])

            # Merge the previously generated data with the temporary data
            if (!is.null(df[[data.sample.index]])) {
              data.temp = lapply(mapply(rbind, lapply(df[[data.sample.index]], function(x) as.data.frame(x$data)), lapply(df.temp[[data.sample.index]], function(x) as.data.frame(x$data)), SIMPLIFY = FALSE), function(x) as.matrix(x))
              #df[[data.sample.index]] = mapply(function(x,y) {return(list(id=x$id, outcome.type = x$outcome.type, data = as.matrix(y, row.names = NULL)))}, x=df[[data.sample.index]], y=data.temp, SIMPLIFY=FALSE)
              df[[data.sample.index]] = mapply(function(x, y) {return(list(id = x$id, outcome.type = x$outcome.type, data = as.data.frame(y)))}, x = df[[data.sample.index]], y = data.temp, SIMPLIFY = FALSE)
            } else {
              df[[data.sample.index]] = df.temp[[data.sample.index]]
            }

          } # Loop over the data samples

          # Get the number of events observed across all samples for the primary endpoint
          n.observed.events = sum(unlist(lapply(df, function(x) {return(!x[[1]]$data[, "patient.censor.indicator"])})))

        } # Loop until the maximum number of events required is observed

        design.outcome.variables[[design.outcome.index]] = list(design.parameter = current.design.index,
                                                                outcome.parameter = current.outcome.index,
                                                                sample = df)

      } # Loop over the design and outcome grid

      # Create the data scenario list (one element for each unique combination of the data scenario factors)
      data.scenario = list()

      # Loop over the data scenarios
      for (data.scenario.index in 1:n.data.scenarios) {

        design.index = data.scenario.grid[data.scenario.index, 1]
        outcome.index = data.scenario.grid[data.scenario.index, 2]
        event.index = data.scenario.grid[data.scenario.index, 3]

        # Get the design.outcome variables corresponding to the current data scenario
        current.design.outcome.index = sapply(design.outcome.variables, function(x) x$design.parameter == design.index & x$outcome.parameter == outcome.index)
        current.design.outcome.variables = design.outcome.variables[current.design.outcome.index][[1]]$sample

        # Get the number of events
        current.events = data.event[event.index, ]

        # Generate the data for the current data scenario
        data.scenario[[data.scenario.index]] = list(sample = CreateDataScenarioEvent(current.design.outcome.variables, current.events, rando.ratio))

      } # Loop over the data scenarios

    } # If event

    data.set[[sim.index]] = list(data.scenario = data.scenario)

  } # Loop over the simulations

  # Create the data stack
  data.stack = list(description = "data.stack",
                    data.set = data.set,
                    data.scenario.grid = data.scenario.grid,
                    data.structure = data.structure
                    #,
                    #n.sims = n.sims,
                    #seed = seed
  )

  class(data.stack) = "DataStack"
  return(data.stack)

} # End of CreateDataStack
Mediana/R/PresentationModel.Section.R0000644000176200001440000000116513434027610017176 0ustar liggesusers######################################################################################################################
# Function: PresentationModel.Section
# Argument: Section object.
# Description: This function is called by default if the class of the argument is a Section object.
#' @export
PresentationModel.Section = function(section, ...) {
  presentationmodel = PresentationModel()
  presentationmodel = presentationmodel + section
  args = list(...)
  if (length(args) > 0) {
    for (i in 1:length(args)) {
      presentationmodel = presentationmodel + args[[i]]
    }
  }
  return(presentationmodel)
}
Mediana/R/is.AnalysisModel.R0000644000176200001440000000050213434027611015310 0ustar liggesusers######################################################################################################################
# Function: is.AnalysisModel.
# Argument: an object.
# Description: Returns TRUE if the object is of class AnalysisModel.
is.AnalysisModel = function(arg) {
  return(any(class(arg) == "AnalysisModel"))
}
Mediana/R/is.EvaluationModel.R0000644000176200001440000000051313434027611015636 0ustar liggesusers######################################################################################################################
# Function: is.EvaluationModel.
# Argument: an object.
# Description: Returns TRUE if the object is of class EvaluationModel.
is.EvaluationModel = function(arg) {
  return(any(class(arg) == "EvaluationModel"))
}
Mediana/R/Interim.R0000644000176200001440000000136213434027610013545 0ustar liggesusers######################################################################################################################
# Function: Interim.
# Argument: Sample (sample), Criterion (criterion) and Fraction (fraction).
# Description: This function is used to create an object of class Interim.
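# Illustrative usage sketch (the sample and criterion names below are hypothetical
# placeholders, not values mandated by the package; kept as a comment so it is not
# run when the file is sourced):
# interim.look = Interim(sample = "Treatment",
#                        criterion = "FutilityCriterion",
#                        fraction = 0.5)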
Interim = function(sample = NULL, criterion = NULL, fraction = NULL) {
  if (is.null(sample)) stop("Interim: a sample must be defined")
  if (is.null(criterion)) stop("Interim: a criterion must be defined")
  if (is.null(fraction)) stop("Interim: a fraction must be defined")

  interim = list(sample = sample,
                 criterion = criterion,
                 fraction = fraction)

  class(interim) = "Interim"
  return(interim)
  invisible(interim)
}
Mediana/R/RatioEffectSizeCoxEventStat.R0000644000176200001440000000210213434027610017475 0ustar liggesusers######################################################################################################################
# Compute the ratio of effect sizes for HR (time-to-event) based on non-missing values in the combined sample
RatioEffectSizeCoxEventStat = function(sample.list, parameter) {

  # Determine the function call, either to generate the statistic or to return description
  call = (parameter[[1]] == "Description")

  if (call == FALSE | is.na(call)) {

    # Error checks
    if (length(sample.list) != 4) stop("Analysis model: Four samples must be specified in the RatioEffectSizeCoxEventStat statistic.")

    result1 = EffectSizeCoxEventStat(list(sample.list[[1]], sample.list[[2]]), parameter)
    result2 = EffectSizeCoxEventStat(list(sample.list[[3]], sample.list[[4]]), parameter)

    # Calculate the ratio of effect sizes
    result = result1 / result2

  } else if (call == TRUE) {
    result = list("Ratio of effect size (event - Log-Rank)")
  }

  return(result)

} # End of RatioEffectSizeCoxEventStat
Mediana/R/BonferroniAdj.CI.R0000644000176200001440000000435213434027610015154 0ustar liggesusers######################################################################################################################
# Function: BonferroniAdj.CI
# Argument: est, Vector of point estimates (1 x m)
#           par, List of procedure parameters: vector of hypothesis weights (1 x m)
# Description: Bonferroni multiple testing procedure.
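# Worked sketch of the lower-limit formula used below (all numbers hypothetical):
# with m = 2 equally weighted hypotheses (w = 0.5 each), est = c(1.2, 0.8),
# stderror = 0.4 and covprob = 0.975 (so alpha = 0.025), each lower limit is
# est - stderror * stats::qnorm(1 - alpha * w), i.e. approximately c(0.30, -0.10).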
BonferroniAdj.CI = function(est, par) {

  # Number of point estimates
  m = length(est)

  # Extract the vector of hypothesis weights (1 x m)
  if (is.null(par[[2]]$weight)) w = rep(1/m, m) else w = par[[2]]$weight
  # Extract the sample size
  if (is.null(par[[2]]$n)) stop("Bonferroni procedure: Sample size must be specified (n).")
  n = par[[2]]$n
  # Extract the standard deviation
  if (is.null(par[[2]]$sd)) stop("Bonferroni procedure: Standard deviation must be specified (sd).")
  sd = par[[2]]$sd
  # Extract the simultaneous coverage probability
  if (is.null(par[[2]]$covprob)) stop("Bonferroni procedure: Coverage probability must be specified (covprob).")
  covprob = par[[2]]$covprob

  # Error checks
  if (length(w) != m) stop("Bonferroni procedure: Length of the weight vector must be equal to the number of hypotheses.")
  if (m != length(est)) stop("Bonferroni procedure: Length of the point estimate vector must be equal to the number of hypotheses.")
  if (m != length(sd)) stop("Bonferroni procedure: Length of the standard deviation vector must be equal to the number of hypotheses.")
  if (sum(w) != 1) stop("Bonferroni procedure: Hypothesis weights must add up to 1.")
  if (any(w < 0)) stop("Bonferroni procedure: Hypothesis weights must be greater than 0.")
  if (covprob >= 1 | covprob <= 0) stop("Bonferroni procedure: Simultaneous coverage probability must be greater than 0 and less than 1.")

  # Standard errors
  stderror = sd * sqrt(2/n)
  # T-statistics associated with each test
  stat = est/stderror
  # Compute degrees of freedom
  nu = 2 * (n - 1)
  # Compute raw one-sided p-values
  rawp = 1 - stats::pt(stat, nu)
  # Compute the adjusted p-values (not used further in the Bonferroni limits)
  adjustpval = BonferroniAdj(rawp, list("Analysis", list(weight = w)))
  # Alpha
  alpha = 1 - covprob
  # Lower simultaneous confidence limit
  ci = est - stderror * stats::qnorm(1 - (alpha * w))

  return(ci)
} # End of BonferroniAdj.CI
Mediana/R/LogrankTest.R0000644000176200001440000000660613434027610014401 0ustar liggesusers######################################################################################################################
# Function: LogrankTest.
# Argument: Data set and parameter.
# Description: Computes one-sided p-value based on log-rank test.
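# Sketch of the computation performed below: U is the sum of observed-minus-expected
# events in Sample 1 across event times and V is the sum of the corresponding
# hypergeometric variances, so the one-sided p-value is
# stats::pnorm(U/sqrt(V), lower.tail = !larger).
# As a hedged cross-check, for two groups (U/sqrt(V))^2 should match the chi-square
# statistic reported by survival::survdiff on the same data.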
LogrankTest = function(sample.list, parameter) {

  # Determine the function call, either to generate the p-value or to return description
  call = (parameter[[1]] == "Description")

  if (call == FALSE | is.na(call)) {

    # No parameters are defined
    if (is.na(parameter[[2]])) {
      larger = TRUE
    } else {
      if (!all(names(parameter[[2]]) %in% c("larger"))) stop("Analysis model: LogrankTest: this function accepts only one argument (larger).")
      # Check that the larger argument is logical
      if (!is.logical(parameter[[2]]$larger)) stop("Analysis model: LogrankTest: the larger argument must be logical (TRUE or FALSE).")
      larger = parameter[[2]]$larger
    }

    # Sample list is assumed to include two data frames that represent two analysis samples
    # Outcomes in Sample 1
    outcome1 = sample.list[[1]][, "outcome"]
    # Remove the missing values due to dropouts/incomplete observations
    outcome1.complete = outcome1[stats::complete.cases(outcome1)]
    # Observed events in Sample 1 (negation of censoring indicators)
    event1 = !sample.list[[1]][, "patient.censor.indicator"]
    event1.complete = event1[stats::complete.cases(outcome1)]
    # Sample size in Sample 1
    n1 = length(outcome1.complete)

    # Outcomes in Sample 2
    outcome2 = sample.list[[2]][, "outcome"]
    # Remove the missing values due to dropouts/incomplete observations
    outcome2.complete = outcome2[stats::complete.cases(outcome2)]
    # Observed events in Sample 2 (negation of censoring indicators)
    event2 = !sample.list[[2]][, "patient.censor.indicator"]
    event2.complete = event2[stats::complete.cases(outcome2)]
    # Sample size in Sample 2
    n2 = length(outcome2.complete)

    # Create combined samples of outcomes, censoring indicators (all events are observed) and treatment indicators
    outcome = c(outcome1.complete, outcome2.complete)
    event = c(event1.complete, event2.complete)
    treatment = c(rep(0, n1), rep(1, n2))

    data = data.frame(time = outcome, event = event, treatment = treatment)
    data = data[order(data$time), ]
    data$event1 = data$event * (data$treatment == 0)
    data$event2 = data$event * (data$treatment == 1)
    data$eventtot = data$event1 + data$event2
    data$n.risk1.prior = length(outcome1) - cumsum(data$treatment == 0) + (data$treatment == 0)
    data$n.risk2.prior = length(outcome2) - cumsum(data$treatment == 1) + (data$treatment == 1)
    data$n.risk.prior = data$n.risk1.prior + data$n.risk2.prior
    data$e1 = data$n.risk1.prior * data$eventtot / data$n.risk.prior
    data$u1 = data$event1 - data$e1
    data$v1 = ifelse(data$n.risk.prior > 1,
                     (data$n.risk1.prior * data$n.risk2.prior * data$eventtot * (data$n.risk.prior - data$eventtot)) / (data$n.risk.prior**2 * (data$n.risk.prior - 1)),
                     0)

    stat.test = sum(data$u1) / sqrt(sum(data$v1))

    # Compute one-sided p-value
    result = stats::pnorm(stat.test, lower.tail = !larger)

  } else if (call == TRUE) {
    result = list("Log-rank test")
  }

  return(result)
} # End of LogrankTest
Mediana/R/AnalysisStack.R0000644000176200001440000000061013434027610014702 0ustar liggesusers############################################################################################################################
# Function: AnalysisStack
# Argument: ....
# Description: This function generates analysis results according to the data model and analysis model
#' @export
AnalysisStack = function(data.model, analysis.model, sim.parameters) {
  UseMethod("AnalysisStack")
}
Mediana/R/AnalysisStack.default.R0000644000176200001440000000101213434027610016332 0ustar liggesusers############################################################################################################################
# Function: AnalysisStack
# Argument: ....
# Description: This function generates analysis results according to the data model and analysis model
#' @export
AnalysisStack.default = function(data.model, analysis.model, sim.parameters) {
  analysis.stack = PerformAnalysis(data.model, analysis.model, sim.parameters)
  class(analysis.stack) = "AnalysisStack"
  return(analysis.stack)
}
Mediana/R/errorfrac.R0000644000176200001440000000120513434027775014133 0ustar liggesusers######################################################################################################################
# Function: errorfrac.
# Argument: k: Number of null hypotheses included in the intersection within the family.
#           n: Total number of null hypotheses in the family.
#           gamma: Truncation parameter (0 <= gamma < 1).
# Description: Evaluates the error fraction function for a family based on Bonferroni, Holm, Hochberg or Hommel procedures.
errorfrac = function(k, n, gamma) {
  if (k > 0) {
    f = ifelse(k != n, gamma + (1 - gamma) * k/n, 1)
  } else if (k == 0) f = 0
  return(f)
} # End of errorfrac
Mediana/R/RatioEffectSizeContStat.R0000644000176200001440000000362013434027610016643 0ustar liggesusers######################################################################################################################
# Compute the ratio of effect sizes for continuous endpoints based on non-missing values in the combined sample
RatioEffectSizeContStat = function(sample.list, parameter) {

  # Determine the function call, either to generate the statistic or to return description
  call = (parameter[[1]] == "Description")

  if (call == FALSE | is.na(call)) {

    # Error checks
    if (length(sample.list) != 4) stop("Analysis model: Four samples must be specified in the RatioEffectSizeContStat statistic.")

    # Extract the four samples from the sample list
    sample1 = sample.list[[1]]
    sample2 = sample.list[[2]]
    sample3 = sample.list[[3]]
    sample4 = sample.list[[4]]

    # Select the outcome column and remove the missing values due to dropouts/incomplete observations
    outcome1 = sample1[, "outcome"]
    outcome2 = sample2[, "outcome"]
    mean1 = mean(stats::na.omit(outcome1))
    mean2 = mean(stats::na.omit(outcome2))
    sdcom1 = stats::sd(c(stats::na.omit(outcome1), stats::na.omit(outcome2)))
    result1 = (mean2 - mean1) / sdcom1

    # Select the outcome column and remove the missing values due to dropouts/incomplete observations
    outcome3 = sample3[, "outcome"]
    outcome4 = sample4[, "outcome"]
    mean3 = mean(stats::na.omit(outcome3))
    mean4 = mean(stats::na.omit(outcome4))
    sdcom2 = stats::sd(c(stats::na.omit(outcome3), stats::na.omit(outcome4)))
    result2 = (mean4 - mean3) / sdcom2

    # Calculate the ratio of effect sizes
    result = result1 / result2

  } else if (call == TRUE) {
    result = list("Ratio of effect size (continuous)")
  }

  return(result)
} # End of RatioEffectSizeContStat
Mediana/R/qdunnett.R0000644000176200001440000000113113434027611013773 0ustar liggesusers# qdunnett is a secondary function which computes a quantile of the Dunnett
# distribution in one-sided hypothesis testing problems with a balanced
# one-way layout and equally weighted null hypotheses
qdunnett <- function(x, df, m)
  # x: Argument
  # df: Number of degrees of freedom
  # m: Number of comparisons
{
  # Correlation matrix
  corr = matrix(0.5, m, m)
  diag(corr) = 1
  temp <- mvtnorm::qmvt(x, interval = c(0, 4), tail = "lower.tail", df = df,
                        delta = rep(0, m), corr = corr,
                        algorithm = mvtnorm::GenzBretz(maxpts = 25000, abseps = 0.00001, releps = 0))[1]
  return(temp$quantile)
} # End of qdunnett
Mediana/R/PropTestNI.R0000644000176200001440000000610013434027610014140 0ustar liggesusers#############################################################################################
# Function: PropTestNI.
# Argument: Data set and parameter (call type, Yates' correction and non-inferiority margin).
# Description: Computes one-sided p-value based on two-sample proportion test.
PropTestNI = function(sample.list, parameter) {

  # Determine the function call, either to generate the p-value or to return description
  call = (parameter[[1]] == "Description")

  if (call == FALSE | is.na(call)) {

    if (is.null(parameter[[2]]$margin)) stop("Analysis model: PropTestNI test: Non-inferiority margin must be specified.")
    if (parameter[[2]]$margin <= 0 | parameter[[2]]$margin > 1) stop("Analysis model: PropTestNI test: Non-inferiority margin must be strictly between 0 and 1.")

    # Non-inferiority margin
    margin = as.numeric(parameter[[2]]$margin)

    # Yates' correction is set to FALSE by default
    if (is.null(parameter[[2]]$yates)) yates = FALSE
    else {
      if (!is.logical(parameter[[2]]$yates)) stop("Analysis model: PropTestNI test: the yates argument must be logical (TRUE or FALSE).")
      yates = parameter[[2]]$yates
    }

    # Check if a larger treatment effect is expected for the second sample or not (default = TRUE)
    if (is.null(parameter[[2]]$larger)) larger = TRUE
    else {
      if (!is.logical(parameter[[2]]$larger)) stop("Analysis model: PropTestNI test: the larger argument must be logical (TRUE or FALSE).")
      larger = parameter[[2]]$larger
    }

    # Sample list is assumed to include two data frames that represent two analysis samples
    # Outcomes in Sample 1
    outcome1 = sample.list[[1]][, "outcome"]
    # Remove the missing values due to dropouts/incomplete observations
    outcome1.complete = outcome1[stats::complete.cases(outcome1)]

    # Outcomes in Sample 2
    outcome2 = sample.list[[2]][, "outcome"]
    # Remove the missing values due to dropouts/incomplete observations
    outcome2.complete = outcome2[stats::complete.cases(outcome2)]

    # One-sided p-value (treatment effect in sample 2 is expected to be greater than in sample 1)
    if (larger) result = stats::prop.test(c(min(sum(outcome2.complete) + margin * length(outcome2.complete), length(outcome2.complete)), sum(outcome1.complete)),
                                          n = c(length(outcome2.complete), length(outcome1.complete)),
                                          alternative = "greater", correct = yates)$p.value
    else result = stats::prop.test(c(min(sum(outcome1.complete) + margin * length(outcome1.complete), length(outcome1.complete)), sum(outcome2.complete)),
                                   n = c(length(outcome1.complete), length(outcome2.complete)),
                                   alternative = "greater", correct = yates)$p.value

  } else if (call == TRUE) {
    result = list("Non-inferiority test for proportions")
  }

  return(result)
} # End of PropTestNI
Mediana/R/DataModel.Design.R0000644000176200001440000000105113434027610015175 0ustar liggesusers######################################################################################################################
# Function: DataModel.Design
# Argument: Design object.
# Description: This function is called by default if the class of the argument is a Design object. #' @export DataModel.Design = function(design, ...) { datamodel = DataModel() datamodel = datamodel + design args = list(...) if (length(args)>0) { for (i in 1:length(args)){ datamodel = datamodel + args[[i]] } } return(datamodel) }Mediana/R/UniformDist.R0000644000176200001440000000254013434027610014400 0ustar liggesusers# Function: UniformDist. # Argument: List of parameters (number of observations, maximum value). # Description: This function is used to generate uniform outcomes. UniformDist = function(parameter) { # Error checks if (missing(parameter)) stop("Data model: UniformDist distribution: List of parameters must be provided.") if (is.null(parameter[[2]]$max)) stop("Data model: UniformDist distribution: Maximum value must be specified.") max.value = parameter[[2]]$max if (max.value <= 0) stop("Data model: UniformDist distribution: Maximum value must be positive.") # Determine the function call, either to generate distribution or to return description call = (parameter[[1]] == "description") # Generate random variables if (call == FALSE) { # Error checks n = parameter[[1]] if (n%%1 != 0) stop("Data model: UniformDist distribution: Number of observations must be an integer.") if (n <= 0) stop("Data model: UniformDist distribution: Number of observations must be positive.") result = stats::runif(n = n, max = max.value) } else { # Provide information about the distribution function if (call == TRUE) { # Labels of distributional parameters result = list(list(max = "max"),list("Uniform")) } } return(result) } #End of UniformDistMediana/R/MVMixedDist.R0000644000176200001440000001246313434027610014277 0ustar liggesusers###################################################################################################################### # Function: MVMixedDist . # Argument: List of parameters (number of observations, list(list (distribution type), list(distribution parameters) correlation matrix)). # Description: This function is used to generate correlated normal, binary and exponential outcomes. 
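# Illustrative parameter list (values hypothetical) matching the structure unpacked
# below, where parameter[[1]] is the number of observations and parameter[[2]]
# carries the type, par and corr components (kept as a comment so it is not run
# when the file is sourced):
# par.value = list(type = list("NormalDist", "BinomDist"),
#                  par = list(list(mean = 0, sd = 1), list(prop = 0.3)),
#                  corr = matrix(c(1, 0.5, 0.5, 1), 2, 2))
# outcomes = MVMixedDist(list(100, par.value))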
MVMixedDist = function(parameter) {

  # Error checks
  if (missing(parameter)) stop("Data model: MVMixedDist distribution: List of parameters must be provided.")
  if (is.null(parameter[[2]]$type)) stop("Data model: MVMixedDist distribution: Distribution type must be specified.")
  if (is.null(parameter[[2]]$par)) stop("Data model: MVMixedDist distribution: Parameters list must be specified.")
  if (is.null(parameter[[2]]$corr)) stop("Data model: MVMixedDist distribution: Correlation matrix must be specified.")

  type = parameter[[2]]$type
  par = parameter[[2]]$par
  corr = parameter[[2]]$corr

  # Number of endpoints
  m = length(par)

  if (length(type) != m) stop("Data model: MVMixedDist distribution: Number of distribution type parameters must be equal to the number of endpoints.")
  for (i in 1:m) {
    if ((type[[i]] %in% c("NormalDist", "BinomDist", "ExpoDist")) == FALSE) stop("Data model: MVMixedDist distribution: MVMixedDist accepts only normal, binomial and exponential endpoints.")
  }
  if (ncol(corr) != m) stop("Data model: MVMixedDist distribution: The dimension of the outcome parameter list must match the dimension of the correlation matrix.")
  if (sum(dim(corr) == c(m, m)) != 2) stop("Data model: MVMixedDist distribution: Correlation matrix is not correctly defined.")
  if (det(corr) <= 0) stop("Data model: MVMixedDist distribution: Correlation matrix must be positive definite.")
  if (any(corr < -1 | corr > 1)) stop("Data model: MVMixedDist distribution: Correlation values must be between -1 and 1.")

  # Determine the function call, either to generate distribution or to return description
  call = (parameter[[1]] == "description")

  # Generate random variables
  if (call == FALSE) {

    # Error checks
    n = parameter[[1]]
    if (n %% 1 != 0) stop("Data model: MVMixedDist distribution: Number of observations must be an integer.")
    if (n <= 0) stop("Data model: MVMixedDist distribution: Number of observations must be positive.")

    # Generate multivariate normal variables
    multnorm = mvtnorm::rmvnorm(n = n, mean = rep(0, m), sigma = corr)

    # Store resulting multivariate variables
    mvmixed = matrix(0, n, m)

    # Convert selected components to a uniform distribution and then to either binomial or exponential distribution
    for (i in 1:m) {
      if (type[[i]] == "NormalDist") {
        if (is.null(par[[i]]$mean)) stop("Data model: MVMixedDist distribution: Mean in the normal distribution must be specified.")
        if (is.null(par[[i]]$sd)) stop("Data model: MVMixedDist distribution: SD in the normal distribution must be specified.")
        mean = as.numeric(par[[i]]$mean)
        sd = as.numeric(par[[i]]$sd)
        if (sd <= 0) stop("Data model: MVMixedDist distribution: SD in the normal distribution must be positive.")
        mvmixed[, i] = mean + sd * multnorm[, i]
      } else if (type[[i]] == "BinomDist") {
        uniform = stats::pnorm(multnorm[, i])
        # Proportion
        if (is.null(par[[i]]$prop)) stop("Data model: MVMixedDist distribution: Proportion in the binomial distribution must be specified.")
        prop = as.numeric(par[[i]]$prop)
        if (prop < 0 | prop > 1) stop("Data model: MVMixedDist distribution: Proportion in the binomial distribution must be between 0 and 1.")
        mvmixed[, i] = (uniform <= prop)
      } else if (type[[i]] == "ExpoDist") {
        uniform = stats::pnorm(multnorm[, i])
        # Hazard rate (extracted from the rate component, consistent with the check above)
        if (is.null(par[[i]]$rate)) stop("Data model: MVMixedDist distribution: Hazard rate in the exponential distribution must be specified.")
        hazard = as.numeric(par[[i]]$rate)
        if (hazard <= 0) stop("Data model: MVMixedDist distribution: Hazard rate parameter in the exponential distribution must be positive.")
        mvmixed[, i] =
-log(uniform)/hazard } } result = mvmixed } else { # Provide information about the distribution function if (call == TRUE) { # Labels of distributional parameters par.labels = list() outcome.name="" for (i in 1:m) { if (type[[i]] == "NormalDist") { par.labels[[i]] = list(mean = "mean", sd = "SD") outcome.name=paste0(outcome.name,", ","Normal") } if (type[[i]] == "BinomDist") { par.labels[[i]] = list(prop = "prop") outcome.name=paste0(outcome.name,", ","Binomial") } if (type[[i]] == "ExpoDist") { par.labels[[i]] = list(rate = "rate") outcome.name=paste0(outcome.name,", ","Exponential") } } result = list(list(type = "type", par = par.labels, corr = "corr"),list(paste0("Multivariate Mixed (", sub(", ","",outcome.name),")"))) } } return(result) } # End of MVMixedDistMediana/R/CreateTableDesign.R0000644000176200001440000000670513434027610015451 0ustar liggesusers############################################################################################################################ # Function: CreateTableDesign. # Argument: data.strucure and label (optional). # Description: Generate a summary table of design parameters for the report. CreateTableDesign = function(data.structure, label = NULL) { # Number of design n.design = length(data.structure$design.parameter.set) # Label if (is.null(label)) label = paste0("Design ", 1:n.design) else label = unlist(label) if (length(label) != n.design) stop("Summary: Number of the design parameters labels must be equal to the number of design parameters sets.") # Summary table design.table <- matrix(nrow = n.design, ncol = 9) design.parameter.set = data.structure$design.parameter.set for (i in 1:n.design) { design.table[i, 1] = i design.table[i, 2] = label[i] design.table[i, 3] = design.parameter.set[[i]]$enroll.period if (!is.na(design.parameter.set[[i]]$enroll.dist)){ if (design.parameter.set[[i]]$enroll.dist=="UniformDist") enroll.dist.par.dummy = list(max = design.parameter.set[[i]]$enroll.period) else enroll.dist.par.dummy = design.parameter.set[[i]]$enroll.dist.par enroll.dist.desc = do.call(design.parameter.set[[i]]$enroll.dist,list(list("description",enroll.dist.par.dummy))) design.table[i, 4] = unlist(enroll.dist.desc[[2]]) if (!any(is.na(design.parameter.set[[i]]$enroll.dist.par))){ enroll.dist.par = paste0(paste0(enroll.dist.desc[[1]]," = "), round(unlist(design.parameter.set[[i]]$enroll.dist.par),4), collapse = "\n") design.table[i, 5] = enroll.dist.par } else design.table[i, 5] = design.parameter.set[[i]]$enroll.dist.par } else { design.table[i, 4] = design.parameter.set[[i]]$enroll.dist design.table[i, 5] = design.parameter.set[[i]]$enroll.dist.par } design.table[i, 6] = design.parameter.set[[i]]$followup.period design.table[i, 7] = design.parameter.set[[i]]$study.duration if (!is.na(design.parameter.set[[i]]$dropout.dist)){ if (design.parameter.set[[i]]$dropout.dist != "UniformDist"){ dropout.dist.desc = do.call(design.parameter.set[[i]]$dropout.dist,list(list("description",design.parameter.set[[i]]$dropout.dist.par))) } else { dropout.dist.desc = do.call(design.parameter.set[[i]]$dropout.dist,list(list("description",list(max = 1/design.parameter.set[[i]]$dropout.dist.par$prop)))) dropout.dist.desc[[1]][[1]] = "prop" } design.table[i, 8] = unlist(dropout.dist.desc[[2]]) if (!any(is.na(design.parameter.set[[i]]$dropout.dist.par))){ dropout.dist.par = paste0(paste0(dropout.dist.desc[[1]]," = "), round(unlist(design.parameter.set[[i]]$dropout.dist.par),4), collapse = "\n") design.table[i, 9] = dropout.dist.par } else design.table[i, 9] = 
design.parameter.set[[i]]$enroll.dist.par } else { design.table[i, 8] = design.parameter.set[[i]]$dropout.dist design.table[i, 9] = design.parameter.set[[i]]$dropout.dist.par } } design.table = as.data.frame(design.table) colnames(design.table) = c("design.parameter", "Design parameter set", "Enrollment period", "Enrollment distribution", "Enrollment distribution parameter", "Follow-up period", "Study duration", "Dropout distribution", "Dropout distribution parameter") return(design.table) } # End of CreateTableDesign Mediana/R/Table.R0000644000176200001440000000120613434027610013162 0ustar liggesusers###################################################################################################################### # Function: Table. # Argument: by. # Description: This function is used to create an object of class Table. #' @export Table = function(by) { # Error checks if (!is.character(by)) stop("Table: by must be character.") if (!any(by %in% c("sample.size", "event", "outcome.parameter", "design.parameter", "multiplicity.adjustment"))) stop("Table: the variables included in by are invalid.") table.report = list(by = by) class(table.report) = "Table" return(table.report) invisible(table.report) }Mediana/R/HolmAdj.CI.R0000644000176200001440000000477513434027610013761 0ustar liggesusers###################################################################################################################### # Function: HolmAdj.CI # Argument: p, Vector of p-values (1 x m) # par, List of procedure parameters: vector of hypothesis weights (1 x m) # Description: Holm multiple testing procedure. HolmAdj.CI = function(est, par) { # Number of point estimate m = length(est) # Extract the vector of hypothesis weights (1 x m) if (is.null(par[[2]]$weight)) w = rep(1/m, m) else w = par[[2]]$weight # Extract the sample size if (is.null(par[[2]]$n)) stop("Holm procedure: Sample size must be specified (n).") n = par[[2]]$n # Extract the standard deviation if (is.null(par[[2]]$sd)) stop("Holm procedure: Standard deviation must be specified (sd).") sd = par[[2]]$sd # Extract the simultaneous coverage probability if (is.null(par[[2]]$covprob)) stop("Holm procedure: Coverage probability must be specified (covprob).") covprob = par[[2]]$covprob # Error checks if (length(w) != m) stop("Holm procedure: Length of the weight vector must be equal to the number of hypotheses.") if (m != length(est)) stop("Holm procedure: Length of the point estimate vector must be equal to the number of hypotheses.") if (m != length(sd)) stop("Holm procedure: Length of the standard deviation vector must be equal to the number of hypotheses.") if (sum(w)!=1) stop("Holm procedure: Hypothesis weights must add up to 1.") if (any(w < 0)) stop("Holm procedure: Hypothesis weights must be greater than 0.") if (covprob>=1 | covprob<=0) stop("Holm procedure: simultaneous coverage probability must be >0 and <1") # Standard errors stderror = sd*sqrt(2/n) # T-statistics associated with each test stat = est/stderror # Compute degrees of freedom nu = 2*(n-1) # Compute raw one-sided p-values rawp = 1-stats::pt(stat,nu) # Compute the adjusted p-values adjustpval = HolmAdj(rawp, list("Analysis", list(weight = w))) # Compute the simultaneous confidence interval alpha = 1-covprob ci = rep(0,m) rejected = (adjustpval <= alpha) adjalpha = (alpha*w)/sum(w[!rejected]) if(all(rejected)){ # All null hypotheses are rejected ci = pmax(0,est - stderror*stats::qnorm(1-(alpha*w))) } else { # Some null hypotheses are accepted and some are rejected ci[rejected] = 0 
ci[!rejected] = est[!rejected]-(stderror[!rejected]*stats::qnorm(1-adjalpha[!rejected])) } return(ci) } # End of HolmAdj.CI Mediana/R/CreateDataStructure.R0000644000176200001440000003074413434027610016062 0ustar liggesusers###################################################################################################################### # Function: CreateDataStructure. # Argument: Data model. # Description: This function is based on the old data_model_extract function. It performs error checks in the data model # and creates a "data structure", which is an internal representation of the original data model used by all other Mediana functions. CreateDataStructure = function(data.model) { # Check the general set if (is.null(data.model$samples)) stop("Data model: At least one sample must be specified.") # Number of samples in the data model n.samples = length(data.model$samples) if (is.null(data.model$general)) stop("Data model: General set of parameters must be specified.") # General set of parameters # List of outcome distribution parameters outcome = list() # Outcome distribution is required in the general set of data model parameters if (is.null(data.model$general$outcome.dist)) stop("Data model: Outcome distribution must be specified in the general set of parameters.") outcome.dist = data.model$general$outcome.dist if (!exists(outcome.dist)) { stop(paste0("Data model: Outcome distribution function '", outcome.dist, "' does not exist.")) } else { if (!is.function(get(as.character(outcome.dist), mode = "any"))) stop(paste0("Data model: Outcome distribution function '", outcome.dist, "' does not exist.")) } # Extract sample-specific parameters # List of outcome parameter sets outcome.parameter.set = list() # List of design parameter sets design.parameter.set = list() # List of sample IDs id = list() # Determine if the data model is expanded or compact (compact if the sample size sets are # specified in the general set of parameters, extended if the sample size sets # are specified for each sample) compact.size = FALSE expanded.size = FALSE sample.size = FALSE event = FALSE if (is.null(data.model$general$sample.size) & is.null(data.model$general$event)) { if (is.null(data.model$samples[[1]]$sample.size) & is.null(data.model$samples[[1]]$event)) stop("Data model: Sample sizes or events must be specified either in the general set or in the sample-specific set of parameters.") } if (!is.null(data.model$general$sample.size)) { if (!is.null(data.model$samples[[1]]$sample.size)) stop("Data model: Sample sizes must be specified either in the general set or in the sample-specific set of parameters but not both.") } if (!is.null(data.model$general$event)) { if (!is.null(data.model$samples[[1]]$event)) stop("Data model: Events must be specified either in the general set or in the sample-specific set of parameters but not both.") } if (!is.null(data.model$general$event) & !is.null(data.model$general$sample.size)) { stop("Data model: Sample sizes or Events must be specified but not both.") } if (!is.null(data.model$samples[[1]]$event) & !is.null(data.model$samples[[1]]$sample.size)) { stop("Data model: Sample sizes or Events must be specified but not both.") } if (!is.null(data.model$samples[[1]]$event) & !is.null(data.model$general$sample.size)) { stop("Data model: Sample sizes or Events must be specified but not both.") } if (!is.null(data.model$general$event) & !is.null(data.model$samples[[1]]$sample.size)) { stop("Data model: Sample sizes or Events must be specified but not both.") } # Compute 
the number of sample size sets if (!is.null(data.model$general$sample.size) | !is.null(data.model$samples[[1]]$sample.size)){ sample.size = TRUE if (!is.null(data.model$general$sample.size)) { compact.size = TRUE n.sample.size.sets = length(data.model$general$sample.size) } else { expanded.size = TRUE n.sample.size.sets = length(data.model$samples[[1]]$sample.size) for (i in 1:n.samples) { if (is.null(data.model$samples[[i]]$sample.size)) stop("Data model: Sample sizes must be specified for all samples.") if (n.sample.size.sets != length(data.model$samples[[i]]$sample.size)) stop("Data model: The same number of sample sizes must be specified across the samples.") } } # Data frame of sample size sets sample.size.set = matrix(0, n.sample.size.sets, n.samples) # Create a list of sample size sets for (i in 1:n.sample.size.sets) { if (expanded.size) { for (j in 1:n.samples) { sample.size.set[i, j] = data.model$samples[[j]]$sample.size[[i]] } } if (compact.size) { for (j in 1:n.samples) { sample.size.set[i, j] = data.model$general$sample.size[[i]] } } } sample.size.set = as.data.frame(sample.size.set) # Error check if (any(sample.size.set<=0)) stop("Data model : Sample size must be strictly positive") } else { sample.size.set = NA } # Compute the number of event sets if (!is.null(data.model$general$event)){ event = TRUE compact.size = TRUE event.set = data.frame(event.total = data.model$general$event$n.events) rando.ratio = data.model$general$event$rando.ratio if (is.null(rando.ratio)) rando.ratio = rep(1,n.samples) # Error check if (any(event.set<=0)) stop("Data model : Number of events must be strictly positive") if (length(rando.ratio) != n.samples) stop("Data model: the randomization ratio of each sample must be specified") if (any(rando.ratio<=0)) stop("Data model: the randomization ratio of each sample must be positive") if (any(rando.ratio %%1 != 0)) stop("Data model: the randomization ratio of each sample must be an integer") } else { event.set = NA rando.ratio = NA } # Compute the number of outcome parameter sets for (i in 1:n.samples) { if (is.null(data.model$samples[[i]]$outcome.par)) stop("Data model: Outcome parameters must be specified for all samples.") outcome.par = data.model$samples[[i]]$outcome.par if (i == 1) { n.outcome.parameter.sets = length(outcome.par) } else { if (n.outcome.parameter.sets != length(outcome.par)) stop("Data model: The same number of outcome parameter sets must be specified across the samples.") } } # Create a list of outcome parameter sets for (i in 1:n.outcome.parameter.sets) { temp = list() for (j in 1:n.samples) { temp[[j]] = data.model$samples[[j]]$outcome.par[[i]] # Check if the outcome parameters are correctly specified and determine the dimensionality of the outcome distribution dummy.function.call = list(1, data.model$samples[[j]]$outcome.par[[i]]) outcome.dist.dim = length(do.call(outcome.dist, list(dummy.function.call))) } outcome.parameter.set[[i]] = temp } if (is.null(data.model$general$outcome.type) & sample.size == TRUE) { outcome.type = rep("standard", outcome.dist.dim) } else if (is.null(data.model$general$outcome.type) & event == TRUE) { outcome.type = rep("event", outcome.dist.dim) } else { outcome.type = data.model$general$outcome.type if (length(outcome.type) != outcome.dist.dim) stop("Data model: Number of outcome types must be equal to the number of dimensions in the outcome distribution.") } # Create a list of sample IDs for (i in 1:n.samples) { if (is.null(data.model$samples[[i]]$id)) stop("Data model: Sample IDs must be specified 
for all samples.") if (outcome.dist.dim != length(data.model$samples[[i]]$id)) stop("Data model: The same number of sample IDs in each sample must be equal to the number of dimensions in the outcome distribution.") id[[i]] = data.model$samples[[i]]$id } # Compute the number of design parameter sets if (is.null(data.model$general$design)) { n.design.parameter.sets = NA design.parameter.set = NULL } else { n.design.parameter.sets = length(data.model$general$design) } # Create a list of design parameter sets if (!is.null(design.parameter.set)) { for (i in 1:n.design.parameter.sets) { if (!is.null(data.model$general$design[[i]]$followup.period) & !is.null(data.model$general$design[[i]]$study.duration)) stop("Data model: Either the length of the follow-up period or total study duration can be specified but not both.") if (is.null(data.model$general$design[[i]]$enroll.dist) & !is.null(data.model$general$design[[i]]$dropout.dist)) stop("Data model: Dropout parameters may not be specified without enrollment parameters.") if (is.null(data.model$general$design[[i]]$enroll.period)) { enroll.period = NA } else { enroll.period = data.model$general$design[[i]]$enroll.period } if (is.null(data.model$general$design[[i]]$enroll.dist)) { enroll.dist = NA } else { enroll.dist = data.model$general$design[[i]]$enroll.dist if (!exists(enroll.dist)) { stop(paste0("Data model: Enrollment distribution function '", enroll.dist, "' does not exist.")) } else { if (!is.function(get(as.character(enroll.dist), mode = "any"))) stop(paste0("Data model: Enrollment distribution function '", enroll.dist, "' does not exist.")) } } if (enroll.dist == "UniformDist") { enroll.dist.par = NA } else { if (is.null(data.model$general$design[[i]]$enroll.dist.par)) { stop("Data model: Enrollment distribution parameters must be specified for non-uniform distributions.") } else { enroll.dist.par = data.model$general$design[[i]]$enroll.dist.par } } if (is.null(data.model$general$design[[i]]$followup.period)) { followup.period = NA } else { followup.period = data.model$general$design[[i]]$followup.period } if (is.null(data.model$general$design[[i]]$study.duration)) { study.duration = NA } else { study.duration = data.model$general$design[[i]]$study.duration } if (is.null(data.model$general$design[[i]]$dropout.dist)) { dropout.dist = NA dropout.dist.par = NA } else { dropout.dist = data.model$general$design[[i]]$dropout.dist if (!exists(dropout.dist)) { stop(paste0("Data model: Dropout distribution function '", dropout.dist, "' does not exist.")) } else { if (!is.function(get(as.character(dropout.dist), mode = "any"))) stop(paste0("Data model: Dropout distribution function '", dropout.dist, "' does not exist.")) } if (is.null(data.model$general$design[[i]]$dropout.dist.par)) { stop(paste0("Data model: Dropout distribution parameter must be defined")) } else{ dropout.dist.par = data.model$general$design[[i]]$dropout.dist.par if (dropout.dist == "UniformDist") { if (is.null(dropout.dist.par$prop)) { stop(paste0("Data model: the proportion of dropout must be defined in the prop argument")) } else{ if (dropout.dist.par$prop < 0 | dropout.dist.par$prop > 1) stop(paste0("Data model: the proportion of dropout must be between 0 and 1")) } } } } design.parameter.set[[i]] = list(enroll.period = enroll.period, enroll.dist = enroll.dist, enroll.dist.par = enroll.dist.par, followup.period = followup.period, study.duration = study.duration, dropout.dist = dropout.dist, dropout.dist.par = dropout.dist.par) } } # Create the data structure outcome = 
list(outcome.dist = outcome.dist, outcome.type = outcome.type, outcome.dist.dim = outcome.dist.dim) data.structure = list(description = "data.structure", id = id, outcome = outcome, sample.size.set = sample.size.set, event.set = event.set, rando.ratio = rando.ratio, outcome.parameter.set = outcome.parameter.set, design.parameter.set = design.parameter.set) return(data.structure) } # End of CreateDataStructure Mediana/R/Project.R0000644000176200001440000000142713434027610013546 0ustar liggesusers###################################################################################################################### # Function: Project. # Argument: username, title, project. # Description: This function is used to create an object of class Project. #' @export Project = function(username = "[Unknown User]", title = "[Unknown title]", description = "[No description]") { # Error checks if (!is.character(username)) stop("Project: username must be character.") if (!is.character(title)) stop("Project: title must be character.") if (!is.character(description)) stop("Project: description must be character.") project = list(username = username, title = title, description = description) class(project) = "Project" return(project) invisible(project) }Mediana/R/z+.DataModel.R0000644000176200001440000000242213434027611014312 0ustar liggesusers###################################################################################################################### # Function: +.DataModel. # Argument: Two objects (DataModel and another object). # Description: This function is used to add objects to the DataModel object #' @export "+.DataModel" = function(datamodel, object) { if (is.null(object)) return(datamodel) if (class(object) == "SampleSize"){ datamodel$general$sample.size = unclass(unlist(object)) } else if (class(object) == "Event"){ datamodel$general$event$n.events = unclass(unlist(object$n.events)) datamodel$general$event$rando.ratio = unclass(object$rando.ratio) } else if (class(object) == "OutcomeDist"){ datamodel$general$outcome.dist = unclass(object$outcome.dist) datamodel$general$outcome.type = unclass(object$outcome.type) } else if (class(object) == "Sample"){ nsample = length(datamodel$samples) datamodel$samples[[nsample+1]] = unclass(object) } else if (class(object) == "Design"){ ndesign = length(datamodel$general$design) datamodel$general$design[[ndesign+1]] = unclass(object) } else stop(paste0("Data Model: Impossible to add the object of class",class(object)," to the Data Model")) return(datamodel) }Mediana/R/AnalysisModel.MultAdj.R0000644000176200001440000000112513434027610016236 0ustar liggesusers###################################################################################################################### # Function: AnalysisModel.MultAdj # Argument: MultAdj object. # Description: This function is called by default if the class of the argument is a MultAdj object. #' @export AnalysisModel.MultAdj = function(multadj, ...) { analysismodel = AnalysisModel() analysismodel = analysismodel + multadj args = list(...) if (length(args)>0) { for (i in 1:length(args)){ analysismodel = analysismodel + args[[i]] } } return(analysismodel) }Mediana/R/NegBinomDist.R0000644000176200001440000000351213434027610014457 0ustar liggesusers###################################################################################################################### # Function: NegBinomDist . # Argument: List of parameters (number of observations, list(dispersion, mean)). 
# Description: This function is used to generate negative-binomial outcomes.
NegBinomDist = function(parameter) {

  # Error checks
  if (missing(parameter)) stop("Data model: NegBinomDist distribution: List of parameters must be provided.")
  if (is.null(parameter[[2]]$dispersion)) stop("Data model: NegBinomDist distribution: Dispersion (size) must be specified.")
  if (is.null(parameter[[2]]$mean)) stop("Data model: NegBinomDist distribution: Mean (mu) must be specified.")

  dispersion = parameter[[2]]$dispersion
  mean = parameter[[2]]$mean

  # Parameters check
  if (dispersion <= 0) {
    stop("Data model: NegBinomDist distribution: Dispersion parameter must be positive.")
  } else if (mean <= 0) {
    stop("Data model: NegBinomDist distribution: Mean must be positive.")
  }

  # Determine the function call, either to generate distribution or to return description
  call = (parameter[[1]] == "description")

  # Generate random variables
  if (call == FALSE) {
    n = parameter[[1]]
    if (n %% 1 != 0) stop("Data model: NegBinomDist distribution: Number of observations must be an integer.")
    if (n <= 0) stop("Data model: NegBinomDist distribution: Number of observations must be positive.")
    result = stats::rnbinom(n = n, size = dispersion, mu = mean)
  } else {
    # Provide information about the distribution function
    if (call == TRUE) {
      # Labels of distributional parameters
      result = list(list(dispersion = "dispersion", mean = "mean"), list("Negative binomial"))
    }
  }
  return(result)
} # End of NegBinomDist
Mediana/R/ExpoDist.R0000644000176200001440000000266313434027610013702 0ustar liggesusers######################################################################################################################
# Function: ExpoDist.
# Argument: List of parameters (number of observations, rate).
# Description: This function is used to generate exponential outcomes.
ExpoDist = function(parameter) {

  # Error checks
  if (missing(parameter)) stop("Data model: ExpoDist distribution: List of parameters must be provided.")
  if (is.null(parameter[[2]]$rate)) stop("Data model: ExpoDist distribution: Rate parameter must be specified.")

  rate = parameter[[2]]$rate

  # Parameters check
  if (rate <= 0) stop("Data model: ExpoDist distribution: Rate parameter must be positive.")

  # Determine the function call, either to generate distribution or to return description
  call = (parameter[[1]] == "description")

  # Generate random variables
  if (call == FALSE) {
    n = parameter[[1]]
    if (n %% 1 != 0) stop("Data model: ExpoDist distribution: Number of observations must be an integer.")
    if (n <= 0) stop("Data model: ExpoDist distribution: Number of observations must be positive.")
    result = stats::rexp(n = n, rate = rate)
  } else {
    # Provide information about the distribution function
    if (call == TRUE) {
      # Labels of distributional parameters
      result = list(list(rate = "rate"), list("Exponential"))
    }
  }
  return(result)
} # End of ExpoDist
Mediana/R/EnhancedClaimPower.R0000644000176200001440000000251613434027610015630 0ustar liggesusers############################################################################################################################
# Function: EnhancedClaimPower
# Argument: Test results (p-values) across multiple simulation runs (vector or matrix), statistic results,
#           criterion parameters (Type I error rate, influence cutoff and interaction cutoff).
# Description: Compute probability of enhanced claim (new treatment is effective in the overall population with substantial effect in the subgroup of interest) EnhancedClaimPower = function(test.result, statistic.result, parameter) { # Error check if (is.null(parameter$alpha)) stop("Evaluation model: EnhancedClaimPower: alpha parameter must be specified.") if (is.null(parameter$cutoff_influence)) stop("Evaluation model: EnhancedClaimPower: cutoff_influence parameter must be specified.") if (is.null(parameter$cutoff_interaction)) stop("Evaluation model: EnhancedClaimPower: cutoff_interaction parameter must be specified.") alpha = parameter$alpha cutoff_influence = parameter$cutoff_influence cutoff_interaction = parameter$cutoff_interaction significant = (test.result[,1] <= alpha & test.result[,2] <= alpha & statistic.result[,1] >= cutoff_influence & statistic.result[,2] >= cutoff_interaction) power = mean(significant) return(power) } # End of EnhancedClaimPowerMediana/R/CreateReportStructure.R0000644000176200001440000004552313434027610016465 0ustar liggesusers###################################################################################################################### # Function: CreateReportStructure. # Argument: Results returned by the CSE function and tables produced by the CreateTableStructure function. # Description: This function is used to create a report CreateReportStructure = function(evaluation, presentation.model){ # Number of scenario n.scenario = nrow(evaluation$analysis.scenario.grid) # Number of design parameter n.design = max(evaluation$analysis.scenario.grid$design.parameter) # Number of outcome parameter n.outcome = max(evaluation$analysis.scenario.grid$outcome.parameter) # Number of sample size or event set n.sample.size = max(evaluation$analysis.scenario.grid$sample.size) # Number of multiplicity adjustment n.multiplicity.adjustment = max(evaluation$analysis.scenario.grid$multiplicity.adjustment) # Empty report object report = list() # Empty section list section = list() # Empty subsection list subsection = list() # Empty subsubsection list subsubsection = list() # Empty subsubsubsection list subusbsubsection = list() # Get the label custom.label = presentation.model$custom.label # Sample size label custom.label.sample.size = list() if (any(unlist(lapply(custom.label, function(x) (x$param %in% c("sample.size","event")))))) { custom.label.sample.size$label = custom.label[[which(unlist(lapply(custom.label, function(x) (x$param %in% c("sample.size","event")))))]]$label custom.label.sample.size$custom = TRUE } else { if (any(!is.na(evaluation$data.structure$sample.size.set))) custom.label.sample.size$label = paste("Sample size", 1:n.sample.size) else if (any(!is.na(evaluation$data.structure$event.set))) custom.label.sample.size$label = paste("Event", 1:n.sample.size) custom.label.sample.size$custom = FALSE } # Outcome parameter label custom.label.outcome.parameter = list() if (any(unlist(lapply(custom.label, function(x) (x$param == "outcome.parameter"))))) { custom.label.outcome.parameter$label = custom.label[[which(unlist(lapply(custom.label, function(x) (x$param == "outcome.parameter"))))]]$label custom.label.outcome.parameter$custom = TRUE } else { custom.label.outcome.parameter$label = paste("Outcome", 1:n.outcome) custom.label.outcome.parameter$custom = FALSE } # Multiplicity adjustment label custom.label.multiplicity.adjustment = list() if (any(unlist(lapply(custom.label, function(x) (x$param == "multiplicity.adjustment"))))) { 
custom.label.multiplicity.adjustment$label = custom.label[[which(unlist(lapply(custom.label, function(x) (x$param == "multiplicity.adjustment"))))]]$label custom.label.multiplicity.adjustment$custom = TRUE } else { custom.label.multiplicity.adjustment$label = paste("Multiplicity adjustment scenario", 1:n.multiplicity.adjustment) custom.label.multiplicity.adjustment$custom = FALSE } # Design parameter label custom.label.design.parameter = list() if (any(unlist(lapply(custom.label, function(x) (x$param == "design.parameter"))))) { custom.label.design.parameter$label = custom.label[[which(unlist(lapply(custom.label, function(x) (x$param == "design.parameter"))))]]$label custom.label.design.parameter$custom = TRUE } else { custom.label.design.parameter$label = paste("Design", 1:n.design) custom.label.design.parameter$custom = FALSE } # Create a summary table for the design if (!is.null(evaluation$data.structure$design.parameter.set)) table.design = CreateTableDesign(evaluation$data.structure, custom.label.design.parameter$label) # Create a summary table for the sample size table.sample.size = CreateTableSampleSize(evaluation$data.structure, custom.label.sample.size$label) # Create a summary table for the outcome parameters outcome.information = CreateTableOutcome(evaluation$data.structure, custom.label.outcome.parameter$label) outcome.dist.name = outcome.information[[1]] table.outcome = outcome.information[[2]] # Create a summary table for the tests table.test = CreateTableTest(evaluation$analysis.structure) # Create a summary table for the statistics if (!is.null(evaluation$analysis.structure$statistic)) table.statistic = CreateTableStatistic(evaluation$analysis.structure) # Create a summary table for the results, according to the section/subsection requested by the user result.structure = CreateTableStructure(evaluation, presentation.model, custom.label.sample.size, custom.label.design.parameter, custom.label.outcome.parameter, custom.label.multiplicity.adjustment) # Get information on the multiplicity adjustment mult.adj.desc = list() if (!is.null(evaluation$analysis.structure$mult.adjust)){ for (mult in 1:n.multiplicity.adjustment) { mult.adjust.temp = list() # Number of multiplicity adjustment within each mult.adj scenario n.mult.adj.sc = length(evaluation$analysis.structure$mult.adjust[[mult]]) for (j in 1:n.mult.adj.sc){ if (!is.na(evaluation$analysis.structure$mult.adjust[[mult]][[j]]$proc)){ dummy.function.call = list("Description", evaluation$analysis.structure$mult.adjust[[mult]][[j]]$par, unlist(evaluation$analysis.structure$mult.adjust[[mult]][[j]]$tests)) analysis.mult.desc = do.call(evaluation$analysis.structure$mult.adjust[[mult]][[j]]$proc, list(rep(0,length(unlist(evaluation$analysis.structure$mult.adjust[[mult]][[j]]$tests))),dummy.function.call)) mult.adjust.temp[[j]] = list(desc = analysis.mult.desc[[1]], tests = paste0("{",paste0(unlist(evaluation$analysis.structure$mult.adjust[[mult]][[j]]$tests),collapse=", "),"}"),par = analysis.mult.desc[[2]]) } else { mult.adjust.temp[[j]] = list(desc = "No adjustment", tests=NULL, par=NULL) } } mult.adj.desc[[mult]] = mult.adjust.temp } } else { mult.adj.desc = NA } # Create a summary table for the criterion table.criterion = CreateTableCriterion(evaluation$evaluation.structure) # Section 1: General information ################################## # Items included in Section 1, Subsection 1 # Item's type is text by default item1 = list(label = "", value = paste0("This report was generated by ", presentation.model$project$username, " 
using the Mediana package version ", utils::packageVersion("Mediana"),". For more information about the Mediana package, see http://gpaux.github.io/Mediana.")) item2 = list(label = "Project title:", value = presentation.model$project$title) item3 = list(label = "Description:", value = presentation.model$project$description) item4 = list(label = "Random seed:", value = evaluation$sim.parameters$seed) item5 = list(label = "Number of simulations:", value = evaluation$sim.parameters$n.sims) item6 = list(label = "Number of cores:", value = evaluation$sim.parameters$proc.load) item7 = list(label = "Start time:", value = evaluation$timestamp$start.time) item8 = list(label = "End time:", value = evaluation$timestamp$end.time) item9 = list(label = "Duration:", value = format(round(evaluation$timestamp$duration, digits = 2), digits = 2, nsmall = 2)) # Create a subsection (set the title to NA to suppress the title) subsection[[1]] = list(title = "Project information", item = list(item1, item2, item3)) # Create a subsection (set the title to NA to suppress the title) subsection[[2]] = list(title = "Simulation parameters", item = list(item4, item5, item6, item7, item8, item9)) # Create the header section (set the title to NA to suppress the title) section[[1]] = list(title = "General information", subsection = subsection) # Section 2: Data model # ######################### n.subsection = 0 # Empty subsection list subsection = list() # Empty subsubsection list subsubsection = list() # Empty subsubsubsection list subusbsubsection = list() #Design parameters if (!is.null(evaluation$data.structure$design.parameter.set)){ n.subsection = n.subsection + 1 item1 = list(label = "Number of design parameter sets: ", value = n.design ) item2 = list(label = "Design", value = table.design[,2:length(table.design)], param = list(groupedheader.row = list(values = c("", "Enrollment", "", "", "Dropout"), colspan = c(1, 3, 1, 1, 2))), type = "table" ) # Create a subsection subsection[[n.subsection]] = list(title = "Design", item = list(item1, item2)) } #Sample size if (any(!is.na(evaluation$data.structure$sample.size.set))) { item1 = list(label = "Number of samples:", value = length(evaluation$data.structure$id), type = "text" ) item2 = list(label = "Number of sample size sets:", value = n.sample.size, type = "text" ) item3 = list(label = "Sample size", value = table.sample.size[,2:ncol(table.sample.size)], param = list(span.columns = "Sample size set"), type = "table" ) # Create a subsection subsection[[n.subsection+1]] = list(title = "Sample size", item = list(item1, item2, item3)) } #Event if (any(!is.na(evaluation$data.structure$event.set))) { item1 = list(label = "Number of samples:", value = length(evaluation$data.structure$id), type = "text" ) item2 = list(label = "Randomization ratio:", value = paste0("(",paste0(evaluation$data.structure$rando.ratio, collapse = ":"),")"), type = "text" ) item3 = list(label = "Number of event sets:", value = n.sample.size, type = "text" ) item4 = list(label = "Event", value = table.sample.size[,2:ncol(table.sample.size)], type = "table" ) # Create a subsection subsection[[n.subsection+1]] = list(title = "Number of events", item = list(item1, item2, item3, item4)) } # Outcome distribution item1 = list(label = "Number of outcome parameter sets:", value = n.outcome, type = "text" ) item2 = list(label = "Outcome distribution:", value = outcome.dist.name, type = "text" ) item3 = list(label = "Outcome parameter", value = table.outcome[,2:length(table.outcome)], param = list(span.columns 
= "Outcome parameter set"), type = "table" ) # Create a subsection subsection[[n.subsection+2]] = list(title = "Outcome distribution", item = list(item1, item2, item3)) section[[2]] = list(title = "Data model", subsection = subsection) # Section 3: Analysis model ########################### n.subsection = 0 # Empty subsection list subsection = list() # Empty subsection list subsection = list() # Empty subsubsection list subsubsection = list() # Empty subsubsubsection list subusbsubsection = list() # Test if (!is.null(evaluation$analysis.structure$test)){ n.subsection = n.subsection + 1 item1 = list(label = "Number of tests/null hypotheses: ", value = length(evaluation$analysis.structure$test) ) item2 = list(label = "Tests", value = table.test, type = "table" ) # Create a subsection subsection[[n.subsection]] = list(title = "Tests", item = list(item1, item2)) } # Statistic if (!is.null(evaluation$analysis.structure$statistic)){ n.subsection = n.subsection + 1 item1 = list(label = "Number of descriptive statistics: ", value = length(evaluation$analysis.structure$statistic) ) item2 = list(label = "Statistics", value = table.statistic, type = "table" ) # Create a subsection subsection[[n.subsection]] = list(title = "Statistics", item = list(item1, item2)) } # Multiplicity adjustment if (!is.null(evaluation$analysis.structure$mult.adjust)){ n.subsection = n.subsection + 1 subsubsection = list() for (mult in 1:n.multiplicity.adjustment) { # Number of multiplicity adjustment within each mult.adj scenario n.mult.adj.sc = length(mult.adj.desc[[mult]]) subsubsubsection = list() for (j in 1:n.mult.adj.sc){ item = list() ind.item = 1 item[[ind.item]] = list(label = "Procedure:", value = mult.adj.desc[[mult]][[j]]$desc[[1]] ) if (!is.null(mult.adj.desc[[mult]][[j]]$tests)){ ind.item = ind.item + 1 item[[ind.item]] = list(label = "Tests:", value = mult.adj.desc[[mult]][[j]]$tests ) } if (!is.null(mult.adj.desc[[mult]][[j]]$par)){ ind.item = ind.item + 1 if (length(mult.adj.desc[[mult]][[j]]$par)>1) { item[[ind.item]] = list(label = "Parameters:", value = "" ) for (k in 1:length(mult.adj.desc[[mult]][[j]]$par)){ ind.item = ind.item + 1 if (!is.data.frame(mult.adj.desc[[mult]][[j]]$par[[k]])) { item[[ind.item]] = list(label = "", value = mult.adj.desc[[mult]][[j]]$par[[k]], type = "text" ) } else if (is.data.frame(mult.adj.desc[[mult]][[j]]$par[[k]])) { item[[ind.item]] = list(label = "Parameters", value = mult.adj.desc[[mult]][[j]]$par[[k]], type = "table" ) } } } else { if (!is.data.frame(mult.adj.desc[[mult]][[j]]$par[[1]])) { item[[ind.item]] = list(label = "Parameters:", value = mult.adj.desc[[mult]][[j]]$par[[1]], type = "text" ) } else if (is.data.frame(mult.adj.desc[[mult]][[j]]$par[[1]])) { item[[ind.item]] = list(label = "Parameters:", value = mult.adj.desc[[mult]][[j]]$par[[1]], type = "table" ) } } } if (n.mult.adj.sc>1) { subsubsubsection[[j]] = list(title = paste0("Multiplicity adjustment procedure ",j), item = item) } } if (n.mult.adj.sc>1) { subsubsection[[mult]] = list(title = custom.label.multiplicity.adjustment$label[mult], subsubsubsection = subsubsubsection) } else if (!is.null(evaluation$analysis.structure$mult.adjust) & n.multiplicity.adjustment>1){ subsubsection[[mult]] = list(title = custom.label.multiplicity.adjustment$label[mult], item = item) } } if (n.mult.adj.sc>1) { subsection[[n.subsection]] = list(title = "Multiplicity adjustment", subsubsection = subsubsection ) } else if (!is.null(evaluation$analysis.structure$mult.adjust) & n.multiplicity.adjustment>1){ 
subsection[[n.subsection]] = list(title = "Multiplicity adjustment", subsubsection = subsubsection ) } else if (!is.null(evaluation$analysis.structure$mult.adjust) & n.multiplicity.adjustment==1){ subsection[[n.subsection]] = list(title = "Multiplicity adjustment", item = item) } } section[[3]] = list(title = "Analysis model", subsection = subsection) # Section : Evaluation Model ############################## n.subsection = 0 # Empty subsection list subsection = list() # Empty subsubsection list subsubsection = list() # Empty subsubsubsection list subusbsubsection = list() # Criterion if (!is.null(evaluation$evaluation.structure$criterion)){ n.subsection = n.subsection + 1 item1 = list(label = "Number of criteria: ", value = length(evaluation$evaluation.structure$criterion) ) item2 = list(label = "Criteria", value = table.criterion, type = "table" ) # Create a subsection subsection[[n.subsection]] = list(title = "Criteria", item = list(item1, item2)) } section[[4]] = list(title = "Evaluation model", subsection = subsection) # Section : Simulation results ############################## # Empty subsection list subsection = list() # Empty subsubsection list subsubsection = list() # Empty subsubsubsection list subusbsubsection = list() n.subsection = nrow(result.structure$section) if (!is.null(result.structure$subsection)) n.subsubsection = nrow(result.structure$subsection) else n.subsubsection = 0 # Get the names of the columns to span span = colnames(result.structure$table.structure[[1]]$results)[which(!(colnames(result.structure$table.structure[[1]]$results) %in% c("Criterion","Test/Statistic","Result")))] # Create each section for (subsection.ind in 1:n.subsection){ table.result.subsection = result.structure$table.structure[unlist(lapply(result.structure$table.structure, function(x,ind.section=subsection.ind) {(x$section$number == ind.section) } ))] # Empty subsubsection list subsubsection = list() if (n.subsubsection >0) { for (subsubsection.ind in 1:n.subsubsection){ # Result item1 = list(label = "Results summary", value = table.result.subsection[[subsubsection.ind]]$results, type = "table", param = list(span.columns = span) ) # Create a suv=bsubsection subsubsection[[subsubsection.ind]] = list(title = table.result.subsection[[subsubsection.ind]]$subsection$title, item = list(item1)) } subsection[[subsection.ind]] = list(title = table.result.subsection[[subsubsection.ind]]$section$title, subsubsection = subsubsection) } else { # Result item1 = list(label = "Results summary", value = table.result.subsection[[1]]$results, type = "table", param = list(span.columns = span) ) subsubsection[[1]] = list(title = NA, item = list(item1)) subsection[[subsection.ind]] = list(title = table.result.subsection[[1]]$section$title, subsubsection = subsubsection) } } section[[5]] = list(title = "Simulation results", subsection = subsection) # Include all sections in the report -- the report object is finalized report = list(title = "Clinical Scenario Evaluation", section = section) return(list(result.structure = result.structure, report.structure = report )) } # End of CreateReportStructure Mediana/R/GeneratePatients.R0000644000176200001440000002223713434027610015404 0ustar liggesusers####################################################################################################################### # Function: GeneratePatients. # Argument: Design parameter, outcome parameter, sample id and number of patients or events to generate. # Description: Generates data frames of simulated patients. 
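# Illustrative call (a sketch only; NormalDist and the argument structure are assumed from the function
# body below, where a NULL design parameter yields NA enrollment/dropout variables). Kept inside
# if (FALSE) so the example is parsed but never evaluated.
if (FALSE) {
  GeneratePatients(current.design.parameter = NULL,
                   current.outcome = list(dist = "NormalDist", par = list(mean = 0, sd = 1), type = "standard"),
                   current.sample.id = list("Placebo"),
                   number = 50)
}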
This function is used in the CreateDataStack function. GeneratePatients = function(current.design.parameter, current.outcome, current.sample.id, number){ # Generate a set of outcome variables current.outcome.call = list(number, current.outcome$par) current.outcome.variables = as.matrix(do.call(current.outcome$dist, list(current.outcome.call))) colnames(current.outcome.variables) = paste0("outcome",1:ncol(current.outcome.variables)) # Generate a set of design variables if (!is.null(current.design.parameter)){ # Compute patient start times # Uniform patient start times if (current.design.parameter$enroll.dist == "UniformDist") { # Uniform distribution over [0, 1] enroll.par = list(number, list(max = 1)) # Uniform distribution is expanded over the enrollment period patient.start.time = current.design.parameter$enroll.period * sort(unlist(lapply(list(enroll.par), "UniformDist"))) } else if (current.design.parameter$enroll.dist == "BetaDist") { # Beta patient start times # Beta distribution parameters enroll.par = list(number, current.design.parameter$enroll.dist.par) # Beta distribution is expanded over the enrollment period patient.start.time = current.design.parameter$enroll.period * sort(unlist(lapply(list(enroll.par), "BetaDist"))) } else { # Non-uniform patient start times # List of enrollment parameters enroll.par = list(number, current.design.parameter$enroll.dist.par) patient.start.time = sort(unlist(lapply(list(enroll.par), current.design.parameter$enroll.dist))) } # Patient start times are truncated at the end of the enrollment period patient.start.time = pmin(patient.start.time, current.design.parameter$enroll.period) # Compute patient end times # Patient end times if (!is.na(current.design.parameter$followup.period)) { # In a design with a fixed follow-up (followup.period is specified), the patient end time # is equal to the patient start time plus the fixed follow-up time patient.end.time = patient.start.time + current.design.parameter$followup.period } if (!is.na(current.design.parameter$study.duration)) { # In a design with a variable follow-up (study.duration is specified), the patient end time # is equal to the end of the trial patient.end.time = rep(current.design.parameter$study.duration, number) } # Compute patient dropout times (if the dropout distribution is specified) for the maximum sample size if (!is.na(current.design.parameter$dropout.dist)) { # Uniform patient dropout times if (current.design.parameter$dropout.dist == "UniformDist") { # The parameter corresponds to the proportion of dropout # Generate Uniform distribution between 0 and 1/proportion dropout.par = list(number, list(max = 1/current.design.parameter$dropout.dist.par$prop)) # Uniform distribution is expanded over the patient-specific periods patient.dropout.time = patient.start.time + (patient.end.time - patient.start.time) * unlist(lapply(list(dropout.par), "UniformDist")) } else { # Non-uniform patient dropout times # List of dropout parameters dropout.par = list(number, current.design.parameter$dropout.dist.par) patient.dropout.time = patient.start.time + unlist(lapply(list(dropout.par), current.design.parameter$dropout.dist)) } # If the patient end time is greater than the patient dropout time, the patient end time # is truncated, the patient dropout indicator is set to TRUE. 
patient.dropout.indicator = (patient.end.time >= patient.dropout.time) patient.end.time = pmin(patient.end.time, patient.dropout.time) } else { # No dropout distribution is specified patient.dropout.time = rep(NA, number) patient.dropout.indicator = rep(FALSE, number) } # Patient censore will be get later on in the function according to the outcome variable patient.censor.indicator = rep(FALSE, number) # Create a data frame and save it current.design.variables = t(rbind(patient.start.time, patient.end.time, patient.dropout.time, patient.dropout.indicator, patient.censor.indicator)) } else if (is.null(current.design.parameter)){ # No design parameters are specified in the data model patient.start.time = rep(NA, number) patient.end.time = rep(NA, number) patient.dropout.time = rep(NA, number) patient.dropout.indicator = rep(FALSE, number) patient.censor.indicator = rep(FALSE, number) # Create a data frame and save it current.design.variables = t(rbind(patient.start.time, patient.end.time, patient.dropout.time, patient.dropout.indicator, patient.censor.indicator)) } colnames(current.design.variables) = c("patient.start.time", "patient.end.time", "patient.dropout.time", "patient.dropout.indicator", "patient.censor.indicator") # Create the list with the data frame for the current design and outcome parameter and for each outcome current.design.outcome.variables = list() # Create the censor indicator for each outcome for (outcome.index in 1:length(current.outcome$type)){ current.outcome.type = current.outcome$type[outcome.index] patient.end.time = current.design.variables[,"patient.end.time"] patient.start.time = current.design.variables[,"patient.start.time"] patient.dropout.time = current.design.variables[,"patient.dropout.time"] patient.censor.indicator = current.design.variables[,"patient.censor.indicator"] outcome = current.outcome.variables[,paste0("outcome",outcome.index)] # Compute patient censor times for the analysis data sample if the current outcome type is "event" if (current.outcome.type == "event") { # Dropout distribution is specified if (!all(is.na(patient.dropout.time))) { # Outcome variable is truncated and the patient censor indicator is set to TRUE # if the outcome variable is greater than the patient dropout time (relative to the patient start time) patient.censor.indicator = patient.censor.indicator | (outcome >= patient.dropout.time - patient.start.time) outcome = pmin(outcome, patient.dropout.time - patient.start.time) } # Enrollment distribution is specified if (!all(is.na(patient.start.time))) { # Outcome variable is truncated and the patient censor indicator is set to TRUE # if the outcome variable is greater than the patient end time (relative to the patient start time) patient.censor.indicator = patient.censor.indicator | (outcome >= patient.end.time - patient.start.time) outcome = pmin(outcome, patient.end.time - patient.start.time) # Patient end time (relative to the patient start time) is set to the outcome variable if the # patient experience the event (that is, the patient censor indicator is FALSE) patient.end.time = (!patient.censor.indicator) * (patient.start.time + outcome) + (patient.censor.indicator) * patient.end.time } } else { # Current outcome type is "standard" # Dropout distribution is specified if (!all(is.na(patient.dropout.time))) { # Outcome variable is set to NA if the patient dropout indicator is TRUE outcome[patient.dropout.indicator] = NA } patient.censor.indicator = rep(FALSE, length(patient.censor.indicator)) } # Create a data frame for 
the current sample and outcome df = as.data.frame(t(rbind(outcome, patient.start.time, patient.end.time, patient.dropout.time, patient.censor.indicator))) colnames(df) = c("outcome", "patient.start.time", "patient.end.time", "patient.dropout.time", "patient.censor.indicator") current.design.outcome.variables[[outcome.index]] = list(id = current.sample.id[outcome.index], outcome.type = current.outcome.type, data = df) } return(current.design.outcome.variables) } # End of GeneratePatients function Mediana/R/WeightedPower.R0000644000176200001440000000310113434027610014704 0ustar liggesusers############################################################################################################################ # Function: WeightedPower # Argument: Test results (p-values) across multiple simulation runs (vector or matrix), statistic results (not used in this function), # criterion parameter (Type I error rate and weigth). # Description: Compute weighted power for the test results (vector of p-values or each column of the p-value matrix). WeightedPower = function(test.result, statistic.result, parameter) { # Error check if (is.null(parameter$alpha)) stop("Evaluation model: WeightedPower: alpha parameter must be specified.") if (is.null(parameter$weight)) stop("Evaluation model: WeightedPower: weight parameter must be specified.") if (length(parameter$weight) != ncol(test.result)) stop("Evaluation model: WeightedPower: The number of test weights must be equal to the number of tests.") if (sum(parameter$weight) != 1) stop("Evaluation model: WeightedPower: sum of weights must be equal to 1.") # Get the parameter alpha = parameter$alpha weight = parameter$weight significant = (test.result <= alpha) if (is.numeric(test.result)) # Only one test is specified and no weight is applied power = mean(significant, na.rm = TRUE) if (is.matrix(test.result)) { # Weights are applied when two or more tests are specified # Check if the number of tests equals the number of weights marginal.power = colMeans(significant) power = sum(marginal.power * weight, na.rm = TRUE) } return(power) } Mediana/R/AnalysisModel.Test.R0000644000176200001440000000110313434027610015611 0ustar liggesusers###################################################################################################################### # Function: AnalysisModel.Test # Argument: Test object. # Description: This function is called by default if the class of the argument is a Test object. #' @export AnalysisModel.Test = function(test, ...) { analysismodel = AnalysisModel() analysismodel = analysismodel + test args = list(...) if (length(args)>0) { for (i in 1:length(args)){ analysismodel = analysismodel + args[[i]] } } return(analysismodel) }Mediana/R/DataModel.Event.R0000644000176200001440000000104413434027610015045 0ustar liggesusers###################################################################################################################### # Function: DataModel.Event # Argument: Event object. # Description: This function is called by default if the class of the argument is an Event object. #' @export DataModel.Event = function(event, ...) { datamodel = DataModel() datamodel = datamodel + event args = list(...) 
if (length(args)>0) { for (i in 1:length(args)){ datamodel = datamodel + args[[i]] } } return(datamodel) }Mediana/R/AnalysisModel.default.R0000644000176200001440000000152513434027610016326 0ustar liggesusers###################################################################################################################### # Function: AnalysisModel.default # Argument: arguments. # Description: This function is called by default if the class of the argument is neither a MultAdjust, # nor a Interim object. #' @export AnalysisModel.default = function(...) { args = list(...) if (length(args) > 0) { stop("Analysis Model doesn't know how to deal with the parameters") } else { analysismodel = structure(list(general = list(interim.analysis = NULL, mult.adjust = NULL), tests = NULL, statistics = NULL), class = "AnalysisModel") } return(analysismodel) }Mediana/R/TTestNI.R0000644000176200001440000000404413434027610013430 0ustar liggesusers###################################################################################################################### # Function: TTestNI . # Argument: Data set and parameter (call type and non-inferiority margin). # Description: Computes one-sided p-value based on two-sample t-test with a non-inferiority margin. TTestNI = function(sample.list, parameter) { # Determine the function call, either to generate the p-value or to return description call = (parameter[[1]] == "Description") if (call == FALSE | is.na(call)) { if (is.null(parameter[[2]]$margin)) stop("Analysis model: TTestNI test: Non-inferiority margin must be specified.") margin = as.numeric(parameter[[2]]$margin) # Check if larger treatment effect is expected for the second sample or not (default = TRUE) if (is.null(parameter[[2]]$larger)) larger = TRUE else { if (!is.logical(parameter[[2]]$larger)) stop("Analysis model: TTestNI test: the larger argument must be logical (TRUE or FALSE).") larger = parameter[[2]]$larger } # Sample list is assumed to include two data frames that represent two analysis samples # Outcomes in Sample 1 outcome1 = sample.list[[1]][, "outcome"] # Remove the missing values due to dropouts/incomplete observations outcome1.complete = outcome1[stats::complete.cases(outcome1)] # Outcomes in Sample 2 outcome2 = sample.list[[2]][, "outcome"] # Remove the missing values due to dropouts/incomplete observations outcome2.complete = outcome2[stats::complete.cases(outcome2)] # One-sided p-value (treatment effect in sample 2 is expected to be greater than in sample 1) if (larger) result = stats::t.test(outcome2.complete + margin, outcome1.complete, alternative = "greater")$p.value else result = stats::t.test(outcome1.complete + margin, outcome2.complete, alternative = "greater")$p.value } else if (call == TRUE) { result=list("Student's t-test (non-inferiority)") } return(result) } # End of TTestNI Mediana/R/PresentationModel.Table.R0000644000176200001440000000115113434027610016614 0ustar liggesusers###################################################################################################################### # Function: PresentationModel.Table # Argument: Table object. # Description: This function is called by default if the class of the argument is a Table object. #' @export PresentationModel.Table = function(table, ...) { presentationmodel = PresentationModel() presentationmodel = presentationmodel + table args = list(...) 
if (length(args)>0) {
    for (i in 1:length(args)){
      presentationmodel = presentationmodel + args[[i]]
    }
  }
  return(presentationmodel)
}
Mediana/R/MVBinomDist.R0000644000176200001440000000577213463627475014313 0ustar liggesusers######################################################################################################################
# Function: MVBinomDist.
# Argument: List of parameters (number of observations, list(list(prop), correlation matrix)).
# Description: This function is used to generate correlated multivariate binomial (0/1) outcomes.

MVBinomDist = function(parameter) {

  if (missing(parameter))
    stop("Data model: MVBinomDist distribution: List of parameters must be provided.")

  # Error checks
  if (is.null(parameter[[2]]$par))
    stop("Data model: MVBinomDist distribution: Parameter list (prop) must be specified.")
  if (is.null(parameter[[2]]$corr))
    stop("Data model: MVBinomDist distribution: Correlation matrix must be specified.")

  par = parameter[[2]]$par
  corr = parameter[[2]]$corr

  # Number of endpoints
  m = length(par)

  if (ncol(corr) != m)
    stop("Data model: MVBinomDist distribution: The size of the proportion vector is different from the dimension of the correlation matrix.")
  if (sum(dim(corr) == c(m, m)) != 2)
    stop("Data model: MVBinomDist distribution: Correlation matrix is not correctly defined.")
  if (det(corr) <= 0)
    stop("Data model: MVBinomDist distribution: Correlation matrix must be positive definite.")
  if (any(corr < -1 | corr > 1))
    stop("Data model: MVBinomDist distribution: Correlation values must be comprised between -1 and 1.")

  # Determine the function call, either to generate distribution or to return description
  call = (parameter[[1]] == "description")

  # Generate random variables
  if (call == FALSE) {
    # Error checks
    n = parameter[[1]]
    if (n%%1 != 0)
      stop("Data model: MVBinomDist distribution: Number of observations must be an integer.")
    if (n <= 0)
      stop("Data model: MVBinomDist distribution: Number of observations must be positive.")

    # Generate multivariate normal variables
    multnorm = mvtnorm::rmvnorm(n = n, mean = rep(0, m), sigma = corr)

    # Store resulting multivariate variables
    mvbinom = matrix(0, n, m)

    # Convert each component to a uniform distribution and then to a binomial distribution
    for (i in 1:m) {
      uniform = stats::pnorm(multnorm[, i])
      # Proportion
      if (is.null(par[[i]]$prop))
        stop("Data model: MVBinomDist distribution: Proportion must be specified.")
      prop = as.numeric(par[[i]]$prop)
      if (prop < 0 | prop > 1)
        stop("Data model: MVBinomDist distribution: Proportion in the binomial distribution must be between 0 and 1.")
      mvbinom[, i] = (uniform <= prop)
    }
    result = mvbinom
  } else {
    # Provide information about the distribution function
    if (call == TRUE) {
      # Labels of distributional parameters
      par.labels = list()
      for (i in 1:m) {
        par.labels[[i]] = list(prop = "prop")
      }
      result = list(list(par = par.labels, corr = "corr"), list("Multivariate Binomial"))
    }
  }
  return(result)
}
# End of MVBinomDist
Mediana/R/MVExpoDist.R0000644000176200001440000000603213434027610014137 0ustar liggesusers######################################################################################################################
# Function: MVExpoDist.
# Argument: List of parameters (number of observations, list(list(rate), correlation matrix)).
# Description: This function is used to generate correlated exponential outcomes.
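# Illustrative call (a sketch; the parameter structure is inferred from the error checks in the function
# below and is kept unevaluated inside if (FALSE)).
if (FALSE) {
  # Two correlated exponential endpoints with hazard rates 0.1 and 0.2 and a correlation of 0.5
  MVExpoDist(list(100, list(par = list(list(rate = 0.1), list(rate = 0.2)),
                            corr = matrix(c(1, 0.5, 0.5, 1), 2, 2))))
}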
MVExpoDist = function(parameter) { # Error checks if (missing(parameter)) stop("Data model: MVExpoDist distribution: List of parameters must be provided.") if (is.null(parameter[[2]]$par)) stop("Data model: MVExpoDist distribution: Parameter list (rate) must be specified.") if (is.null(parameter[[2]]$corr)) stop("Data model: MVExpoDist distribution: Correlation matrix must be specified.") par = parameter[[2]]$par corr = parameter[[2]]$corr # Number of endpoints m = length(par) if (ncol(corr) != m) stop("Data model: MVExpoDist distribution: The size of the hazard rate vector is different to the dimension of the correlation matrix.") if (sum(dim(corr) == c(m, m)) != 2) stop("Data model: MVExpoDist distribution: Correlation matrix is not correctly defined.") if (det(corr) <= 0) stop("Data model: MVExpoDist distribution: Correlation matrix must be positive definite.") if (any(corr < -1 | corr > 1)) stop("Data model: MVExpoDist distribution: Correlation values must be comprised between -1 and 1.") # Determine the function call, either to generate distribution or to return description call = (parameter[[1]] == "description") # Generate random variables if (call == FALSE) { # Error checks n = parameter[[1]] if (n%%1 != 0) stop("Data model: MVExpoDist distribution: Number of observations must be an integer.") if (n <= 0) stop("Data model: MVExpoDist distribution: Number of observations must be positive.") # Generate multivariate normal variables multnorm = mvtnorm::rmvnorm(n = n, mean = rep(0, m), sigma = corr) # Store resulting multivariate variables mvmixed = matrix(0, n, m) # Convert selected components to a uniform distribution and then to exponential distribution for (i in 1:m) { uniform = stats::pnorm(multnorm[, i]) if (is.null(par[[i]]$rate)) stop("Data model: MVExpoDist distribution: Hazard rate parameter in the exponential distribution must be specified.") # Hazard rate hazard = as.numeric(par[[i]]$rate) if (hazard <= 0) stop("Data model: MVExpoDist distribution: Hazard rate parameter in the exponential distribution must be positive.") mvmixed[, i] = -log(uniform)/hazard } result = mvmixed } else { # Provide information about the distribution function if (call == TRUE) { # Labels of distributional parameters par.labels = list() for (i in 1:m) { par.labels[[i]] = list(rate = "rate") } result = list(list(par = par.labels, corr = "corr"),list("Multivariate Exponential")) } } return(result) } # End of MVExpoDistMediana/R/CreateTableOutcome.R0000644000176200001440000000546413464521460015661 0ustar liggesusers############################################################################################################################ # Function: CreateTableOutcome. # Argument: data.strucure and label (optional). # Description: Generate a summary table of outcome parameters for the report. 
CreateTableOutcome = function(data.structure, label = NULL) { # Number of sample ID n.id <- length(data.structure$id) id.label = c(unlist(lapply(lapply(data.structure$id, function(x) unlist(paste0("{",x,"}"))), paste0, collapse = ", "))) # Number of outcome n.outcome = length(data.structure$outcome.parameter.set) # Dummy call of the function to get the description dummy.function.call = list("description", data.structure$outcome.parameter.set[[1]][[1]]) outcome.dist.desc = do.call(data.structure$outcome$outcome.dist, list(dummy.function.call)) parameter.labels = outcome.dist.desc[[1]] outcome.dist.name = outcome.dist.desc[[2]] # Label if (is.null(label)) label = paste0("Outcome ", 1:n.outcome) else label = unlist(label) if (length(label) != n.outcome) stop("Summary: Number of the outcome parameters labels must be equal to the number of outcome parameters sets.") # Summary table outcome.table <- matrix(nrow = n.id*n.outcome, ncol = 4) ind <-1 if (data.structure$outcome$outcome.dist.dim == 1) { for (i in 1:n.outcome) { for (j in 1:n.id) { outcome.table[ind, 1] = i outcome.table[ind, 2] = label[i] outcome.table[ind, 3] = id.label[j] outcome.table[ind, 4] = mergeOutcomeParameter(parameter.labels, data.structure$outcome.parameter.set[[i]][[j]]) ind<-ind+1 } } } if (data.structure$outcome$outcome.dist.dim > 1) { for (i in 1:n.outcome) { for (j in 1:n.id) { par = ifelse(!is.null(parameter.labels$par), paste0(mapply(function(x,y) paste0("{",mergeOutcomeParameter(x,y),"}"), parameter.labels$par, data.structure$outcome.parameter.set[[i]][[j]]$par), collapse = ", "), mergeOutcomeParameter(parameter.labels, data.structure$outcome.parameter.set[[i]][[j]]) ) corr = ifelse(!is.null(data.structure$outcome.parameter.set[[i]][[j]]$corr), paste0("corr = {", paste(t(data.structure$outcome.parameter.set[[i]][[j]]$corr), collapse = ","),"}", collapse = ""), NA) outcome.table[ind, 1] = i outcome.table[ind, 2] = label[i] outcome.table[ind, 3] = id.label[j] outcome.table[ind, 4] = ifelse(!is.na(corr), paste0(par, ",\n",corr),par) ind<-ind+1 } } } outcome.table= as.data.frame(outcome.table) colnames(outcome.table) = c("outcome.parameter","Outcome parameter set", "Sample", "Parameter") return(list(outcome.dist.name,outcome.table)) } # End of CreateTableOutcome Mediana/R/PatientCountStat.R0000644000176200001440000000152413434027610015407 0ustar liggesusers###################################################################################################################### # Compute the number of patients generated based on non-missing values in the combined sample PatientCountStat = function(sample.list, parameter) { # Determine the function call, either to generate the statistic or to return description call = (parameter[[1]] == "Description") if (call == FALSE | is.na(call)) { # Error checks if (length(sample.list)==0) stop("Analysis model: One sample must be specified in the PatientCountStat statistic.") # Merge the samples in the sample list sample1 = do.call(rbind, sample.list) result = nrow(sample1) } else if (call == TRUE) { result = list("Number of Patients") } return(result) } # End of PatientCountStatMediana/R/FisherTest.R0000644000176200001440000000406413434027610014220 0ustar liggesusers###################################################################################################################### # Function: FisherTest . # Argument: Data set and parameter (call type). # Description: Computes one-sided p-value based on two-sample Fisher exact test. 
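# Illustrative direct call (a sketch; within Mediana the test functions are normally invoked by the
# simulation engine, and the list(NA, NA) signature simply mirrors the parameter[[1]] == "Description"
# check in the body below). Kept inside if (FALSE) so it is parsed but never evaluated.
if (FALSE) {
  sample.list = list(data.frame(outcome = stats::rbinom(50, 1, 0.3)),
                     data.frame(outcome = stats::rbinom(50, 1, 0.5)))
  FisherTest(sample.list, list(NA, NA))   # one-sided p-value; a larger effect is expected in the second sample
}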
FisherTest = function(sample.list, parameter) { # Determine the function call, either to generate the p-value or to return description call = (parameter[[1]] == "Description") if (call == FALSE | is.na(call)) { # No parameters are defined if (is.na(parameter[[2]])) { larger = TRUE } else { if (!all(names(parameter[[2]]) %in% c("larger"))) stop("Analysis model: FisherTest test: this function accepts only one argument (larger)") # Parameters are defined but not the larger argument if (!is.logical(parameter[[2]]$larger)) stop("Analysis model: FisherTest test: the larger argument must be logical (TRUE or FALSE).") larger = parameter[[2]]$larger } # Sample list is assumed to include two data frames that represent two analysis samples # Outcomes in Sample 1 outcome1 = sample.list[[1]][, "outcome"] # Remove the missing values due to dropouts/incomplete observations outcome1.complete = outcome1[stats::complete.cases(outcome1)] # Outcomes in Sample 2 outcome2 = sample.list[[2]][, "outcome"] # Remove the missing values due to dropouts/incomplete observations outcome2.complete = outcome2[stats::complete.cases(outcome2)] # Contingency table contingency.data = rbind(cbind(2, outcome2.complete), cbind(1, outcome1.complete)) contingency.table = table(contingency.data[, 1], contingency.data[, 2]) # One-sided p-value (treatment effect in sample 2 is expected to be greater than in sample 1) if (larger) result = stats::fisher.test(contingency.table, alternative = "greater")$p.value else result = stats::fisher.test(contingency.table, alternative = "less")$p.value } else if (call == TRUE) { result=list("Fisher exact test") } return(result) } # End of FisherTest Mediana/R/DunnettAdj.CI.R0000644000176200001440000000332513434027610014471 0ustar liggesusers###################################################################################################################### # Function: DunnettAdj.CI # Argument: p, Vector of p-values (1 x m) # par, List of procedure parameters: vector of hypothesis weights (1 x m) # Description: Dunnett multiple testing procedure. 
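# Illustrative call (a sketch; qdunnett is an internal helper referenced in the body below, so the
# example assumes the package namespace is available). Kept inside if (FALSE).
if (FALSE) {
  # Lower simultaneous confidence limits for two dose-control comparisons,
  # n = 50 per arm, common standard deviation of 2, 97.5% simultaneous coverage
  DunnettAdj.CI(est = c(1.2, 0.8), par = list(NA, list(n = 50, sd = c(2, 2), covprob = 0.975)))
}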
DunnettAdj.CI = function(est, par) { # Number of point estimate m = length(est) # Extract the sample size if (is.null(par[[2]]$n)) stop("Dunnett procedure: Sample size must be specified (n).") n = par[[2]]$n # Extract the standard deviation if (is.null(par[[2]]$sd)) stop("Dunnett procedure: Standard deviation must be specified (sd).") sd = par[[2]]$sd # Extract the simultaneous coverage probability if (is.null(par[[2]]$covprob)) stop("Dunnett procedure: Coverage probability must be specified (covprob).") covprob = par[[2]]$covprob # Error checks if (m != length(est)) stop("Dunnett procedure: Length of the point estimate vector must be equal to the number of hypotheses.") if (m != length(sd)) stop("Dunnett procedure: Length of the standard deviation vector must be equal to the number of hypotheses.") if (covprob>=1 | covprob<=0) stop("Dunnett procedure: simultaneous coverage probability must be >0 and <1") # Standard errors stderror = sd*sqrt(2/n) # T-statistics associated with each test stat = est/stderror # Alpha alpha = 1-covprob # Compute the degree of freedom for the Dunnett procedure nu_dunnett = (m+1)*(n-1) # Critical value of Dunett critical_value = qdunnett(1-alpha,nu_dunnett,m) # Lower simultaneous confidence limit ci = est - critical_value*stderror return(ci) } # End of DunnettAdj.CI Mediana/R/MultAdj.MultAdjProc.R0000644000176200001440000000077213434027610015665 0ustar liggesusers###################################################################################################################### # Function: MultAdj.MultAdjProc. # Argument: MultAdjProc object # Description: This function is used to create an object of class MultAdjProc. #' @export MultAdj.MultAdjProc = function(...) { multadj = lapply(list(...),function(x) {if(class(x)=="MultAdjProc") list(unclass(x)) else unclass(x)} ) class(multadj) = "MultAdj" return(multadj) invisible(multadj) }Mediana/R/CreateTableTest.R0000644000176200001440000000360613434027610015154 0ustar liggesusers############################################################################################################################ # Function: CreateTableTest. # Argument: analysis.strucure and label (optional). # Description: Generate a summary table of test for the report. 
CreateTableTest = function(analysis.structure, label = NULL) { # Number of test n.test = length(analysis.structure$test) test.table = matrix(nrow = n.test, ncol = 4) nsample = rep(0,n.test) for (i in 1:n.test) { test.table[i, 1] = analysis.structure$test[[i]]$id test.desc = do.call(analysis.structure$test[[i]]$method,list(c(),list("Description",analysis.structure$test[[i]]$par))) test.table[i, 2] = test.desc[[1]] # if (length(test.desc)>1) { # test.table[i, 3] = paste0(test.desc[[2]],analysis.structure$test[[i]]$par, collapse = "\n") # } else { # test.table[i, 3] = paste0(names(analysis.structure$test[[i]]$par), " = ", analysis.structure$test[[i]]$par, collapse = "\n") # } if (!all(is.na(analysis.structure$test[[i]]$par))) test.table[i, 3] = paste0(names(analysis.structure$test[[i]]$par), " = ", analysis.structure$test[[i]]$par, collapse = "\n") nsample[i]=length(analysis.structure$test[[i]]$samples) npersample=rep(0,nsample[i]) sample.id=rep("",nsample[i]) text="" for (j in 1:nsample[i]) { npersample[j]=length(analysis.structure$test[[i]]$samples[[j]]) for (k in 1:npersample[j]) { sample.id[j]=paste0(sample.id[j],", ", analysis.structure$test[[i]]$samples[[j]][[k]]) } sample.id[j]=paste0("{",sub(", ","",sample.id[j]),"}") text=paste0(text,", ",sample.id[j]) } test.table[i, 4] = sub(", ","",text) } test.table = as.data.frame(test.table) colnames(test.table) = c("Test ID", "Test type", "Test parameters", "Samples") return(test.table) } # End of CreateTableTest Mediana/R/PropTest.R0000644000176200001440000000526013434027610013717 0ustar liggesusers###################################################################################################################### # Function: PropTest . # Argument: Data set and parameter (call type and Yates' correction). # Description: Computes one-sided p-value based on two-sample proportion test. 
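# Illustrative direct call (a sketch mirroring the calling convention checked in the body below; a single
# named parameter is used because parameter[[2]] is first tested with is.na()). Kept inside if (FALSE).
if (FALSE) {
  sample.list = list(data.frame(outcome = stats::rbinom(100, 1, 0.3)),
                     data.frame(outcome = stats::rbinom(100, 1, 0.5)))
  PropTest(sample.list, list(NA, list(larger = TRUE)))   # one-sided p-value without Yates' correction
}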
PropTest = function(sample.list, parameter) {

  # Determine the function call, either to generate the p-value or to return description
  call = (parameter[[1]] == "Description")

  if (call == FALSE | is.na(call)) {

    # No parameters are defined
    if (is.na(parameter[[2]])) {
      yates = FALSE
      larger = TRUE
    } else {
      if (!all(names(parameter[[2]]) %in% c("larger", "yates")))
        stop("Analysis model: PropTest test: this function accepts only two arguments (larger and yates).")
      # Yates' continuity correction is set to FALSE by default
      if (is.null(parameter[[2]]$yates)) yates = FALSE
      else {
        if (!is.logical(parameter[[2]]$yates))
          stop("Analysis model: PropTest test: the yates argument must be logical (TRUE or FALSE).")
        yates = parameter[[2]]$yates
      }
      # Check if a larger treatment effect is expected for the second sample or not (default = TRUE)
      if (is.null(parameter[[2]]$larger)) larger = TRUE
      else {
        if (!is.logical(parameter[[2]]$larger))
          stop("Analysis model: PropTest test: the larger argument must be logical (TRUE or FALSE).")
        larger = parameter[[2]]$larger
      }
    }

    # Sample list is assumed to include two data frames that represent two analysis samples
    # Outcomes in Sample 1
    outcome1 = sample.list[[1]][, "outcome"]
    # Remove the missing values due to dropouts/incomplete observations
    outcome1.complete = outcome1[stats::complete.cases(outcome1)]
    # Outcomes in Sample 2
    outcome2 = sample.list[[2]][, "outcome"]
    # Remove the missing values due to dropouts/incomplete observations
    outcome2.complete = outcome2[stats::complete.cases(outcome2)]

    # One-sided p-value (treatment effect in sample 2 is expected to be greater than in sample 1)
    if (larger)
      result = stats::prop.test(c(sum(outcome2.complete), sum(outcome1.complete)),
                                n = c(length(outcome2.complete), length(outcome1.complete)),
                                alternative = "greater", correct = yates)$p.value
    else
      result = stats::prop.test(c(sum(outcome2.complete), sum(outcome1.complete)),
                                n = c(length(outcome2.complete), length(outcome1.complete)),
                                alternative = "less", correct = yates)$p.value

  } else if (call == TRUE) {
    result = list("Test for proportions")
  }

  return(result)
}
# End of PropTest
Mediana/R/ChainAdj.R0000644000176200001440000000642113434027610013600 0ustar liggesusers######################################################################################################################
# Function: ChainAdj.
# Argument: p, Vector of p-values (1 x m)
#           par, List of procedure parameters: vector of hypothesis weights (1 x m) and matrix of transition parameters (m x m)
# Description: Chain multiple testing procedure.
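# Illustrative call (a sketch; argmin is an internal helper referenced in the body below). Hypotheses
# H1 and H2 share the initial weight and pass it on to H3 once rejected. Kept inside if (FALSE).
if (FALSE) {
  chain.weight = c(0.5, 0.5, 0)
  chain.transition = matrix(c(0, 0, 1,
                              0, 0, 1,
                              0, 0, 0), 3, 3, byrow = TRUE)
  ChainAdj(p = c(0.001, 0.01, 0.04),
           par = list(NA, list(weight = chain.weight, transition = chain.transition)))
}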
ChainAdj = function(p, par) {

  # Determine the function call, either to generate the p-value or to return description
  call = (par[[1]] == "Description")

  if (any(call == FALSE) | any(is.na(call))) {

    # Number of p-values
    m = length(p)

    # Extract the vector of hypothesis weights (1 x m) and matrix of transition parameters (m x m)
    if (is.null(par[[2]]$weight)) stop("Analysis model: Chain procedure: Hypothesis weights must be specified.")
    if (is.null(par[[2]]$transition)) stop("Analysis model: Chain procedure: Transition matrix must be specified.")
    w = par[[2]]$weight
    g = par[[2]]$transition

    # Error checks
    if (sum(w) != 1) stop("Analysis model: Chain procedure: Hypothesis weights must add up to 1.")
    if (any(w < 0)) stop("Analysis model: Chain procedure: Hypothesis weights must be non-negative.")
    if (sum(dim(g) == c(m, m)) != 2) stop("Analysis model: Chain procedure: The dimension of the transition matrix is not correct.")
    if (any(rowSums(g) > 1)) stop("Analysis model: Chain procedure: The sum of each row of the transition matrix must not exceed 1.")
    if (any(g < 0)) stop("Analysis model: Chain procedure: The transition matrix must not include negative values.")

    pmax = 0
    # Index set of processed, e.g., rejected, null hypotheses (no processed hypotheses at the beginning of the algorithm)
    processed = rep(0, m)
    # Adjusted p-values
    adjpvalue = rep(0, m)

    # Loop over all null hypotheses
    for (i in 1:m) {
      # Find the index of the smallest weighted p-value among the non-processed null hypotheses
      ind = argmin(p, w, processed)
      if (ind > 0) {
        adjpvalue[ind] = max(p[ind]/w[ind], pmax)
        adjpvalue[ind] = min(1, adjpvalue[ind])
        pmax = adjpvalue[ind]
        # This null hypothesis has been processed
        processed[ind] = 1
        # Update the hypothesis weights after a null hypothesis has been processed
        temp = w
        for (j in 1:m) {
          if (processed[j] == 0) w[j] = temp[j] + temp[ind] * g[ind, j] else w[j] = 0
        }
        # Update the transition parameters (connection weights) after the rejection
        temp = g
        for (j in 1:m) {
          for (k in 1:m) {
            if (processed[j] == 0 & processed[k] == 0 & j != k & temp[j, ind] * temp[ind, j] != 1)
              g[j, k] = (temp[j, k] + temp[j, ind] * temp[ind, k])/(1 - temp[j, ind] * temp[ind, j])
            else
              g[j, k] = 0
          }
        }
      } else {
        adjpvalue[which(processed == 0)] = 1
      }
    }
    result = adjpvalue

  } else if (call == TRUE) {
    w = paste0("Weight={", paste(round(par[[2]]$weight, 3), collapse = ","), "}")
    g = paste0("Transition matrix={", paste(as.vector(t(par[[2]]$transition)), collapse = ","), "}")
    result = list(list("Chain procedure"), list(w, g))
  }

  return(result)
}
# End of ChainAdj
Mediana/R/WeibullDist.R0000644000176200001440000000342713434027610014371 0ustar liggesusers######################################################################################################################
# Function: WeibullDist.
# Argument: List of parameters (number of observations, shape, scale).
# Description: This function is used to generate Weibull distributed outcomes.
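# Illustrative call (a sketch of the list-based convention inferred from the checks below; kept inside
# if (FALSE) so it is parsed but never evaluated).
if (FALSE) {
  WeibullDist(list(100, list(shape = 1.5, scale = 200)))             # 100 Weibull event times
  WeibullDist(list("description", list(shape = 1.5, scale = 200)))   # parameter labels and distribution name
}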
WeibullDist = function(parameter) { # Error checks if (missing(parameter)) stop("Data model: WeibullDist distribution: List of parameters must be provided.") if (is.null(parameter[[2]]$shape)) stop("Data model: WeibullDist distribution: shape parameter must be specified.") if (is.null(parameter[[2]]$scale)) stop("Data model: WeibullDist distribution: scale parameter must be specified.") shape = parameter[[2]]$shape scale = parameter[[2]]$scale # Parameters check if (shape <= 0) stop("Data model: WeibullDist distribution: shape parameter must be positive") if (scale <= 0) stop("Data model: WeibullDist distribution: scale parameter must be positive") # Determine the function call, either to generate distribution or to return description call = (parameter[[1]] == "description") # Generate random variables if (call == FALSE) { n = parameter[[1]] if (n%%1 != 0) stop("Data model: WeibullDist distribution: Number of observations must be an integer.") if (n <= 0) stop("Data model: WeibullDist distribution: Number of observations must be positive.") result = stats::rweibull(n = n, shape = shape, scale = scale) } else { # Provide information about the distribution function if (call == TRUE) { # Labels of distributional parameters result = list(list(shape = "shape", scale = "scale"),list("Weibull")) } } return(result) } # End of WeibullDistMediana/R/EvaluationModel.default.R0000644000176200001440000000121613434027610016647 0ustar liggesusers###################################################################################################################### # Function: EvaluationModel.default # Argument: Multiple objects. # Description: This function is called by default if the class of the argument is not a Criterion object. #' @export EvaluationModel.default = function(...) { args = list(...) if (length(args) > 0) { stop("Evaluation Model doesn't know how to deal with the parameters") } else { evaluationmodel = structure(list(general = NULL, criteria = NULL), class = "EvaluationModel") } return(evaluationmodel) }Mediana/R/PresentationModel.Project.R0000644000176200001440000000116413434027610017177 0ustar liggesusers###################################################################################################################### # Function: PresentationModel.Project # Argument: Projet object. # Description: This function is called by default if the class of the argument is a Project object. #' @export PresentationModel.Project = function(project, ...) { presentationmodel = PresentationModel() presentationmodel = presentationmodel + project args = list(...) if (length(args)>0) { for (i in 1:length(args)){ presentationmodel = presentationmodel + args[[i]] } } return(presentationmodel) }Mediana/R/MeanSumm.R0000644000176200001440000000102113434027610013650 0ustar liggesusers############################################################################################################################ # Function: MeanSumm. # Argument: Descriptive statistics across multiple simulation runs (vector or matrix), method parameters (not used in this function). # Description: Compute mean for the vector of statistics or in each column of the matrix. 
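# Illustrative call (a sketch; the test results and criterion parameters are ignored by this summary,
# which simply takes column means of the statistic results). Kept inside if (FALSE).
if (FALSE) {
  MeanSumm(test.result = NULL, statistic.result = matrix(1:6, ncol = 2), parameter = NULL)   # returns c(2, 5)
}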
MeanSumm = function(test.result, statistic.result, parameter) { result = apply(statistic.result, 2, mean) return(result) } # End of MeanSummMediana/R/MarginalPower.R0000644000176200001440000000164513434027610014711 0ustar liggesusers############################################################################################################################ # Function: MarginalPower. # Argument: Test results (p-values) across multiple simulation runs (vector or matrix), statistic results (not used in this function), # criterion parameter (Type I error rate). # Description: Compute marginal power for the vector of test results (vector of p-values or each column of the p-value matrix). MarginalPower = function(test.result, statistic.result, parameter) { # Error check if (is.null(parameter$alpha)) stop("Evaluation model: MarginalPower: alpha parameter must be specified.") alpha = parameter$alpha significant = (test.result <= alpha) if (is.numeric(test.result)) power = mean(significant, na.rm = TRUE) if (is.matrix(test.result)) power = colMeans(significant, na.rm = TRUE) return(power) } # End of MarginalPowerMediana/R/GenerateReport.default.R0000644000176200001440000004036213434027610016512 0ustar liggesusers###################################################################################################################### # Function: GenerateReport. # Argument: ResultDes returned by the CSE function and presentation model and Word-document title and Word-template. # Description: This function is used to create a summary table with all results #' @export GenerateReport.default = function(presentation.model = NULL, cse.results, report.filename, report.template = NULL){ # Add error checks if (!is.null(presentation.model) & class(presentation.model) != "PresentationModel") stop("GenerateReport: the presentation.model parameter must be a PresentationModel object.") if (class(cse.results) != "CSE") stop("GenerateReport: the cse.results parameter must be a CSE object.") if (!is.character(report.filename)) stop("GenerateReport: the report.filename parameter must be character.") # Create the structure of the report # If no presentation model, initialize a presentation model object if (is.null(presentation.model)) presentation.model = PresentationModel() report = CreateReportStructure(cse.results, presentation.model) report.results = report$result.structure report.structure = report$report.structure # Delete an older version of the report if (!is.null(report.filename)){ if (file.exists(report.filename)) file.remove(report.filename) } # Create a officer::docx object doc = officer::read_docx(system.file(package = "Mediana", "template/template.docx")) dim_doc = officer::docx_dim(doc) # Report's title doc = officer::set_doc_properties(doc, title = report.structure$title) #title.format = officer::fp_text(font.size = 24, font.family = "Calibri", bold = TRUE) doc = officer::body_add_par(doc, value = report.structure$title, style = "TitleDoc") # Text formatting my.text.format = officer::fp_text(font.size = 11, font.family = "Calibri") # Table formatting header.cellProperties = officer::fp_cell(border.left = officer::fp_border(width = 0), border.right = officer::fp_border(width = 0), border.bottom = officer::fp_border(width = 2), border.top = officer::fp_border(width = 2), background.color = "#eeeeee") data.cellProperties = officer::fp_cell(border.left = officer::fp_border(width = 0), border.right = officer::fp_border(width = 0), border.bottom = officer::fp_border(width = 1), border.top = 
Mediana/R/GenerateReport.default.R
######################################################################################################################
# Function: GenerateReport.
# Argument: Results returned by the CSE function, presentation model, Word-document title and Word-template.
# Description: This function is used to create a summary table with all results
#' @export
GenerateReport.default = function(presentation.model = NULL, cse.results, report.filename, report.template = NULL){

  # Error checks
  if (!is.null(presentation.model) & class(presentation.model) != "PresentationModel")
    stop("GenerateReport: the presentation.model parameter must be a PresentationModel object.")
  if (class(cse.results) != "CSE")
    stop("GenerateReport: the cse.results parameter must be a CSE object.")
  if (!is.character(report.filename))
    stop("GenerateReport: the report.filename parameter must be character.")

  # Create the structure of the report
  # If no presentation model is specified, initialize a default presentation model object
  if (is.null(presentation.model)) presentation.model = PresentationModel()
  report = CreateReportStructure(cse.results, presentation.model)
  report.results = report$result.structure
  report.structure = report$report.structure

  # Delete an older version of the report
  if (!is.null(report.filename)){
    if (file.exists(report.filename)) file.remove(report.filename)
  }

  # Create an officer::docx object
  doc = officer::read_docx(system.file(package = "Mediana", "template/template.docx"))
  dim_doc = officer::docx_dim(doc)

  # Report's title
  doc = officer::set_doc_properties(doc, title = report.structure$title)
  # title.format = officer::fp_text(font.size = 24, font.family = "Calibri", bold = TRUE)
  doc = officer::body_add_par(doc, value = report.structure$title, style = "TitleDoc")

  # Text formatting
  my.text.format = officer::fp_text(font.size = 11, font.family = "Calibri")

  # Table formatting
  header.cellProperties = officer::fp_cell(border.left = officer::fp_border(width = 0), border.right = officer::fp_border(width = 0), border.bottom = officer::fp_border(width = 2), border.top = officer::fp_border(width = 2), background.color = "#eeeeee")
  data.cellProperties = officer::fp_cell(border.left = officer::fp_border(width = 0), border.right = officer::fp_border(width = 0), border.bottom = officer::fp_border(width = 1), border.top = officer::fp_border(width = 0))
  header.textProperties = officer::fp_text(font.size = 11, bold = TRUE, font.family = "Calibri")
  data.textProperties = officer::fp_text(font.size = 11, font.family = "Calibri")
  leftPar = officer::fp_par(text.align = "left")
  rightPar = officer::fp_par(text.align = "right")
  centerPar = officer::fp_par(text.align = "center")

  # Number of sections in the report (the report's title is not counted)
  n.sections = length(report.structure$section)

  # Loop over the sections in the report
  for(section.index in 1:n.sections) {

    # Section's title (if non-empty)
    if (!is.na(report.structure$section[[section.index]]$title))
      doc = officer::body_add_par(doc, value = report.structure$section[[section.index]]$title, style = "heading 1")

    # Number of subsections in the current section
    n.subsections = length(report.structure$section[[section.index]]$subsection)

    # Loop over the subsections in the current section
    for(subsection.index in 1:n.subsections) {

      # Subsection's title (if non-empty)
      if (!is.na(report.structure$section[[section.index]]$subsection[[subsection.index]]$title))
        doc = officer::body_add_par(doc, value = report.structure$section[[section.index]]$subsection[[subsection.index]]$title, style = "heading 2")

      # Number of subsubsections in the current subsection
      n.subsubsections = length(report.structure$section[[section.index]]$subsection[[subsection.index]]$subsubsection)

      if (n.subsubsections > 0){

        # Loop over the subsubsections in the current subsection
        for(subsubsection.index in 1:n.subsubsections) {

          # Subsubsection's title (if non-empty)
          if (!is.na(report.structure$section[[section.index]]$subsection[[subsection.index]]$subsubsection[[subsubsection.index]]$title))
            doc = officer::body_add_par(doc, value = report.structure$section[[section.index]]$subsection[[subsection.index]]$subsubsection[[subsubsection.index]]$title, style = "heading 3")

          # Number of subsubsubsections in the current subsubsection
          n.subsubsubsection = length(report.structure$section[[section.index]]$subsection[[subsection.index]]$subsubsection[[subsubsection.index]]$subsubsubsection)

          if (n.subsubsubsection > 0){

            # Loop over the subsubsubsections in the current subsubsection
            for(subsubsubsection.index in 1:n.subsubsubsection) {

              # Subsubsubsection's title (if non-empty)
              if (!is.na(report.structure$section[[section.index]]$subsection[[subsection.index]]$subsubsection[[subsubsection.index]]$subsubsubsection[[subsubsubsection.index]]$title))
                doc = officer::body_add_par(doc, value = report.structure$section[[section.index]]$subsection[[subsection.index]]$subsubsection[[subsubsection.index]]$subsubsubsection[[subsubsubsection.index]]$title, style = "heading 4")

              # Number of items in the current subsubsubsection
              n.items = length(report.structure$section[[section.index]]$subsection[[subsection.index]]$subsubsection[[subsubsection.index]]$subsubsubsection[[subsubsubsection.index]]$item)

              # Loop over the items in the current subsubsubsection
              for(item.index in 1:n.items) {

                # Create paragraphs for each item
                # Determine the item's type (text by default)
                type = report.structure$section[[section.index]]$subsection[[subsection.index]]$subsubsection[[subsubsection.index]]$subsubsubsection[[subsubsubsection.index]]$item[[item.index]]$type
                if (is.null(type)) type = "text"
                label = report.structure$section[[section.index]]$subsection[[subsection.index]]$subsubsection[[subsubsection.index]]$subsubsubsection[[subsubsubsection.index]]$item[[item.index]]$label
                value = report.structure$section[[section.index]]$subsection[[subsection.index]]$subsubsection[[subsubsection.index]]$subsubsubsection[[subsubsubsection.index]]$item[[item.index]]$value
                param = report.structure$section[[section.index]]$subsection[[subsection.index]]$subsubsection[[subsubsection.index]]$subsubsubsection[[subsubsubsection.index]]$item[[item.index]]$param
                if (type == "table" & is.null(param)) param = list(span.columns = NULL, groupedheader.row = NULL)

                switch(
                  type,
                  text = {
                    if (label != "") doc = officer::body_add_par(doc, value = paste(label, value), style = "Normal")
                    else doc = officer::body_add_par(doc, value = value, style = "Normal")
                  },
                  table = {
                    # header.columns = (is.null(param$groupedheader.row))
                    summary_table = flextable::regulartable(data = value)
                    summary_table = flextable::style(summary_table, pr_p = leftPar, pr_c = header.cellProperties, pr_t = header.textProperties, part = "header")
                    summary_table = flextable::style(summary_table, pr_p = leftPar, pr_c = data.cellProperties, pr_t = data.textProperties, part = "body")
                    if (!is.null(param$span.columns)) {
                      for (ind.span in 1:length(param$span.columns)){
                        summary_table = flextable::merge_v(summary_table, j = param$span.columns[ind.span])
                      }
                    }
                    # if (!is.null(param$groupedheader.row)) {
                    #   header = paste0(summary_table$header$col_keys, ' = ', rep(param$groupedheader.row$values, param$groupedheader.row$colspan))
                    #   summary_table = flextable::add_header(summary_table, header)
                    #   summary_table = flextable::add_header(summary_table, value = colnames(value))
                    # }
                    width_table = flextable::dim_pretty(summary_table)$width/(sum(flextable::dim_pretty(summary_table)$width)/(dim_doc$page['width'] - dim_doc$margins['left']/2 - dim_doc$margins['right']/2))
                    summary_table = flextable::autofit(summary_table)
                    summary_table = flextable::width(summary_table, width = width_table)
                    doc = officer::body_add_par(doc, value = label, style = "rTableLegend")
                    doc = flextable::body_add_flextable(doc, summary_table)
                  },
                  plot = {
                    doc = officer::body_add_gg(doc, x = value, width = 6, height = 5, main = label)
                    doc = officer::body_add_par(doc, value = label, style = "rPlotLegend")
                  }
                )
              }
            }
          } else {

            # Number of items in the current subsubsection
            n.items = length(report.structure$section[[section.index]]$subsection[[subsection.index]]$subsubsection[[subsubsection.index]]$item)

            # Loop over the items in the current subsubsection
            for(item.index in 1:n.items) {

              # Create paragraphs for each item
              # Determine the item's type (text by default)
              type = report.structure$section[[section.index]]$subsection[[subsection.index]]$subsubsection[[subsubsection.index]]$item[[item.index]]$type
              if (is.null(type)) type = "text"
              label = report.structure$section[[section.index]]$subsection[[subsection.index]]$subsubsection[[subsubsection.index]]$item[[item.index]]$label
              value = report.structure$section[[section.index]]$subsection[[subsection.index]]$subsubsection[[subsubsection.index]]$item[[item.index]]$value
              param = report.structure$section[[section.index]]$subsection[[subsection.index]]$subsubsection[[subsubsection.index]]$item[[item.index]]$param
              if (type == "table" & is.null(param)) param = list(span.columns = NULL, groupedheader.row = NULL)

              switch(
                type,
                text = {
                  if (label != "") doc = officer::body_add_par(doc, value = paste(label, value), style = "Normal")
                  else doc = officer::body_add_par(doc, value = value, style = "Normal")
                },
                table = {
                  header.columns = (is.null(param$groupedheader.row))
                  summary_table = flextable::regulartable(data = value)
                  summary_table = flextable::style(summary_table, pr_p = leftPar, pr_c = header.cellProperties, pr_t = header.textProperties, part = "header")
                  summary_table = flextable::style(summary_table, pr_p = leftPar, pr_c = data.cellProperties, pr_t = data.textProperties, part = "body")
                  if (!is.null(param$span.columns)) {
                    for (ind.span in 1:length(param$span.columns)){
                      summary_table = flextable::merge_v(summary_table, j = param$span.columns[ind.span])
                    }
                  }
                  # if (!is.null(param$groupedheader.row)) {
                  #   header = paste0(summary_table$header$col_keys, ' = ', rep(param$groupedheader.row$values, param$groupedheader.row$colspan))
                  #   summary_table = flextable::add_header(summary_table, header)
                  #   summary_table = flextable::add_header(summary_table, value = colnames(value))
                  # }
                  width_table = flextable::dim_pretty(summary_table)$width/(sum(flextable::dim_pretty(summary_table)$width)/(dim_doc$page['width'] - dim_doc$margins['left']/2 - dim_doc$margins['right']/2))
                  summary_table = flextable::autofit(summary_table)
                  summary_table = flextable::width(summary_table, width = width_table)
                  doc = officer::body_add_par(doc, value = label, style = "rTableLegend")
                  doc = flextable::body_add_flextable(doc, summary_table)
                },
                plot = {
                  doc = officer::body_add_gg(doc, x = value, width = 6, height = 5, main = label)
                  doc = officer::body_add_par(doc, value = label, style = "rPlotLegend")
                }
              )
            }
          }
        }
      } else {

        # Number of items in the current subsection
        n.items = length(report.structure$section[[section.index]]$subsection[[subsection.index]]$item)

        # Loop over the items in the current subsection
        for(item.index in 1:n.items) {

          # Create paragraphs for each item
          # Determine the item's type (text by default)
          type = report.structure$section[[section.index]]$subsection[[subsection.index]]$item[[item.index]]$type
          if (is.null(type)) type = "text"
          label = report.structure$section[[section.index]]$subsection[[subsection.index]]$item[[item.index]]$label
          value = report.structure$section[[section.index]]$subsection[[subsection.index]]$item[[item.index]]$value
          param = report.structure$section[[section.index]]$subsection[[subsection.index]]$item[[item.index]]$param
          if (type == "table" & is.null(param)) param = list(span.columns = NULL, groupedheader.row = NULL)

          switch(
            type,
            text = {
              if (label != "") doc = officer::body_add_par(doc, value = paste(label, value), style = "Normal")
              else doc = officer::body_add_par(doc, value = value, style = "Normal")
            },
            table = {
              header.columns = (is.null(param$groupedheader.row))
              summary_table = flextable::regulartable(data = value)
              summary_table = flextable::style(summary_table, pr_p = leftPar, pr_c = header.cellProperties, pr_t = header.textProperties, part = "header")
              summary_table = flextable::style(summary_table, pr_p = leftPar, pr_c = data.cellProperties, pr_t = data.textProperties, part = "body")
              if (!is.null(param$span.columns)) {
                for (ind.span in 1:length(param$span.columns)){
                  summary_table = flextable::merge_v(summary_table, j = param$span.columns[ind.span])
                }
              }
              # if (!is.null(param$groupedheader.row)) {
              #   header = paste0(summary_table$header$col_keys, ' = ', rep(param$groupedheader.row$values, param$groupedheader.row$colspan))
              #   summary_table = flextable::add_header(summary_table, header)
              #   summary_table = flextable::add_header(summary_table, value = colnames(value))
              # }
              width_table = flextable::dim_pretty(summary_table)$width/(sum(flextable::dim_pretty(summary_table)$width)/(dim_doc$page['width'] - dim_doc$margins['left']/2 - dim_doc$margins['right']/2))
              summary_table = flextable::autofit(summary_table)
              summary_table = flextable::width(summary_table, width = width_table)
              doc = officer::body_add_par(doc, value = label, style = "rTableLegend")
              doc = flextable::body_add_flextable(doc, summary_table)
            },
            plot = {
              doc = officer::body_add_gg(doc, x = value, width = 6, height = 5, main = label)
              doc = officer::body_add_par(doc, value = label, style = "rPlotLegend")
            }
          )
        }
      }
    }
  }

  # Save the report
  print(doc, target = report.filename)

  # Return
  return(invisible(report.results))
}
# End of GenerateReport
Mediana/R/TruncatedExpoDist.R
# Function: TruncatedExpoDist
# Argument: List of parameters (number of observations, rate, truncation parameter).
# Description: This function is used to generate outcomes from a truncated exponential distribution.

TruncatedExpoDist = function(parameter) {

  # Error checks
  if (missing(parameter)) stop("Data model: TruncatedExpoDist distribution: List of parameters must be provided.")
  if (is.null(parameter[[2]]$rate)) stop("Data model: TruncatedExpoDist distribution: rate parameter must be specified.")
  if (is.null(parameter[[2]]$trunc)) stop("Data model: TruncatedExpoDist distribution: trunc parameter must be specified.")

  rate = parameter[[2]]$rate
  trunc = parameter[[2]]$trunc

  # Parameters check
  if (rate <= 0) stop("Data model: TruncatedExpoDist distribution: rate parameter must be positive")
  if (trunc <= 0) stop("Data model: TruncatedExpoDist distribution: trunc parameter must be positive")

  # Determine the function call, either to generate the distribution or to return its description
  call = (parameter[[1]] == "description")

  # Generate random variables
  if (call == FALSE) {

    # Error checks
    n = parameter[[1]]
    if (n %% 1 != 0) stop("Data model: TruncatedExpoDist distribution: Number of observations must be an integer.")
    if (n <= 0) stop("Data model: TruncatedExpoDist distribution: Number of observations must be positive.")

    result = -log(1 - stats::runif(n) * (1 - exp(-rate * trunc))) / rate

  } else {
    # Provide information about the distribution function
    if (call == TRUE) {
      # Labels of distributional parameters
      result = list(list("rate", "trunc"), list("Truncated Exponential"))
    }
  }
  return(result)
}
# End of TruncatedExpoDist
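# Illustrative usage sketch (editor's addition; the rate and truncation values are
# hypothetical): the first list element is the number of observations and the second
# carries the distribution parameters, matching the convention used by the generation
# functions in this package. Values are drawn by inverting the truncated CDF.
x = TruncatedExpoDist(list(100, list(rate = 0.2, trunc = 10)))
summary(x)  # all generated values fall in (0, 10) by construction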
Mediana/R/RatioEffectSizePropStat.R
# Compute the ratio of effect sizes for proportions based on non-missing values in the combined sample

RatioEffectSizePropStat = function(sample.list, parameter) {

  # Determine the function call, either to generate the statistic or to return its description
  call = (parameter[[1]] == "Description")

  if (call == FALSE | is.na(call)) {

    # Error checks
    if (length(sample.list) != 4) stop("Analysis model: Four samples must be specified in the RatioEffectSizePropStat statistic.")

    # Merge the samples in the sample list
    sample1 = sample.list[[1]]
    sample2 = sample.list[[2]]
    sample3 = sample.list[[3]]
    sample4 = sample.list[[4]]

    # Select the outcome column and remove the missing values due to dropouts/incomplete observations
    outcome1 = sample1[, "outcome"]
    outcome2 = sample2[, "outcome"]
    prop1 = mean(stats::na.omit(outcome1))
    prop2 = mean(stats::na.omit(outcome2))
    prop = (prop2 + prop1) / 2
    result1 = (prop2 - prop1) / sqrt(prop * (1 - prop))

    # Select the outcome column and remove the missing values due to dropouts/incomplete observations
    outcome3 = sample3[, "outcome"]
    outcome4 = sample4[, "outcome"]
    prop3 = mean(stats::na.omit(outcome3))
    prop4 = mean(stats::na.omit(outcome4))
    prop = (prop3 + prop4) / 2
    result2 = (prop4 - prop3) / sqrt(prop * (1 - prop))

    # Calculate the ratio of effect sizes
    result = result1 / result2

  } else if (call == TRUE) {
    result = list("Ratio of effect size (proportion)")
  }

  return(result)
}
# End of RatioEffectSizePropStat
Mediana/R/MultAdjStrategy.MultAdjProc.R
######################################################################################################################
# Function: MultAdjStrategy.MultAdjProc.
# Argument: MultAdjProc object
# Description: This function is used to create an object of class MultAdjStrategy.
#' @export
MultAdjStrategy.MultAdjProc = function(...) {

  multadjstrat = lapply(list(...), unclass)
  class(multadjstrat) = "MultAdjStrategy"
  return(multadjstrat)
  invisible(multadjstrat)
}
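# Illustrative usage sketch (editor's addition; the procedure names are hypothetical
# choices among the adjustments defined elsewhere in this package, and the call assumes
# the MultAdjStrategy generic defined elsewhere dispatches to the method above):
# combining several per-trial MultAdjProc objects into a single strategy.
strategy = MultAdjStrategy(MultAdjProc(proc = "HolmAdj"),
                           MultAdjProc(proc = "HochbergAdj"))
class(strategy)  # "MultAdjStrategy"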
Mediana/R/ExtractDataStack.R
############################################################################################################################
# Function: ExtractDataStack
# Argument: Data stack, data.scenario, sample.id and simulation.run
# Description: This function extracts the specified datasets according to the data scenario, sample id and simulation run
#              specified by the user.
#' @export
ExtractDataStack = function(data.stack, data.scenario = NULL, sample.id = NULL, simulation.run = NULL){

  # Error checks
  # Check the class of the data stack
  if (class(data.stack) != "DataStack") stop("ExtractDataStack: a DataStack object must be specified in the data.stack argument")

  # Check if the number defined in data.scenario exists in data.stack$data.scenario.grid
  if (!is.null(data.scenario) & any(!(data.scenario %in% 1:nrow(data.stack$data.scenario.grid)))) stop(paste0("ExtractDataStack: the specified data.scenario does not exist (", nrow(data.stack$data.scenario.grid), " data scenarios have been specified in the DataModel)."))

  # Check if the sample is defined in the data structure
  if (!is.null(sample.id) & any(!(sample.id %in% unlist(data.stack$data.structure$id)))) stop(paste0("ExtractDataStack: the specified sample.id does not exist (the sample id ", paste0(unlist(data.stack$data.structure$id), collapse = ", "), " have been specified in the DataModel)."))

  # Check if the simulation run exists
  if (!is.null(simulation.run) & any(!(simulation.run %in% 1:length(data.stack$data.set)))) stop(paste0("ExtractDataStack: the specified simulation.run does not exist (", length(data.stack$data.set), " simulation runs have been performed)."))

  # Get the simulation index specified by the user
  # If null, all simulation runs are selected
  if (is.null(simulation.run)) simulation.run.index = 1:data.stack$sim.parameters$n.sims
  else simulation.run.index = simulation.run

  # Get the data.scenario specified by the user
  if (is.null(data.scenario)) data.scenario.index = 1:nrow(data.stack$data.scenario.grid)
  else data.scenario.index = data.scenario

  # Get the sample.id specified by the user
  if (is.null(sample.id)) sample.id.index = 1:length(unlist(data.stack$data.structure$id))
  else sample.id.index = which(unlist(data.stack$data.structure$id) %in% sample.id)

  # Create the list containing the requested data stack
  data.stack.temp = list()
  data.stack.temp$data.set = lapply(data.stack$data.set[simulation.run.index], function(x) {
    y = list()
    y$data.scenario = lapply(x$data.scenario[data.scenario.index], function(y){
      z = list()
      z$sample = y$sample[sample.id.index]
      return(z)
    })
    return(y)
  })

  return(data.stack.temp)
}
Mediana/R/DisjunctivePower.R
############################################################################################################################
# Function: DisjunctivePower
# Argument: Test results (p-values) across multiple simulation runs (vector or matrix), statistic results (not used in this function),
#           criterion parameter (Type I error rate).
# Description: Compute disjunctive power for the test results (vector of p-values or each column of the p-value matrix).

DisjunctivePower = function(test.result, statistic.result, parameter) {

  # Error check
  if (is.null(parameter$alpha)) stop("Evaluation model: DisjunctivePower: alpha parameter must be specified.")

  alpha = parameter$alpha

  if (is.numeric(test.result)) significant = (test.result <= alpha)
  if (is.matrix(test.result)) significant = (rowSums(test.result <= alpha) > 0)

  power = mean(significant, na.rm = TRUE)

  return(power)
}
# End of DisjunctivePower
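# Illustrative usage sketch (editor's addition; inputs are hypothetical): with a
# p-value matrix, a run counts as a success if at least one test in that row is
# significant, in contrast to MarginalPower, which summarizes each column separately.
sim.pvalues = rbind(c(0.010, 0.200), c(0.400, 0.300), c(0.015, 0.020))
DisjunctivePower(sim.pvalues, statistic.result = NULL, parameter = list(alpha = 0.025))
# Rows 1 and 3 have at least one p <= 0.025, so the result is 2/3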
Mediana/R/GenerateData.R
############################################################################################################################
# Function: GenerateData
# Argument: ....
# Description: This function generates data according to the data model
#' @export
GenerateData = function(data.model, sim.parameters) {
  UseMethod("GenerateData")
}
Mediana/R/InteractionPower.R
############################################################################################################################
# Function: InteractionPower
# Argument: Test results (p-values) across multiple simulation runs (vector or matrix), statistic results,
#           criterion parameter (Influence cutoff).
# Description: Compute probability that the interaction condition is met.

InteractionPower = function(test.result, statistic.result, parameter) {

  # Error checks
  if (is.null(parameter$alpha)) stop("Evaluation model: InteractionPower: alpha parameter must be specified.")
  if (is.null(parameter$cutoff_influence)) stop("Evaluation model: InteractionPower: cutoff_influence parameter must be specified.")
  if (is.null(parameter$cutoff_interaction)) stop("Evaluation model: InteractionPower: cutoff_interaction parameter must be specified.")

  alpha = parameter$alpha
  cutoff_influence = parameter$cutoff_influence
  cutoff_interaction = parameter$cutoff_interaction

  significant = (test.result[, 1] <= alpha & test.result[, 2] <= alpha & statistic.result[, 1] >= cutoff_influence & statistic.result[, 2] >= cutoff_interaction)

  power = mean(significant)

  return(power)
}
# End of InteractionPower
Mediana/R/StepDownDunnettAdj.R
######################################################################################################################
# Function: StepDownDunnettAdj.
# Argument: p, Vector of p-values (1 x m)
#           par, List of procedure parameters: common sample size per trial arm (1 x 1)
# Description: Step-down Dunnett procedure.

StepDownDunnettAdj = function(p, par) {

  # Determine the function call, either to generate the p-value or to return description
  call = (par[[1]] == "Description")

  # Number of p-values
  m = length(p)

  # Extract the common sample size per trial arm (1 x 1)
  if (!any(is.na(par[[2]]))) {
    if (is.null(par[[2]]$n)) stop("Analysis model: Step-down Dunnett procedure: Common sample size per trial arm must be specified.")
    n = par[[2]]$n
  }

  # Error checks
  if (n < 0) stop("Analysis model: Step-down Dunnett procedure: Common sample size per trial arm must be greater than 0.")

  # Number of degrees of freedom
  nu = (m + 1) * (n - 1)

  # Compute test statistics from p-values (assuming that each test statistic follows a t distribution)
  stat = stats::qt(1 - p, df = 2 * (n - 1))

  if (any(call == FALSE) | any(is.na(call))) {

    temp = rep(1, m)
    adjpvalue = rep(1, m)

    # Sort test statistics from largest to smallest
    ordered = order(stat, decreasing = TRUE)
    sorted = stat[ordered]

    temp[1] = 1 - CDFDunnett(sorted[1], nu, m)
    maxp = temp[1]
    for (i in 2:(m - 1)) {
      temp[i] = max(maxp, 1 - CDFDunnett(sorted[i], nu, m - i + 1))
      maxp = max(maxp, temp[i])
    }
    temp[m] = max(maxp, 1 - stats::pt(sorted[m], nu))

    # Return to the original ordering
    adjpvalue[ordered] = temp
    result = adjpvalue

  } else if (call == TRUE) {
    n = paste0("Common sample size={", n, "}")
    result = list(list("Step-down Dunnett procedure"), list(n))
  }
  return(result)
}
# End of StepDownDunnettAdj
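# Illustrative usage sketch (editor's addition; the p-values and sample size are
# hypothetical): the second argument follows the list("Analysis", par) convention used
# when the procedures are called outside an analysis model. Note that the procedure
# relies on the internal CDFDunnett helper defined elsewhere in the package.
raw.p = c(0.011, 0.028, 0.045)
StepDownDunnettAdj(raw.p, list("Analysis", list(n = 50)))
# Returns monotone adjusted p-values for the three dose-control comparisons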
Mediana/R/appendList.R
######################################################################################################################
# Function: appendList.
# Argument: Two lists.
# Description: This function is used to merge two lists

appendList <- function (x, val) {
  stopifnot(is.list(x), is.list(val))
  xnames <- names(x)
  for (v in names(val)) {
    x[[v]] <- if (v %in% xnames && is.list(x[[v]]) && is.list(val[[v]]))
      appendList(x[[v]], val[[v]])
    else c(x[[v]], val[[v]])
  }
  x
}
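# Illustrative usage sketch (editor's addition; the lists are hypothetical): nested
# lists that share a name are merged recursively, while other elements are concatenated.
appendList(list(a = 1, b = list(x = 1)), list(b = list(y = 2), c = 3))
# Returns list(a = 1, b = list(x = 1, y = 2), c = 3)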
Mediana/R/DataStack.default.R
############################################################################################################################
# Function: DataStack
# Argument: ....
# Description: This function generates data according to the data model
#' @export
DataStack.default = function(data.model, sim.parameters){

  # Check the class of the data.model and sim.parameters arguments
  if (!(class(data.model) == ("DataModel"))) stop("DataStack: a DataModel object must be specified in the data.model argument")
  if (!(class(sim.parameters) == c("SimParameters"))) stop("DataStack: a SimParameters object must be specified in the sim.parameters argument")

  # Simulation parameters
  # Number of simulation runs
  if (is.null(sim.parameters$n.sims)) stop("DataStack: The number of simulation runs must be provided (n.sims)")
  n.sims = sim.parameters$n.sims
  if (!is.numeric(n.sims)) stop("DataStack: Number of simulation runs must be an integer.")
  if (length(n.sims) > 1) stop("DataStack: Number of simulation runs: Only one value must be specified.")
  if (n.sims %% 1 != 0) stop("DataStack: Number of simulation runs must be an integer.")
  if (n.sims <= 0) stop("DataStack: Number of simulation runs must be positive.")

  # Seed
  if (is.null(sim.parameters$seed)) stop("The seed must be provided (seed)")
  seed = sim.parameters$seed
  if (!is.numeric(seed)) stop("Seed must be an integer.")
  if (length(seed) > 1) stop("Seed: Only one value must be specified.")
  if (nchar(as.character(seed)) > 10) stop("Length of seed must be less than 10.")

  # Processor load
  if (!is.null(sim.parameters$proc.load)){
    proc.load = sim.parameters$proc.load
    if (is.numeric(proc.load)){
      if (length(proc.load) > 1) stop("Number of cores: Only one value must be specified.")
      if (proc.load %% 1 != 0) stop("Number of cores must be an integer.")
      if (proc.load <= 0) stop("Number of cores must be positive.")
      n.cores = proc.load
    } else if (is.character(proc.load)){
      n.cores = switch(proc.load,
                       low = {1},
                       med = {parallel::detectCores()/2},
                       high = {parallel::detectCores()-1},
                       full = {parallel::detectCores()},
                       {stop("Processor load not valid")})
    }
  } else n.cores = 1

  sim.parameters = list(n.sims = n.sims, seed = seed, proc.load = n.cores)

  # Dummy call of the function to check the structure of the DataModel and stop with an error in the log if there is an issue
  dummy.data = CreateDataStack(data.model = data.model, n.sims = 1)
  dummy.data = NULL

  # Simulation parameters
  # Use proc.load to generate the clusters
  cluster.mediana = parallel::makeCluster(getOption("cluster.mediana.cores", sim.parameters$proc.load))

  # To make this reproducible, the seed is used to initialize the cluster RNG streams
  set.seed(seed)
  parallel::clusterSetRNGStream(cluster.mediana, seed)

  # Export all functions in the global environment to each node
  parallel::clusterExport(cluster.mediana, ls(envir = .GlobalEnv))
  doParallel::registerDoParallel(cluster.mediana)

  # Simulation index initialisation
  sim.index = 0

  # Generate the data
  data.stack.temp = foreach::foreach(sim.index = 1:sim.parameters$n.sims, .packages = (.packages())) %dorng% {
    data = CreateDataStack(data.model = data.model, n.sims = 1)
  }

  # Stop the cluster
  parallel::stopCluster(cluster.mediana)
  # closeAllConnections()

  data.stack = list()
  data.stack$description = "data.stack"
  data.stack$data.set = lapply(data.stack.temp, function(x) x$data.set[[1]])
  data.stack$data.scenario.grid = data.stack.temp[[1]]$data.scenario.grid
  data.stack$data.structure = data.stack.temp[[1]]$data.structure
  data.stack$sim.parameters = sim.parameters

  class(data.stack) = "DataStack"
  return(data.stack)
}
Mediana/R/ExtractAnalysisStack.R
############################################################################################################################
# Function: ExtractAnalysisStack
# Argument: Analysis stack, data.scenario, simulation.run
# Description: This function extracts the specified results according to the data scenario and simulation run
#              specified by the user.
#' @export
ExtractAnalysisStack = function(analysis.stack, data.scenario = NULL, simulation.run = NULL){

  # Error checks
  # Check the class of the analysis stack
  if (class(analysis.stack) != "AnalysisStack") stop("ExtractAnalysisStack: an AnalysisStack object must be specified in the analysis.stack argument")

  # Check if the number defined in data.scenario exists in analysis.stack$analysis.scenario.grid
  if (!is.null(data.scenario) & any(!(data.scenario %in% 1:nrow(analysis.stack$analysis.scenario.grid)))) stop(paste0("ExtractAnalysisStack: the specified data.scenario does not exist (", nrow(analysis.stack$analysis.scenario.grid), " analysis scenarios have been specified in the AnalysisModel)."))

  # Check if the simulation run exists
  if (!is.null(simulation.run) & any(!(simulation.run %in% 1:length(analysis.stack$analysis.set)))) stop(paste0("ExtractAnalysisStack: the specified simulation.run does not exist (", length(analysis.stack$analysis.set), " simulation runs have been performed)."))

  # Get the simulation index specified by the user
  # If null, all simulation runs are selected
  if (is.null(simulation.run)) simulation.run.index = 1:analysis.stack$sim.parameters$n.sims
  else simulation.run.index = simulation.run

  # Get the data.scenario specified by the user
  if (is.null(data.scenario)) data.scenario.index = 1:nrow(analysis.stack$analysis.scenario.grid)
  else data.scenario.index = data.scenario

  # Create the list containing the requested analysis stack
  analysis.stack.temp = list()
  analysis.stack.temp$analysis.set = lapply(analysis.stack$analysis.set[simulation.run.index], function(x) x[data.scenario.index])

  return(analysis.stack.temp)
}
Mediana/R/HommelAdj.R
######################################################################################################################
# Function: HommelAdj.
# Argument: p, Vector of p-values (1 x m)
#           par, List of procedure parameters: vector of hypothesis weights (1 x m)
# Description: Hommel multiple testing procedure (using the closed testing procedure).

HommelAdj = function(p, par) {

  # Determine the function call, either to generate the p-value or to return description
  call = (par[[1]] == "Description")

  # Number of p-values
  m = length(p)

  # Extract the vector of hypothesis weights (1 x m)
  if (!any(is.na(par[[2]]))) {
    if (is.null(par[[2]]$weight)) stop("Analysis model: Hommel procedure: Hypothesis weights must be specified.")
    w = par[[2]]$weight
  } else {
    w = rep(1/m, m)
  }

  if (any(call == FALSE) | any(is.na(call))) {

    # Error checks
    if (length(w) != m) stop("Analysis model: Hommel procedure: Length of the weight vector must be equal to the number of hypotheses.")
    if (sum(w) != 1) stop("Analysis model: Hommel procedure: Hypothesis weights must add up to 1.")
    if (any(w < 0)) stop("Analysis model: Hommel procedure: Hypothesis weights must be greater than 0.")

    # Weighted Simes p-value for an intersection hypothesis
    simes <- function(p, w) {
      nb <- length(w[w != 0 & !is.nan(w)])
      if (nb > 1) {
        p.sort <- sort(p[w != 0])
        w.sort <- w[w != 0][order(p[w != 0])]
        simes <- min(p.sort/cumsum(w.sort))
      }
      else if (nb == 1) simes <- pmin(1, p/w)
      else if (nb == 0) simes <- 1
      return(simes)
    }

    # Number of intersections
    nbint <- 2^m - 1

    # Matrix of intersection hypotheses
    int <- matrix(0, nbint, m)
    for (i in 1:m) {
      for (j in 0:(nbint - 1)) {
        k <- floor(j/2^(m - i))
        if (k/2 == floor(k/2)) int[j + 1, i] <- 1
      }
    }

    # Matrix of local p-values
    int.pval <- matrix(0, nbint, m)

    # Vector of weights for the local test
    w.loc <- rep(0, m)

    # Local p-values for intersection hypotheses
    for (i in 1:nbint) {
      w.loc <- w * int[i, ]/sum(w * int[i, ])
      int.pval[i, ] <- int[i, ] * simes(p, w.loc)
    }

    adjpvalue <- apply(int.pval, 2, max)
    result = adjpvalue

  } else if (call == TRUE) {
    weight = paste0("Weight={", paste(round(w, 2), collapse = ","), "}")
    result = list(list("Hommel procedure"), list(weight))
  }
  return(result)
}
# End of HommelAdj
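# Illustrative usage sketch (editor's addition; p-values and weights are hypothetical):
# the second argument follows the list("Analysis", par) convention used when the
# procedures are called outside an analysis model; equal weights reduce to the
# unweighted Hommel procedure.
HommelAdj(c(0.011, 0.018, 0.023), list("Analysis", list(weight = rep(1/3, 3))))
# Returns the Hommel-adjusted p-values via the closed testing procedure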
Mediana/R/PresentationModel.Subsection.R
######################################################################################################################
# Function: PresentationModel.Subsection
# Argument: Subsection object.
# Description: This function is called by default if the class of the argument is a Subsection object.
#' @export
PresentationModel.Subsection = function(subsection, ...) {
  presentationmodel = PresentationModel()
  presentationmodel = presentationmodel + subsection

  args = list(...)
  if (length(args) > 0) {
    for (i in 1:length(args)){
      presentationmodel = presentationmodel + args[[i]]
    }
  }
  return(presentationmodel)
}
Mediana/R/DataStack.R
############################################################################################################################
# Function: DataStack
# Argument: ....
# Description: This function generates data according to the data model
#' @export
DataStack = function(data.model, sim.parameters) {
  UseMethod("DataStack")
}
Mediana/R/AnalysisModel.Interim.R
######################################################################################################################
# Function: AnalysisModel.Interim
# Argument: Interim object.
# Description: This function is called by default if the class of the argument is an Interim object.
AnalysisModel.Interim = function(interim, ...) {
  analysismodel = AnalysisModel()
  analysismodel = analysismodel + interim

  args = list(...)
  if (length(args) > 0) {
    for (i in 1:length(args)){
      analysismodel = analysismodel + args[[i]]
    }
  }
  return(analysismodel)
}
Mediana/R/ConjunctivePower.R
############################################################################################################################
# Function: ConjunctivePower
# Argument: Test results (p-values) across multiple simulation runs (vector or matrix), statistic results (not used in this function),
#           criterion parameter (Type I error rate).
# Description: Compute conjunctive power for the test results (vector of p-values or each column of the p-value matrix).

ConjunctivePower = function(test.result, statistic.result, parameter) {

  # Error check
  if (is.null(parameter$alpha)) stop("Evaluation model: ConjunctivePower: alpha parameter must be specified.")

  alpha = parameter$alpha

  if (is.numeric(test.result)) significant = (test.result <= alpha)
  if (is.matrix(test.result)) significant = (rowSums(test.result <= alpha) == ncol(test.result))

  power = mean(significant, na.rm = TRUE)

  return(power)
}
# End of ConjunctivePower
Mediana/R/DunnettAdj.R
######################################################################################################################
# Function: DunnettAdj.
# Argument: p, Vector of p-values (1 x m)
#           par, List of procedure parameters: common sample size per trial arm (1 x 1)
# Description: Single-step Dunnett procedure.

DunnettAdj = function(p, par) {

  # Determine the function call, either to generate the p-value or to return description
  call = (par[[1]] == "Description")

  # Number of test statistics
  m = length(p)

  # Extract the common sample size per trial arm (1 x 1)
  if (!any(is.na(par[[2]]))) {
    if (is.null(par[[2]]$n)) stop("Analysis model: Single-step Dunnett procedure: Common sample size per trial arm must be specified.")
    n = par[[2]]$n
  }

  # Error checks
  if (n < 0) stop("Analysis model: Single-step Dunnett procedure: Common sample size per trial arm must be greater than 0.")

  # Number of degrees of freedom
  nu = (m + 1) * (n - 1)

  # Compute test statistics from p-values (assuming that each test statistic follows a t distribution)
  stat = stats::qt(1 - p, df = 2 * (n - 1))

  if (any(call == FALSE) | any(is.na(call))) {
    # Adjusted p-values
    result = sapply(stat, function(x) 1 - CDFDunnett(x, nu, m))
  } else if (call == TRUE) {
    n = paste0("Common sample size={", n, "}")
    result = list(list("Single-step Dunnett procedure"), list(n))
  }
  return(result)
}
# End of DunnettAdj
Mediana/R/is.DataModel.R
######################################################################################################################
# Function: is.DataModel.
# Argument: an object.
# Description: Return TRUE if the object is of class DataModel

is.DataModel = function(arg){
  return(any(class(arg) == "DataModel"))
}
Mediana/R/HazardRatioStat.R
# Compute the hazard ratio based on non-missing values in the combined sample

HazardRatioStat = function(sample.list, parameter) {

  # Determine the function call, either to generate the statistic or to return its description
  call = (parameter[[1]] == "Description")

  if (call == FALSE | is.na(call)) {

    # Error checks
    if (length(sample.list) != 2) stop("Analysis model: Two samples must be specified in the HazardRatioStat statistic.")
    if (is.na(parameter[[2]])) method = "Log-Rank"
    else {
      if (!(parameter[[2]]$method %in% c("Log-Rank", "Cox"))) stop("Analysis model: HazardRatioStat statistic: the method must be Log-Rank or Cox.")
      method = parameter[[2]]$method
    }

    # Outcomes in Sample 1
    outcome1 = sample.list[[1]][, "outcome"]
    # Remove the missing values due to dropouts/incomplete observations
    outcome1.complete = outcome1[stats::complete.cases(outcome1)]
    # Observed events in Sample 1 (negation of censoring indicators)
    event1 = !sample.list[[1]][, "patient.censor.indicator"]
    event1.complete = event1[stats::complete.cases(outcome1)]
    # Sample size in Sample 1
    n1 = length(outcome1.complete)

    # Outcomes in Sample 2
    outcome2 = sample.list[[2]][, "outcome"]
    # Remove the missing values due to dropouts/incomplete observations
    outcome2.complete = outcome2[stats::complete.cases(outcome2)]
    # Observed events in Sample 2 (negation of censoring indicators)
    event2 = !sample.list[[2]][, "patient.censor.indicator"]
    event2.complete = event2[stats::complete.cases(outcome2)]
    # Sample size in Sample 2
    n2 = length(outcome2.complete)

    # Create combined samples of outcomes, censoring indicators (all events are observed) and treatment indicators
    outcome = c(outcome1.complete, outcome2.complete)
    event = c(event1.complete, event2.complete)
    treatment = c(rep(0, n1), rep(1, n2))

    # Get the HR
    if (method == "Log-Rank"){
      surv.test = survival::survdiff(survival::Surv(outcome, event) ~ treatment)
      result = (surv.test$obs[2]/surv.test$exp[2])/(surv.test$obs[1]/surv.test$exp[1])
    } else if (method == "Cox"){
      result = summary(survival::coxph(survival::Surv(outcome, event) ~ treatment))$coef[, "exp(coef)"]
    }

  } else if (call == TRUE) {
    if (is.na(parameter[[2]])) result = list("Hazard Ratio")
    else result = list("Hazard Ratio", paste0("method = ", parameter[[2]]$method))
  }
  return(result)
}
# End of HazardRatioStat
Mediana/R/InfluencePower.R
############################################################################################################################
# Function: InfluencePower
# Argument: Test results (p-values) across multiple simulation runs (vector or matrix), statistic results,
#           criterion parameter (Influence cutoff).
# Description: Compute probability that the influence condition is met.

InfluencePower = function(test.result, statistic.result, parameter) {

  # Error checks
  if (is.null(parameter$alpha)) stop("Evaluation model: InfluencePower: alpha parameter must be specified.")
  if (is.null(parameter$cutoff)) stop("Evaluation model: InfluencePower: cutoff parameter must be specified.")

  alpha = parameter$alpha
  cutoff_influence = parameter$cutoff

  significant = ((test.result[, 1] <= alpha & test.result[, 2] <= alpha & statistic.result[, 1] >= cutoff_influence) | (test.result[, 1] <= alpha & test.result[, 2] > alpha))

  power = mean(significant)

  return(power)
}
# End of InfluencePower
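# Illustrative usage sketch (editor's addition; all inputs are hypothetical): the two
# columns of test.result hold the overall and subgroup p-values per run, and the first
# column of statistic.result holds the influence statistic compared against the cutoff.
test.result = rbind(c(0.010, 0.020), c(0.015, 0.200), c(0.300, 0.400))
statistic.result = rbind(c(0.62, 0.3), c(0.40, 0.2), c(0.55, 0.1))
InfluencePower(test.result, statistic.result, list(alpha = 0.025, cutoff = 0.5))
# Rows 1 and 2 meet the influence condition, so the result is 2/3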
Mediana/R/EvaluationModel.R
######################################################################################################################
# Function: EvaluationModel.
# Argument: ....
# Description: This function is used to call the corresponding function according to the class of the argument.
#' @export
EvaluationModel = function(...) {
  UseMethod("EvaluationModel")
}
Mediana/R/ParallelGatekeepingAdj.R
######################################################################################################################
# Function: ParallelGatekeepingAdj.
# Argument: rawp, Raw p-value.
#           par, List of procedure parameters: vector of family (1 x m), vector of component procedure labels
#           ('BonferroniAdj.global' or 'HolmAdj.global' or 'HochbergAdj.global' or 'HommelAdj.global') (1 x nfam),
#           vector of truncation parameters for component procedures used in individual families (1 x nfam)
# Description: Computation of adjusted p-values for mixture parallel gatekeeping (ref Dmitrienko et al. (2011))

ParallelGatekeepingAdj = function(rawp, par) {

  # Determine the function call, either to generate the p-value or to return description
  call = (par[[1]] == "Description")

  if (any(call == FALSE) | any(is.na(call))) {

    # Error checks
    if (is.null(par[[2]]$family)) stop("Analysis model: Parallel gatekeeping procedure: Hypothesis families must be specified.")
    if (is.null(par[[2]]$proc)) stop("Analysis model: Parallel gatekeeping procedure: Procedures must be specified.")
    if (is.null(par[[2]]$gamma)) stop("Analysis model: Parallel gatekeeping procedure: Gamma must be specified.")

    # Number of p-values
    nhyp = length(rawp)

    # Extract the vector of families (1 x m)
    family = par[[2]]$family

    # Number of families in the multiplicity problem
    nfam = length(family)

    # Number of null hypotheses per family
    nperfam = rep(0, nfam)
    for (j in 1:nfam) {
      nperfam[j] = length(family[[j]])
    }

    # Extract the vector of procedures (1 x m)
    proc = paste(unlist(par[[2]]$proc), ".global", sep = "")

    # Extract the vector of truncation parameters (1 x m)
    gamma = unlist(par[[2]]$gamma)

    # Simple error checks
    if (nhyp != length(unlist(family))) stop("Parallel gatekeeping adjustment: Length of the p-value vector must be equal to the number of hypotheses.")
    if (length(proc) != nfam) stop("Parallel gatekeeping adjustment: Length of the procedure vector must be equal to the number of families.")
    else {
      for (i in 1:nfam) {
        if (proc[i] %in% c("BonferroniAdj.global", "HolmAdj.global", "HochbergAdj.global", "HommelAdj.global") == FALSE) stop("Parallel gatekeeping adjustment: Only Bonferroni (BonferroniAdj), Holm (HolmAdj), Hochberg (HochbergAdj) and Hommel (HommelAdj) component procedures are supported.")
      }
    }
    if (length(gamma) != nfam) stop("Mixture parallel gatekeeping: Length of the gamma vector must be equal to the number of families.")
    else {
      for (i in 1:nfam) {
        if (gamma[i] < 0 | gamma[i] > 1) stop("Parallel gatekeeping adjustment: Gamma must be between 0 (included) and 1 (included).")
        if (proc[i] == "bonferroni.global" & gamma[i] != 0) stop("Parallel gatekeeping adjustment: Gamma must be set to 0 for the Bonferroni procedure.")
      }
    }

    # Number of intersection hypotheses in the closed family
    nint = 2^nhyp - 1

    # Construct the intersection index sets (int_orig) before the logical restrictions are applied.
    # Each row is a vector of binary indicators (1 if the hypothesis is included in the original index set and 0 otherwise)
    int_orig = matrix(0, nint, nhyp)
    for (i in 1:nhyp) {
      for (j in 0:(nint - 1)) {
        k = floor(j/2^(nhyp - i))
        if (k/2 == floor(k/2)) int_orig[j + 1, i] = 1
      }
    }

    # Number of null hypotheses from each family included in each intersection before the logical restrictions are applied
    korig = matrix(0, nint, nfam)

    # Compute korig
    for (j in 1:nfam) {
      # Index vector in the current family
      # index = which(family == j)
      index = family[[j]]
      korig[, j] = apply(as.matrix(int_orig[, index]), 1, sum)
    }

    # Vector of intersection p-values
    pint = rep(1, nint)

    # Matrix of component p-values within each intersection
    pcomp = matrix(0, nint, nfam)

    # Matrix of family weights within each intersection
    c = matrix(0, nint, nfam)

    # P-value for each hypothesis within each intersection
    p = matrix(0, nint, nhyp)

    # Compute the intersection p-value for each intersection hypothesis
    for (i in 1:nint) {

      # Compute component p-values
      for (j in 1:nfam) {
        # Restricted index set in the current family
        int = int_orig[i, family[[j]]]
        # Set of p-values in the current family
        pv = rawp[family[[j]]]
        # Select raw p-values included in the restricted index set
        pselected = pv[int == 1]
        # Total number of hypotheses used in the computation of the component p-value
        tot = nperfam[j]
        pcomp[i, j] = do.call(proc[j], list(pselected, tot, gamma[j]))
      }

      # Compute family weights
      c[i, 1] = 1
      for (j in 2:nfam) {
        c[i, j] = c[i, j - 1] * (1 - errorfrac(korig[i, j - 1], nperfam[j - 1], gamma[j - 1]))
      }

      # Compute the intersection p-value for the current intersection hypothesis
      pint[i] = pmin(1, min(ifelse(c[i, ] > 0, pcomp[i, ]/c[i, ], NA), na.rm = TRUE))

      # Compute the p-value for each hypothesis within the current intersection
      p[i, ] = int_orig[i, ] * pint[i]
    }

    # Compute adjusted p-values
    adjustedp = apply(p, 2, max)
    result = adjustedp

  } else if (call == TRUE) {
    family = par[[2]]$family
    nfam = length(family)
    proc = unlist(par[[2]]$proc)
    gamma = unlist(par[[2]]$gamma)
    test.id = unlist(par[[3]])
    proc.par = data.frame(nrow = nfam, ncol = 4)
    for (i in 1:nfam){
      proc.par[i, 1] = i
      proc.par[i, 2] = paste0("{", paste(test.id[family[[i]]], collapse = ", "), "}")
      proc.par[i, 3] = proc[i]
      proc.par[i, 4] = gamma[i]
    }
    colnames(proc.par) = c("Family", "Tests", "Component procedure", "Truncation parameter")
    result = list(list("Parallel gatekeeping"), list(proc.par))
  }
  return(result)
}
# End of ParallelGatekeepingAdj
Mediana/R/TTest.R
######################################################################################################################
# Function: TTest.
# Argument: Data set and parameter (call type).
# Description: Computes one-sided p-value based on two-sample t-test.

TTest = function(sample.list, parameter) {

  # Determine the function call, either to generate the p-value or to return description
  call = (parameter[[1]] == "Description")

  if (call == FALSE | is.na(call)) {

    # No parameters are defined
    if (is.na(parameter[[2]])) {
      larger = TRUE
    } else {
      # Check the name of the arguments
      if (!all(names(parameter[[2]]) %in% c("larger"))) stop("Analysis model: TTest test: this function accepts only one argument (larger)")
      # larger argument
      if (!is.logical(parameter[[2]]$larger)) stop("Analysis model: TTest test: the larger argument must be logical (TRUE or FALSE).")
      larger = parameter[[2]]$larger
    }

    # Sample list is assumed to include two data frames that represent two analysis samples
    # Outcomes in Sample 1
    outcome1 = sample.list[[1]][, "outcome"]
    # Remove the missing values due to dropouts/incomplete observations
    outcome1.complete = outcome1[stats::complete.cases(outcome1)]

    # Outcomes in Sample 2
    outcome2 = sample.list[[2]][, "outcome"]
    # Remove the missing values due to dropouts/incomplete observations
    outcome2.complete = outcome2[stats::complete.cases(outcome2)]

    # One-sided p-value (treatment effect in sample 2 is expected to be greater than in sample 1)
    if (larger) result = stats::t.test(outcome2.complete, outcome1.complete, alternative = "greater")$p.value
    else result = stats::t.test(outcome2.complete, outcome1.complete, alternative = "less")$p.value

  } else if (call == TRUE) {
    result = list("Student's t-test")
  }
  return(result)
}
# End of TTest
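# Illustrative usage sketch (editor's addition; the simulated samples are hypothetical):
# each sample is a data frame with an "outcome" column, and the one-sided alternative
# assumes a larger effect in the second sample when parameter[[2]] is NA (the default).
set.seed(1)
placebo = data.frame(outcome = rnorm(50, mean = 0))
treatment = data.frame(outcome = rnorm(50, mean = 0.4))
TTest(list(placebo, treatment), parameter = list("Analysis", NA))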
Mediana/R/EffectSizePropStat.R
######################################################################################################################
# Compute the effect size for binary data based on non-missing values in the combined sample

EffectSizePropStat = function(sample.list, parameter) {

  # Determine the function call, either to generate the statistic or to return its description
  call = (parameter[[1]] == "Description")

  if (call == FALSE | is.na(call)) {

    # Error checks
    if (length(sample.list) != 2) stop("Analysis model: Two samples must be specified in the EffectSizePropStat statistic.")

    # Merge the samples in the sample list
    sample1 = sample.list[[1]]
    sample2 = sample.list[[2]]

    # Select the outcome column and remove the missing values due to dropouts/incomplete observations
    outcome1 = sample1[, "outcome"]
    outcome2 = sample2[, "outcome"]
    prop1 = mean(stats::na.omit(outcome1))
    prop2 = mean(stats::na.omit(outcome2))
    prop = (prop2 + prop1) / 2
    result = (prop2 - prop1) / sqrt(prop * (1 - prop))

  } else if (call == TRUE) {
    result = list("Effect size (proportion)")
  }

  return(result)
}
# End of EffectSizePropStat
Mediana/R/EffectSizeCoxEventStat.R
######################################################################################################################
# Compute the log hazard ratio based on non-missing values in the combined sample

EffectSizeCoxEventStat = function(sample.list, parameter) {

  # Determine the function call, either to generate the statistic or to return its description
  call = (parameter[[1]] == "Description")

  if (call == FALSE | is.na(call)) {

    # Error checks
    if (length(sample.list) != 2) stop("Analysis model: Two samples must be specified in the EffectSizeCoxEventStat statistic.")

    # Outcomes in Sample 1
    outcome1 = sample.list[[1]][, "outcome"]
    # Remove the missing values due to dropouts/incomplete observations
    outcome1.complete = outcome1[stats::complete.cases(outcome1)]
    # Observed events in Sample 1 (negation of censoring indicators)
    event1 = !sample.list[[1]][, "patient.censor.indicator"]
    event1.complete = event1[stats::complete.cases(outcome1)]
    # Sample size in Sample 1
    n1 = length(outcome1.complete)

    # Outcomes in Sample 2
    outcome2 = sample.list[[2]][, "outcome"]
    # Remove the missing values due to dropouts/incomplete observations
    outcome2.complete = outcome2[stats::complete.cases(outcome2)]
    # Observed events in Sample 2 (negation of censoring indicators)
    event2 = !sample.list[[2]][, "patient.censor.indicator"]
    event2.complete = event2[stats::complete.cases(outcome2)]
    # Sample size in Sample 2
    n2 = length(outcome2.complete)

    # Create combined samples of outcomes, censoring indicators (all events are observed) and treatment indicators
    outcome = c(outcome1.complete, outcome2.complete)
    event = c(event1.complete, event2.complete)
    treatment = c(rep(0, n1), rep(1, n2))

    # Get the log HR from the Cox model
    result = log(summary(survival::coxph(survival::Surv(outcome, event) ~ treatment))$coef[, "exp(coef)"])

  } else if (call == TRUE) {
    result = list("Effect size (event Cox)")
  }
  return(result)
}
# End of EffectSizeCoxEventStat
Mediana/R/OutcomeDist.R
######################################################################################################################
# Function: OutcomeDist.
# Argument: Outcome Distribution and Outcome Type
# Description: This function is used to create an object of class OutcomeDist.
#' @export
OutcomeDist = function(outcome.dist, outcome.type = NULL) {

  # Error checks
  if (!is.character(outcome.dist)) stop("Outcome: outcome distribution must be character.")
  if (!is.null(outcome.type)) {
    if (!is.character(unlist(outcome.type))) stop("Outcome: outcome type must be character.")
    if (!all((unlist(outcome.type) %in% c("event", "standard"))) == TRUE) stop("Outcome: outcome type must be event or standard")
  }

  outcome = list(outcome.dist = outcome.dist, outcome.type = outcome.type)
  class(outcome) = "OutcomeDist"
  return(outcome)
  invisible(outcome)
}
Mediana/R/parameters.R
# Function: parameters
# Argument: Multiple character strings.
# Description: This function is used mostly for the user's convenience. It simply creates a list of character strings and
#              can be used in cases where multiple parameters need to be specified.
#' @export
parameters = function(...) {
  args = list(...)
  nargs = length(args)
  if (nargs <= 0) stop("Parameters function: At least one parameter must be specified.")
  return(args)
  invisible(args)
}
Mediana/R/MultAdjProc.R
######################################################################################################################
# Function: MultAdjProc.
# Argument: ....
# Description: This function is used to call the corresponding function according to the class of the argument.
#' @export
MultAdjProc = function(proc, par = NULL, tests = NULL) {

  # Error checks
  if (!is.na(proc) & !is.character(proc)) stop("MultAdj: multiplicity adjustment procedure must be character.")
  if (!is.null(par) & !is.list(par)) stop("MultAdj: par must be wrapped in a list.")
  if (!is.null(tests) & !is.list(tests)) stop("MultAdj: tests must be wrapped in a list.")
  if (any(lapply(tests, is.character) == FALSE)) stop("MultAdj: tests must be character.")

  mult.adjust = list(proc = proc, par = par, tests = tests)
  class(mult.adjust) = "MultAdjProc"
  return(mult.adjust)
  invisible(mult.adjust)
}
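# Illustrative usage sketch (editor's addition; the weights and test labels are
# hypothetical): par and tests use the parameters() and tests() wrappers defined
# elsewhere in this package.
mult.adj = MultAdjProc(proc = "HolmAdj",
                       par = parameters(weight = c(0.5, 0.25, 0.25)),
                       tests = tests("Test 1", "Test 2", "Test 3"))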
Mediana/R/Criterion.R
######################################################################################################################
# Function: Criterion.
# Argument: Criterion ID, method, tests, statistics, parameters, labels.
# Description: This function is used to create an object of class Criterion
#' @export
Criterion = function(id, method, tests = NULL, statistics = NULL, par = NULL, labels) {

  # Error checks
  if (!is.character(id)) stop("Criterion: ID must be character.")
  if (!is.character(method)) stop("Criterion: method must be character.")
  if (!is.null(tests) & !is.list(tests)) stop("Criterion: tests must be wrapped in a list.")
  if (any(lapply(tests, is.character) == FALSE)) stop("Criterion: tests must be character.")
  if (!is.null(statistics) & !is.list(statistics)) stop("Criterion: statistics must be wrapped in a list.")
  if (any(lapply(statistics, is.character) == FALSE)) stop("Criterion: statistics must be character.")
  if (is.null(tests) & is.null(statistics)) stop("Criterion: tests and/or statistics must be provided")

  criterion = list(id = id,
                   method = method,
                   tests = tests,
                   statistics = statistics,
                   par = par,
                   labels = labels)

  class(criterion) = "Criterion"
  return(criterion)
  invisible(criterion)
}
Mediana/R/FallbackAdj.R
######################################################################################################################
# Function: FallbackAdj.
# Argument: p, Vector of p-values (1 x m)
#           par, List of procedure parameters: vector of hypothesis weights (1 x m)
# Description: Fallback procedure.

FallbackAdj = function(p, par) {

  # Determine the function call, either to generate the p-value or to return description
  call = (par[[1]] == "Description")

  # Number of p-values
  m = length(p)

  # Extract the vector of hypothesis weights (1 x m)
  if (!any(is.na(par[[2]]))) {
    if (is.null(par[[2]]$weight)) stop("Analysis model: Fallback procedure: Hypothesis weights must be specified.")
    w = par[[2]]$weight
  } else {
    w = rep(1/m, m)
  }

  if (any(call == FALSE) | any(is.na(call))) {

    # Error checks
    if (length(w) != m) stop("Analysis model: Fallback procedure: Length of the weight vector must be equal to the number of hypotheses.")
    if (sum(w) != 1) stop("Analysis model: Fallback procedure: Hypothesis weights must add up to 1.")
    if (any(w < 0)) stop("Analysis model: Fallback procedure: Hypothesis weights must be greater than 0.")

    # Number of intersections
    nbint <- 2^m - 1

    # Matrix of intersection hypotheses
    int <- matrix(0, nbint, m)
    for (i in 1:m) {
      for (j in 0:(nbint - 1)) {
        k <- floor(j/2^(m - i))
        if (k/2 == floor(k/2)) int[j + 1, i] <- 1
      }
    }
    # int = as.matrix(expand.grid(rep(list(0:1), m)))[-1, ]

    # Calculate all intersection local p-values
    int.all.pval = t(apply(int, 1, function(x) p/fallback_weight(w, x)))

    # Calculate the intersection p-values
    # int.pval = int*apply(int.all.pval, 1, min)

    # Calculate the adjusted p-values
    result = pmin(1, apply(int*apply(int.all.pval, 1, min), 2, max))

  } else if (call == TRUE) {
    weight = paste0("Weight={", paste(round(w, 2), collapse = ","), "}")
    result = list(list("Fallback procedure"), list(weight))
  }
  return(result)
}
# End of FallbackAdj

# Add-on function used in the FallbackAdj function
fallback_weight = function(w, int){
  v = rep(0, length(w))
  v[1] = int[1]*w[1]
  for (i in 2:length(w)){
    v[i] = int[i] * (sum(w[1:i]) - sum(v[1:(i-1)]))
  }
  v
}
Mediana/R/AdjustPvalues.R
##############################################################################################################################################
# Function: AdjustPvalues
# Argument: pval (vector) and proc and par (list of parameters).
# Description: This function returns adjusted p-values according to the multiple testing procedure specified in the proc argument
#' @export
AdjustPvalues = function(pval, proc, par = NA){

  # Check if the multiplicity adjustment procedure is specified, check if it exists
  if (!exists(proc)) {
    stop(paste0("AdjustPvalues: Multiplicity adjustment procedure function '", proc, "' does not exist."))
  } else if (!is.function(get(as.character(proc), mode = "any"))) {
    stop(paste0("AdjustPvalues: Multiplicity adjustment procedure function '", proc, "' does not exist."))
  }

  adjustpval = do.call(proc, list(pval, list("Analysis", par)))
  return(adjustpval)
}
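# Illustrative usage sketch (editor's addition; p-values and weights are hypothetical):
# AdjustPvalues wraps any adjustment function above, here the fallback procedure with
# unequal hypothesis weights.
AdjustPvalues(c(0.009, 0.015, 0.080),
              proc = "FallbackAdj",
              par = parameters(weight = c(0.5, 0.3, 0.2)))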
Mediana/R/SampleSize.R
######################################################################################################################
# Function: SampleSize.
# Argument: A list or vector of numeric.
# Description: This function is used to create an object of class SampleSize.
#' @export
SampleSize = function(sample.size) {

  # Error checks
  if (any(!is.numeric(unlist(sample.size)))) stop("SampleSize: sample size must be numeric.")
  if (any(unlist(sample.size) %% 1 != 0)) stop("SampleSize: sample size must be integer.")
  if (any(unlist(sample.size) <= 0)) stop("SampleSize: sample size must be strictly positive.")

  class(sample.size) = "SampleSize"
  return(unlist(sample.size))
  invisible(sample.size)
}
#' @export SampleSize = function(sample.size) { # Error checks if (any(!is.numeric(unlist(sample.size)))) stop("SampleSize: sample size must be numeric.") if (any(unlist(sample.size) %% 1 !=0)) stop("SampleSize: sample size must be integer.") if (any(unlist(sample.size) <=0)) stop("SampleSize: sample size must be strictly positive.") class(sample.size) = "SampleSize" return(unlist(sample.size)) invisible(sample.size) }Mediana/R/SimParameters.R0000644000176200001440000000314113434027610014707 0ustar liggesusers###################################################################################################################### # Function: SimParameters # Argument: Multiple character strings. # Description: This function is called by default. #' @export SimParameters = function(n.sims, seed, proc.load = 1) { # Error checks if (!is.numeric(n.sims)) stop("SimParameters: Number of simulation runs must be an integer.") if (length(n.sims) > 1) stop("SimParameters: Number of simulations runs: Only one value must be specified.") if (n.sims%%1 != 0) stop("SimParameters: Number of simulations runs must be an integer.") if (n.sims <= 0) stop("SimParameters: Number of simulations runs must be positive.") if (!is.numeric(seed)) stop("Seed must be an integer.") if (length(seed) > 1) stop("Seed: Only one value must be specified.") if (nchar(as.character(seed)) > 10) stop("Length of seed must be inferior to 10.") if (is.numeric(proc.load)){ if (length(proc.load) > 1) stop("SimParameters: Processor load only one value must be specified.") if (proc.load %%1 != 0) stop("SimParameters: Processor load must be an integer.") if (proc.load <= 0) stop("SimParameters: Processor load must be positive.") } else if (is.character(proc.load)){ if (!(proc.load %in% c("low", "med", "high", "full"))) stop("SimParameters: Processor load not valid") } sim.parameters = list(n.sims = n.sims, seed = seed, proc.load = proc.load) class(sim.parameters) = "SimParameters" return(sim.parameters) invisible(sim.parameters) }Mediana/R/MeanStat.R0000644000176200001440000000157613434027610013661 0ustar liggesusers###################################################################################################################### # Compute the mean based on non-missing values in the combined sample MeanStat = function(sample.list, parameter) { # Determine the function call, either to generate the statistic or to return description call = (parameter[[1]] == "Description") if (call == FALSE | is.na(call)) { # Error checks if (length(sample.list)!=1) stop("Analysis model : Only one sample must be specified in the MeanStat statistic.") sample = sample.list[[1]] # Select the outcome column and remove the missing values due to dropouts/incomplete observations outcome = sample[, "outcome"] result = mean(stats::na.omit(outcome)) } else if (call == TRUE) { result = list("Mean") } return(result) } # End of MeanStatMediana/R/tests.R0000644000176200001440000000071113434027611013276 0ustar liggesusers# Function: tests # Argument: Multiple character strings. # Description: This function is used mostly for the user's convenience. It simply creates a list of character strings and # can be used in cases where multiple tests need to be specified. #' @export tests = function(...) { args = list(...) 
  nargs = length(args)
  if (nargs <= 0) stop("Tests function: At least one test must be specified.")
  return(args)
  invisible(args)
}
Mediana/R/Section.R0000644000176200001440000000123213434027610013540 0ustar liggesusers######################################################################################################################
# Function: Section.
# Argument: by.
# Description: This function is used to create an object of class Section.
#' @export
Section = function(by) {

  # Error checks
  if (!is.character(by)) stop("Section: by must be character.")
  if (!any(by %in% c("sample.size", "event", "outcome.parameter", "design.parameter", "multiplicity.adjustment"))) stop("Section: the variables included in by are invalid.")

  section.report = list(by = by)

  class(section.report) = "Section"
  return(section.report)
  invisible(section.report)
}
Mediana/R/MultAdj.default.R0000644000176200001440000000063213434027610015120 0ustar liggesusers######################################################################################################################
# Function: MultAdj.default.
# Argument: An object that is neither a MultAdjProc nor a MultAdjStrategy object.
# Description: This function is used to create an object of class MultAdj.
#' @export
MultAdj.default = function(...) {
  stop("MultAdj: this function only accepts objects of class MultAdjProc or MultAdjStrategy")
}
Mediana/R/DataModel.OutcomeDist.R0000644000176200001440000000107613434027610016230 0ustar liggesusers######################################################################################################################
# Function: DataModel.OutcomeDist
# Argument: OutcomeDist object.
# Description: This function is called by default if the class of the argument is an OutcomeDist object.
#' @export
DataModel.OutcomeDist = function(outcome, ...) {

  datamodel = DataModel()
  datamodel = datamodel + outcome

  args = list(...)
  if (length(args) > 0) {
    for (i in 1:length(args)) {
      datamodel = datamodel + args[[i]]
    }
  }
  return(datamodel)
}
Mediana/R/HochbergAdj.global.R0000644000176200001440000000215213434027610015533 0ustar liggesusers######################################################################################################################
# Function: HochbergAdj.global.
# Argument: p, Vector of p-values (1 x m)
#           n, Total number of testable hypotheses (in the case of modified mixture procedure) (1 x 1)
#           gamma, Truncation parameter (1 x 1)
# Description: Compute the global p-value for the truncated Hochberg multiple testing procedure.
#              The function returns the global adjusted p-value (1 x 1).

HochbergAdj.global = function(p, n, gamma) {

  # Number of p-values
  k = length(p)

  if (k > 0 & n > 0) {
    if (gamma == 0) {
      # Bonferroni procedure
      adjp = n * min(p)
    } else if (gamma <= 1) {
      # Truncated Hochberg procedure
      # Index of ordered p-values
      ind = order(p, decreasing = TRUE)
      # Denominator (1 x m)
      seq = k:1
      denom = gamma/(k - seq + 1) + (1 - gamma)/n
      # Adjusted p-values
      sortp = sort(p, decreasing = TRUE)
      adjp = min(cummin(sortp/denom)[order(ind)])
    }
  } else adjp = 1
  return(adjp)
}
# End of HochbergAdj.global
Mediana/R/MVNormalDist.R0000644000176200001440000000571713434027610014465 0ustar liggesusers######################################################################################################################
# Function: MVNormalDist.
# Argument: List of parameters (number of observations, list(list(mean, SD), correlation matrix)).
# Description: This function is used to generate correlated multivariate normal outcomes.
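# Example (illustrative, not part of the original source): outcome parameters for
# two correlated normal endpoints, as they would be assembled in a data model.
#   endpoint.par = parameters(parameters(mean = 0, sd = 1), parameters(mean = 0.3, sd = 1))
#   corr.matrix = matrix(c(1.0, 0.5, 0.5, 1.0), 2, 2)
#   outcome = parameters(par = endpoint.par, corr = corr.matrix)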
MVNormalDist = function(parameter) {

  # Error checks
  if (missing(parameter)) stop("Data model: MVNormalDist distribution: List of parameters must be provided.")
  if (is.null(parameter[[2]]$par)) stop("Data model: MVNormalDist distribution: Parameter list (means and SDs) must be specified.")
  if (is.null(parameter[[2]]$corr)) stop("Data model: MVNormalDist distribution: Correlation matrix must be specified.")

  par = parameter[[2]]$par
  corr = parameter[[2]]$corr

  # Number of endpoints
  m = length(par)
  if (ncol(corr) != m) stop("Data model: MVNormalDist distribution: The size of the mean vector is different from the dimension of the correlation matrix.")
  if (sum(dim(corr) == c(m, m)) != 2) stop("Data model: MVNormalDist distribution: Correlation matrix is not correctly defined.")
  if (det(corr) <= 0) stop("Data model: MVNormalDist distribution: Correlation matrix must be positive definite.")
  if (any(corr < -1 | corr > 1)) stop("Data model: MVNormalDist distribution: Correlation values must be between -1 and 1.")

  # Determine the function call, either to generate distribution or to return description
  call = (parameter[[1]] == "description")

  # Generate random variables
  if (call == FALSE) {

    # Error checks
    n = parameter[[1]]
    if (n %% 1 != 0) stop("Data model: MVNormalDist distribution: Number of observations must be an integer.")
    if (n <= 0) stop("Data model: MVNormalDist distribution: Number of observations must be positive.")

    # Generate multivariate normal variables
    multnorm = mvtnorm::rmvnorm(n = n, mean = rep(0, m), sigma = corr)

    # Store resulting multivariate variables
    mv = matrix(0, n, m)
    for (i in 1:m) {
      if (is.null(par[[i]]$mean)) stop("Data model: MVNormalDist distribution: Mean must be specified.")
      if (is.null(par[[i]]$sd)) stop("Data model: MVNormalDist distribution: SD must be specified.")
      mean = as.numeric(par[[i]]$mean)
      sd = as.numeric(par[[i]]$sd)
      if (sd <= 0) stop("Data model: MVNormalDist distribution: Standard deviations must be positive.")
      mv[, i] = mean + sd * multnorm[, i]
    }
    result = mv

  } else {
    # Provide information about the distribution function
    if (call == TRUE) {
      # Labels of distributional parameters
      par.labels = list()
      for (i in 1:m) {
        par.labels[[i]] = list(mean = "mean", sd = "SD")
      }
      result = list(list(par = par.labels, corr = "corr"), list("Multivariate Normal"))
    }
  }
  return(result)
}
# End of MVNormalDist
Mediana/vignettes/0000755000176200001440000000000013464544414013631 5ustar liggesusersMediana/vignettes/figures/0000755000176200001440000000000013440027504015263 5ustar liggesusersMediana/vignettes/figures/CaseStudy04-fig1.png0000644000176200001440000001161513440027504020671 0ustar liggesusers
[Binary PNG image data omitted: CaseStudy04-fig1.png, a figure used in the Case study 4 vignette.]
Mediana/vignettes/figures/CaseStudy04-fig2.png0000644000176200001440000000522413440027504020672 0ustar liggesusers
[Binary PNG image data omitted: CaseStudy04-fig2.png, a figure used in the Case study 4 vignette.]
Mediana/vignettes/case-studies.Rmd0000644000176200001440000026203313440027504016662 0ustar liggesusers---
title: "Case studies"
author: "Gautier Paux and Alex Dmitrienko"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Case studies}
  %\VignetteEngine{knitr::rmarkdown}
  \usepackage[utf8]{inputenc}
---

# Introduction

Several case studies have been created to facilitate the implementation of simulation-based Clinical Scenario Evaluation (CSE) approaches in multiple settings and help the user understand individual features of the Mediana package.

Case studies are arranged in terms of increasing complexity of the underlying clinical trial setting (i.e., trial design and analysis methodology). For example, [Case study 1](#case-study-1-1) deals with a number of basic settings and increasingly more complex settings are considered in the subsequent case studies.

## Case study 1

This case study serves as a good starting point for users who are new to the Mediana package. It focuses on clinical trials with simple designs and analysis strategies where power and sample size calculations can be performed using analytical methods.

1. [Trial with two treatment arms and single endpoint (normally distributed endpoint).](#normally-distributed-endpoint)
2. [Trial with two treatment arms and single endpoint (binary endpoint).](#binary-endpoint)
3. [Trial with two treatment arms and single endpoint (survival-type endpoint).](#survival-type-endpoint)
4. [Trial with two treatment arms and single endpoint (survival-type endpoint with censoring).](#survival-type-endpoint-with-censoring)
5. [Trial with two treatment arms and single endpoint (count-type endpoint).](#count-type-endpoint)

## Case study 2

This case study is based on a **clinical trial with three or more treatment arms**. A multiplicity adjustment is required in this setting and no analytical methods are available to support power calculations.

This example also illustrates a key feature of the Mediana package, namely, the option to define custom functions; for example, it shows how the user can define a new criterion in the Evaluation Model.

[Clinical trial in patients with schizophrenia](#case-study-2-1)

## Case study 3

This case study introduces a **clinical trial with several patient populations** (marker-positive and marker-negative patients).
It demonstrates how the user can define independent samples in a data model and then specify statistical tests in an analysis model based on merging several samples, i.e., merging samples of marker-positive and marker-negative patients to carry out a test that evaluates the treatment effect in the overall population.

[Clinical trial in patients with asthma](#case-study-3-1)

## Case study 4

This case study illustrates CSE simulations in a **clinical trial with several endpoints** and helps showcase the package's ability to model multivariate outcomes in clinical trials.

[Clinical trial in patients with metastatic colorectal cancer](#case-study-4-1)

## Case study 5

This case study is based on a **clinical trial with several endpoints and multiple treatment arms** and illustrates the process of performing complex multiplicity adjustments in trials with several clinical objectives.

[Clinical trial in patients with rheumatoid arthritis](#case-study-5-1)

## Case study 6

This case study is an extension of [Case study 2](#case-study-2-1) and illustrates how the package can be used to assess the performance of several multiplicity adjustments. The case study also walks the reader through the process of defining customized simulation reports.

[Clinical trial in patients with schizophrenia](#case-study-6-1)

# Case study 1

Case study 1 deals with a simple setting, namely, a clinical trial with two treatment arms (experimental treatment versus placebo) and a single endpoint. Power calculations can be performed analytically in this setting. Specifically, closed-form expressions for the power function can be derived using the central limit theorem or other approximations.

Several distributions will be illustrated in this case study:

- [Normally distributed endpoint](#normally-distributed-endpoint)
- [Binary endpoint](#binary-endpoint)
- [Survival-type endpoint](#survival-type-endpoint)
- [Survival-type endpoint (with censoring)](#survival-type-endpoint-with-censoring)
- [Count-type endpoint](#count-type-endpoint)

## Normally distributed endpoint

Suppose that a sponsor is designing a Phase III clinical trial in patients with pulmonary arterial hypertension (PAH). The efficacy of experimental treatments for PAH is commonly evaluated using a six-minute walk test and the primary endpoint is defined as the change from baseline to the end of the 16-week treatment period in the six-minute walk distance.

### Define a Data Model

The first step is to initialize the data model:

```r
case.study1.data.model = DataModel()
```

After the initialization, components of the data model can be added to the `DataModel` object incrementally using the `+` operator.

The change from baseline in the six-minute walk distance is assumed to follow a normal distribution. The distribution of the primary endpoint is defined in the `OutcomeDist` object:

```r
case.study1.data.model = case.study1.data.model +
  OutcomeDist(outcome.dist = "NormalDist")
```

The sponsor would like to perform power evaluation over a broad range of sample sizes in each treatment arm:

```r
case.study1.data.model = case.study1.data.model +
  SampleSize(c(50, 55, 60, 65, 70))
```

As a side note, the `seq` function can be used to compactly define sample sizes in a data model:

```r
case.study1.data.model = case.study1.data.model +
  SampleSize(seq(50, 70, 5))
```

The sponsor is interested in performing power calculations under two treatment effect scenarios (standard and optimistic scenarios).
Under these scenarios, the experimental treatment is expected to improve the six-minute walk distance by 40 or 50 meters compared to placebo, respectively, with the common standard deviation of 70 meters.

Therefore, the mean change in the placebo arm is set to μ = 0 and the mean changes in the six-minute walk distance in the experimental arm are set to μ = 40 (standard scenario) or μ = 50 (optimistic scenario). The common standard deviation is σ = 70.

```r
# Outcome parameter set 1 (standard scenario)
outcome1.placebo = parameters(mean = 0, sd = 70)
outcome1.treatment = parameters(mean = 40, sd = 70)

# Outcome parameter set 2 (optimistic scenario)
outcome2.placebo = parameters(mean = 0, sd = 70)
outcome2.treatment = parameters(mean = 50, sd = 70)
```

Note that the mean and standard deviation are explicitly identified in each list. This is done mainly for the user's convenience.

After having defined the outcome parameters for each sample, two `Sample` objects that define the two treatment arms in this trial can be created and added to the `DataModel` object:

```r
case.study1.data.model = case.study1.data.model +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome1.placebo, outcome2.placebo)) +
  Sample(id = "Treatment",
         outcome.par = parameters(outcome1.treatment, outcome2.treatment))
```

### Define an Analysis Model

Just like the data model, the analysis model needs to be initialized as follows:

```r
case.study1.analysis.model = AnalysisModel()
```

Only one significance test is planned to be carried out in the PAH clinical trial (treatment versus placebo). The treatment effect will be assessed using the one-sided two-sample *t*-test:

```r
case.study1.analysis.model = case.study1.analysis.model +
  Test(id = "Placebo vs treatment",
       samples = samples("Placebo", "Treatment"),
       method = "TTest")
```

According to the specifications, the two-sample t-test will be applied to Sample 1 (Placebo) and Sample 2 (Treatment). These sample IDs come from the data model defined earlier. As explained in the manual, see [Analysis Model](http://gpaux.github.io/Mediana/AnalysisModel.html), the sample order is determined by the expected direction of the treatment effect. In this case, an increase in the six-minute walk distance indicates a beneficial effect and a numerically larger value of the primary endpoint is expected in Sample 2 (Treatment) compared to Sample 1 (Placebo). This implies that the list of samples to be passed to the t-test should include Sample 1 followed by Sample 2. Note that, from version 1.0.6, it is possible to specify an option indicating whether a larger numeric value is expected in Sample 2 (`larger = TRUE`) or in Sample 1 (`larger = FALSE`). By default, this argument is set to `TRUE`.

To illustrate the use of the `Statistic` object, the mean change in the six-minute walk distance in the treatment arm can be computed using the `MeanStat` statistic:

```r
case.study1.analysis.model = case.study1.analysis.model +
  Statistic(id = "Mean Treatment",
            method = "MeanStat",
            samples = samples("Treatment"))
```

### Define an Evaluation Model

The data and analysis models specified above collectively define the Clinical Scenarios to be examined in the PAH clinical trial. The scenarios are evaluated using success criteria or metrics that are aligned with the clinical objectives of the trial. In this case it is most appropriate to use regular power or, more formally, *marginal power*. This success criterion is specified in the evaluation model.
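Because this two-arm design is simple enough for closed-form calculations, the simulated marginal power can be cross-checked analytically. The sketch below is an illustration rather than part of the original case study; it uses base R's `power.t.test` under the standard scenario with 50 patients per arm, and the result should be close to the simulated marginal power reported for that scenario.

```r
# Analytical power for the one-sided two-sample t-test (standard scenario):
# mean difference of 40, common SD of 70, 50 patients per arm, alpha = 0.025
power.t.test(n = 50, delta = 40, sd = 70, sig.level = 0.025,
             type = "two.sample", alternative = "one.sided")$power
```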
First of all, the evaluation model must be initialized:

```r
case.study1.evaluation.model = EvaluationModel()
```

Secondly, the success criterion of interest (marginal power) is defined using the `Criterion` object:

```r
case.study1.evaluation.model = case.study1.evaluation.model +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs treatment"),
            labels = c("Placebo vs treatment"),
            par = parameters(alpha = 0.025))
```

The `tests` argument lists the IDs of the tests (defined in the analysis model) to which the criterion is applied (note that more than one test can be specified). The test IDs link the evaluation model with the corresponding analysis model. In this particular case, marginal power will be computed for the t-test that compares the mean change in the six-minute walk distance in the placebo and treatment arms (Placebo vs treatment).

In order to compute the average value of the mean statistic specified in the analysis model (i.e., the mean change in the six-minute walk distance in the treatment arm) over the simulation runs, another `Criterion` object needs to be added:

```r
case.study1.evaluation.model = case.study1.evaluation.model +
  Criterion(id = "Average Mean",
            method = "MeanSumm",
            statistics = statistics("Mean Treatment"),
            labels = c("Average Mean Treatment"))
```

The `statistics` argument of this `Criterion` object lists the ID of the statistic (defined in the analysis model) to which this metric is applied (e.g., `Mean Treatment`).

### Perform Clinical Scenario Evaluation

After the clinical scenarios (data and analysis models) and evaluation model have been defined, the user is ready to evaluate the success criteria specified in the evaluation model by calling the `CSE` function.

To accomplish this, the simulation parameters need to be defined in a `SimParameters` object:

```r
# Simulation parameters
case.study1.sim.parameters = SimParameters(n.sims = 1000,
                                           proc.load = "full",
                                           seed = 42938001)
```

The function call for `CSE` specifies the individual components of Clinical Scenario Evaluation in this case study as well as the simulation parameters:

```r
# Perform clinical scenario evaluation
case.study1.results = CSE(case.study1.data.model,
                          case.study1.analysis.model,
                          case.study1.evaluation.model,
                          case.study1.sim.parameters)
```

The simulation results are saved in a `CSE` object (`case.study1.results`). This object contains complete information about this particular evaluation, including the data, analysis and evaluation models specified by the user. The most important component of this object is the data frame contained in the list named *simulation.results* (`case.study1.results$simulation.results`). This data frame includes the values of the success criteria and metrics defined in the evaluation model.

### Summarize the Simulation Results

#### Summary of simulation results in R console

To facilitate the review of the simulation results produced by the `CSE` function, the user can invoke the `summary` function.
This function displays the data frame containing the simulation results in the R console:

```r
# Print the simulation results in the R console
summary(case.study1.results)
```

If the user is interested in generating graphical summaries of the simulation results (using the [ggplot2](https://ggplot2.tidyverse.org/) package or other packages), this data frame can also be saved to an object:

```r
# Save the simulation results to an object
case.study1.simulation.results = summary(case.study1.results)
```

#### Generate a Simulation Report

##### Presentation Model

A very useful feature of the Mediana package is the generation of a Microsoft Word-based report that provides a summary of the Clinical Scenario Evaluation.

To generate a simulation report, the user needs to define a presentation model by creating a `PresentationModel` object. This object must be initialized as follows:

```r
case.study1.presentation.model = PresentationModel()
```

Project information can be added to the presentation model using the `Project` object:

```r
case.study1.presentation.model = case.study1.presentation.model +
  Project(username = "[Mediana's User]",
          title = "Case study 1",
          description = "Clinical trial in patients with pulmonary arterial hypertension")
```

The user can easily customize the simulation report by defining report sections and specifying properties of summary tables in the report. The code shown below creates a separate section within the report for each set of outcome parameters (using the `Section` object) and sets the sorting option for the summary tables (using the `Table` object). The tables will be sorted by the sample size. Further, in order to define descriptive labels for the outcome parameter scenarios and sample size scenarios, the `CustomLabel` object needs to be used:

```r
case.study1.presentation.model = case.study1.presentation.model +
  Section(by = "outcome.parameter") +
  Table(by = "sample.size") +
  CustomLabel(param = "sample.size",
              label = paste0("N = ", c(50, 55, 60, 65, 70))) +
  CustomLabel(param = "outcome.parameter",
              label = c("Standard", "Optimistic"))
```

##### Report generation

Once the presentation model has been defined, the simulation report is ready to be generated using the `GenerateReport` function:

```r
# Report Generation
GenerateReport(presentation.model = case.study1.presentation.model,
               cse.results = case.study1.results,
               report.filename = "Case study 1 (normally distributed endpoint).docx")
```

### Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%201%20(normally%20distributed%20endpoint).R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%201%20(normally%20distributed%20endpoint).docx)

## Binary endpoint

Consider a Phase III clinical trial for the treatment of rheumatoid arthritis (RA). The primary endpoint is the response rate based on the American College of Rheumatology (ACR) definition of improvement.
The trial's sponsor is interested in performing power calculations using several treatment effect assumptions (Placebo 30% - Treatment 50%, Placebo 30% - Treatment 55% and Placebo 30% - Treatment 60%).

### Define a Data Model

The three outcome parameter sets listed above are combined with four sample size sets (`SampleSize(c(80, 90, 100, 110))`) and the distribution of the primary endpoint (`OutcomeDist(outcome.dist = "BinomDist")`) is specified in the `DataModel` object `case.study1.data.model`:

```r
# Outcome parameter set 1
outcome1.placebo = parameters(prop = 0.30)
outcome1.treatment = parameters(prop = 0.50)

# Outcome parameter set 2
outcome2.placebo = parameters(prop = 0.30)
outcome2.treatment = parameters(prop = 0.55)

# Outcome parameter set 3
outcome3.placebo = parameters(prop = 0.30)
outcome3.treatment = parameters(prop = 0.60)

# Data model
case.study1.data.model = DataModel() +
  OutcomeDist(outcome.dist = "BinomDist") +
  SampleSize(c(80, 90, 100, 110)) +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome1.placebo, outcome2.placebo, outcome3.placebo)) +
  Sample(id = "Treatment",
         outcome.par = parameters(outcome1.treatment, outcome2.treatment, outcome3.treatment))
```

### Define an Analysis Model

The analysis model uses a standard two-sample test for comparing proportions (`method = "PropTest"`) to assess the treatment effect in this clinical trial example:

```r
# Analysis model
case.study1.analysis.model = AnalysisModel() +
  Test(id = "Placebo vs treatment",
       samples = samples("Placebo", "Treatment"),
       method = "PropTest")
```

### Define an Evaluation Model

Power evaluations are easily performed in this clinical trial example using the same evaluation model utilized in the case of a normally distributed endpoint, i.e., evaluations rely on marginal power:

```r
# Evaluation model
case.study1.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs treatment"),
            labels = c("Placebo vs treatment"),
            par = parameters(alpha = 0.025))
```

An extension of this clinical trial example is provided in [Case study 5](#case-study-5-1). The extension deals with a more complex setting involving several trial endpoints and multiple treatment arms.

### Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%201%20(binary%20endpoint).R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%201%20(binary%20endpoint).docx)

## Survival-type endpoint

If the trial's primary objective is formulated in terms of analyzing the time to a clinically important event (progression or death in an oncology setting), data and analysis models can be set up based on an exponential distribution and the log-rank test.

As an illustration, consider a Phase III trial which will be conducted to evaluate the efficacy of a new treatment for metastatic colorectal cancer (MCC). Patients will be randomized in a 2:1 ratio to an experimental treatment or placebo (in addition to best supportive care). The trial's primary objective is to assess the effect of the experimental treatment on progression-free survival (PFS).

### Define a Data Model

A single treatment effect scenario is considered in this clinical trial example. Specifically, the median time to progression is assumed to be:

- Placebo: t0 = 6 months,
- Treatment: t1 = 9 months.
Under an exponential distribution assumption (which is specified using the `ExpoDist` distribution), the median times correspond to the following hazard rates:

- λ0 = log(2)/t0 = 0.116,
- λ1 = log(2)/t1 = 0.077,

and the resulting hazard ratio (HR) is 0.077/0.116 = 0.67.

```r
# Outcome parameters
median.time.placebo = 6
rate.placebo = log(2)/median.time.placebo
outcome.placebo = parameters(rate = rate.placebo)

median.time.treatment = 9
rate.treatment = log(2)/median.time.treatment
outcome.treatment = parameters(rate = rate.treatment)
```

It is important to note that, if no censoring mechanisms are specified in a data model with a time-to-event endpoint, all patients will reach the endpoint of interest (e.g., progression) and thus the number of patients will be equal to the number of events. Using this property, power calculations can be performed using either the `Event` object or the `SampleSize` object. For the purpose of illustration, the `Event` object will be used in this example.

To define a data model in the MCC clinical trial, the total event count in the trial is assumed to range between 270 and 300. Since the trial's design is not balanced, the randomization ratio needs to be specified in the `Event` object:

```r
# Number of events parameters
event.count.total = c(270, 300)
randomization.ratio = c(1, 2)

# Data model
case.study1.data.model = DataModel() +
  OutcomeDist(outcome.dist = "ExpoDist") +
  Event(n.events = event.count.total, rando.ratio = randomization.ratio) +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome.placebo)) +
  Sample(id = "Treatment",
         outcome.par = parameters(outcome.treatment))
```

It is worth noting that the primary endpoint's type (i.e., the `outcome.type` argument in the `OutcomeDist` object) is not specified. By default, the outcome type is set to `fixed`, which means that a design with a fixed follow-up is assumed even though the primary endpoint in this clinical trial is clearly a time-to-event endpoint. This is due to the fact that, as was explained earlier in this case study, there is no censoring in this design and all patients are followed until the event of interest is observed. It is easy to verify that the same results are obtained if the outcome type is set to `event`.

### Define an Analysis Model

The analysis model in this clinical trial is very similar to the analysis models defined in the case studies with normal and binomial outcome variables. The only difference is the choice of the statistical method utilized in the primary analysis (`method = "LogrankTest"`):

```r
# Analysis model
case.study1.analysis.model = AnalysisModel() +
  Test(id = "Placebo vs treatment",
       samples = samples("Placebo", "Treatment"),
       method = "LogrankTest")
```

To illustrate the specification of a `Statistic` object, the hazard ratio will be computed using the Cox method. This can be accomplished by adding a `Statistic` object to the `AnalysisModel` as presented below.

```r
# Analysis model
case.study1.analysis.model = case.study1.analysis.model +
  Statistic(id = "Hazard Ratio",
            samples = samples("Placebo", "Treatment"),
            method = "HazardRatioStat",
            par = parameters(method = "Cox"))
```

### Define an Evaluation Model

An evaluation model identical to that used earlier in the case studies with normal and binomial distributions can be applied to compute the power function at the selected event counts. Moreover, the average hazard ratio across the simulations will be computed.
```r
# Evaluation model
case.study1.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs treatment"),
            labels = c("Placebo vs treatment"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Hazard Ratio",
            method = "MeanSumm",
            statistics = statistics("Hazard Ratio"),
            labels = c("Average Hazard Ratio"))
```

### Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%201%20(survival-type%20endpoint).R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%201%20(survival-type%20endpoint).docx)

## Survival-type endpoint (with censoring)

The power calculations presented in the previous case study assume an idealized setting where each patient is followed until the event of interest (e.g., progression) is observed. In this case, the sample size (number of patients) in each treatment arm is equal to the number of events. In reality, events are often censored and a sponsor is generally interested in determining the number of patients to be recruited in order to ensure a target number of events, which translates into desirable power.

The Mediana package can be used to perform power calculations in event-driven trials in the presence of censoring. This is accomplished by setting up design parameters such as the length of the enrollment and follow-up periods in a data model using a `Design` object.

In general, even though closed-form solutions have been derived for sample size calculations in event-driven designs, the available approaches force clinical trial researchers to make a variety of simplifying assumptions, e.g., assumptions on the enrollment distribution are commonly made, see, for example, Julious (2009, Chapter 15). A general simulation-based approach to power and sample size calculations implemented in the Mediana package enables clinical trial sponsors to remove these artificial restrictions and examine a very broad set of plausible design parameters.

### Define a Data Model

Suppose, for example, that a standard design with a variable follow-up will be used in the MCC trial introduced in the previous case study. The total study duration will be 21 months, which includes a 9-month enrollment (accrual) period and a minimum follow-up of 12 months. The patients are assumed to be recruited at a uniform rate. The set of design parameters also includes the dropout distribution and its parameters. In this clinical trial, the dropout distribution is exponential with a rate determined from historical data. These design parameters are specified in a `Design` object:

```r
# Dropout parameters
dropout.par = parameters(rate = 0.0115)

# Design parameters
case.study1.design = Design(enroll.period = 9,
                            study.duration = 21,
                            enroll.dist = "UniformDist",
                            dropout.dist = "ExpoDist",
                            dropout.dist.par = dropout.par)
```

Finally, the primary endpoint's type is set to `event` in the `OutcomeDist` object to indicate that a variable follow-up will be utilized in this clinical trial.
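As noted above, analytical approaches in this setting rely on simplifying assumptions. For intuition, the probability that an individual patient's event is observed can still be written in closed form under the design parameters just specified (uniform enrollment over a 9-month period, 21-month total duration, exponential event and dropout times). The sketch below is an illustration only and is not part of the original case study; `observed.event.prob` is a hypothetical helper function.

```r
# Probability that a patient's event is observed before dropout and before the
# end of the study, assuming uniform enrollment over (0, a), an exponential
# event hazard (lambda) and an exponential dropout hazard (eta)
observed.event.prob = function(lambda, eta, a, total) {
  theta = lambda + eta
  (lambda / theta) * (1 - (exp(-theta * (total - a)) - exp(-theta * total)) / (theta * a))
}

# Treatment arm: hazard rate for a 9-month median and the dropout rate defined above
observed.event.prob(lambda = log(2)/9, eta = 0.0115, a = 9, total = 21)
```

Multiplying this probability by the number of enrolled patients gives a rough expected event count; the simulations below estimate the same quantity (via `EventCountStat`) without these simplifying assumptions.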
The complete data model in this case study is defined as follows:

```r
# Number of events parameters
event.count.total = c(390, 420)
randomization.ratio = c(1, 2)

# Outcome parameters
median.time.placebo = 6
rate.placebo = log(2)/median.time.placebo
outcome.placebo = parameters(rate = rate.placebo)

median.time.treatment = 9
rate.treatment = log(2)/median.time.treatment
outcome.treatment = parameters(rate = rate.treatment)

# Dropout parameters
dropout.par = parameters(rate = 0.0115)

# Data model
case.study1.data.model = DataModel() +
  OutcomeDist(outcome.dist = "ExpoDist", outcome.type = "event") +
  Event(n.events = event.count.total, rando.ratio = randomization.ratio) +
  Design(enroll.period = 9,
         study.duration = 21,
         enroll.dist = "UniformDist",
         dropout.dist = "ExpoDist",
         dropout.dist.par = dropout.par) +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome.placebo)) +
  Sample(id = "Treatment",
         outcome.par = parameters(outcome.treatment))
```

### Define an Analysis Model

Since the number of events has been fixed in this clinical trial example and some patients will not reach the event of interest, it will be important to estimate the number of patients needed to accrue the required number of events. In the Mediana package, this can be accomplished by specifying a descriptive statistic named `PatientCountStat` (this statistic needs to be specified in a `Statistic` object). Another descriptive statistic that would be of interest is the event count in each sample. To compute this statistic, `EventCountStat` needs to be included in a `Statistic` object.

```r
# Analysis model
case.study1.analysis.model = AnalysisModel() +
  Test(id = "Placebo vs treatment",
       samples = samples("Placebo", "Treatment"),
       method = "LogrankTest") +
  Statistic(id = "Events Placebo",
            samples = samples("Placebo"),
            method = "EventCountStat") +
  Statistic(id = "Events Treatment",
            samples = samples("Treatment"),
            method = "EventCountStat") +
  Statistic(id = "Patients Placebo",
            samples = samples("Placebo"),
            method = "PatientCountStat") +
  Statistic(id = "Patients Treatment",
            samples = samples("Treatment"),
            method = "PatientCountStat")
```

### Define an Evaluation Model

In order to compute the average values of the two statistics (`PatientCountStat` and `EventCountStat`) in each sample over the simulation runs, two `Criterion` objects need to be specified, in addition to the `Criterion` object defined to obtain marginal power.
The IDs of the corresponding `Statistic` objects will be included in the `statistics` argument of the two `Criterion` objects:

```r
# Evaluation model
case.study1.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs treatment"),
            labels = c("Placebo vs treatment"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Mean Events",
            method = "MeanSumm",
            statistics = statistics("Events Placebo", "Events Treatment"),
            labels = c("Mean Events Placebo", "Mean Events Treatment")) +
  Criterion(id = "Mean Patients",
            method = "MeanSumm",
            statistics = statistics("Patients Placebo", "Patients Treatment"),
            labels = c("Mean Patients Placebo", "Mean Patients Treatment"))
```

### Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%201%20(survival-type%20endpoint%20with%20censoring).R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%201%20(survival-type%20endpoint%20with%20censoring).docx)

## Count-type endpoint

The last clinical trial example within Case study 1 deals with a Phase III clinical trial in patients with relapsing-remitting multiple sclerosis (RRMS). The trial aims at assessing the safety and efficacy of a single dose of a novel treatment compared to placebo. The primary endpoint is the number of new gadolinium enhancing lesions seen during a 6-month period on monthly MRIs of the brain and a smaller number indicates treatment benefit.

The distribution of such endpoints has been widely studied in the literature and Sormani et al. ([1999a](http://www.jns-journal.com/article/S0022-510X(99)00015-5/abstract), [1999b](http://jnnp.bmj.com/content/66/4/465.long)) showed that a negative binomial distribution provides a fairly good fit.

The list below gives the expected treatment effect in the experimental treatment and placebo arms (note that the negative binomial distribution is parameterized using the mean rather than the probability of success). The mean number of new lesions is set to 13 in the placebo arm and 7.8 in the treatment arm, with a common dispersion parameter of 0.5. The corresponding treatment effect, i.e., the relative reduction in the mean number of new lesion counts, is 100 * (13 − 7.8)/13 = 40%. These assumptions define a single outcome parameter set.

### Define a Data Model

The `OutcomeDist` object defines the distribution of the trial endpoint (`NegBinomDist`). Further, a balanced design is utilized in this clinical trial and the range of sample sizes is defined in the `SampleSize` object (it is convenient to do this using the `seq` function). The `Sample` object includes the parameters required by the negative binomial distribution (dispersion and mean).

```r
# Outcome parameters
outcome.placebo = parameters(dispersion = 0.5, mean = 13)
outcome.treatment = parameters(dispersion = 0.5, mean = 7.8)

# Data model
case.study1.data.model = DataModel() +
  OutcomeDist(outcome.dist = "NegBinomDist") +
  SampleSize(seq(100, 150, 10)) +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome.placebo)) +
  Sample(id = "Treatment",
         outcome.par = parameters(outcome.treatment))
```

### Define an Analysis Model

The treatment effect will be assessed in this clinical trial example using a negative binomial generalized linear model (NBGLM).
In the Mediana package, the corresponding test is carried out using the `GLMNegBinomTest` method which is specified in the `Test` object. It should be noted that, as a smaller value indicates a treatment benefit, the first sample defined in the `samples` argument must be `Treatment`.

```r
# Analysis model
case.study1.analysis.model = AnalysisModel() +
  Test(id = "Treatment vs Placebo",
       samples = samples("Treatment", "Placebo"),
       method = "GLMNegBinomTest")
```

Alternatively, from version 1.0.6, it is possible to specify the argument `larger` in the parameters of the method. If set to `FALSE`, a numerically lower value is expected in Sample 2.

```r
# Analysis model
case.study1.analysis.model = AnalysisModel() +
  Test(id = "Treatment vs Placebo",
       samples = samples("Placebo", "Treatment"),
       method = "GLMNegBinomTest",
       par = parameters(larger = FALSE))
```

### Define an Evaluation Model

The objective of this clinical trial is identical to that of the clinical trials presented earlier on this page, i.e., evaluation will be based on marginal power of the primary endpoint test. As a consequence, the same evaluation model can be applied.

```r
# Evaluation model
case.study1.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Treatment vs Placebo"),
            labels = c("Treatment vs Placebo"),
            par = parameters(alpha = 0.025))
```

### Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%201%20(count-type%20endpoint).R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%201%20(count-type%20endpoint).docx)

# Case study 2

## Summary

This clinical trial example deals with settings where no analytical methods are available to support power calculations. However, as demonstrated below, simulation-based approaches are easily applied to perform a comprehensive assessment of the relevant operating characteristics within the clinical scenario evaluation framework.

Case study 2 is based on a clinical trial example introduced in Dmitrienko and D'Agostino (2013, Section 10). This example deals with a Phase III clinical trial in a schizophrenia population. Three doses of a new treatment, labelled Dose L, Dose M and Dose H, will be tested versus placebo. The trial will be declared successful if a beneficial treatment effect is demonstrated in any of the three dosing groups compared to the placebo group.

The primary endpoint is defined as the reduction in the Positive and Negative Syndrome Scale (PANSS) total score compared to baseline and a larger reduction in the PANSS total score indicates treatment benefit. This endpoint is normally distributed and the treatment effect assumptions in the four treatment arms are displayed in the next table.

```{r, results = "asis", echo = FALSE}
pander::pandoc.table(data.frame(Arm = c("Placebo", "Dose L", "Dose M", "Dose H"),
                                Mean = c(16, 19.5, 21, 21),
                                SD = rep(18, 4)))
```

## Define a Data Model

The treatment effect assumptions presented in the table above define a single outcome parameter set, and several common sample sizes (220, 240 and 260 patients) are considered.
These parameters are specified in the following data model:

```r
# Outcome parameters
outcome.pl = parameters(mean = 16, sd = 18)
outcome.dosel = parameters(mean = 19.5, sd = 18)
outcome.dosem = parameters(mean = 21, sd = 18)
outcome.doseh = parameters(mean = 21, sd = 18)

# Data model
case.study2.data.model = DataModel() +
  OutcomeDist(outcome.dist = "NormalDist") +
  SampleSize(seq(220, 260, 20)) +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome.pl)) +
  Sample(id = "Dose L",
         outcome.par = parameters(outcome.dosel)) +
  Sample(id = "Dose M",
         outcome.par = parameters(outcome.dosem)) +
  Sample(id = "Dose H",
         outcome.par = parameters(outcome.doseh))
```

## Define an Analysis Model

The analysis model, shown below, defines the three individual tests that will be carried out in the schizophrenia clinical trial. Each test corresponds to a dose-placebo comparison as follows:

- H1: Null hypothesis of no difference between Dose L and placebo.
- H2: Null hypothesis of no difference between Dose M and placebo.
- H3: Null hypothesis of no difference between Dose H and placebo.

Each comparison will be carried out based on a one-sided two-sample *t*-test (`TTest` method defined in each `Test` object).

As indicated earlier, the overall success criterion in the trial is formulated in terms of demonstrating a beneficial effect at any of the three doses. Due to multiple opportunities to claim success, the overall Type I error rate will be inflated and the Hochberg procedure is introduced to protect the error rate at the nominal level.

Since no procedure parameters are defined, the three significance tests (or, equivalently, three null hypotheses of no effect) are assumed to be equally weighted. The corresponding analysis model is defined below:

```r
# Analysis model
case.study2.analysis.model = AnalysisModel() +
  MultAdjProc(proc = "HochbergAdj") +
  Test(id = "Placebo vs Dose L",
       samples = samples("Placebo", "Dose L"),
       method = "TTest") +
  Test(id = "Placebo vs Dose M",
       samples = samples("Placebo", "Dose M"),
       method = "TTest") +
  Test(id = "Placebo vs Dose H",
       samples = samples("Placebo", "Dose H"),
       method = "TTest")
```

To request the Hochberg procedure with unequally weighted hypotheses, the user needs to assign a list of hypothesis weights to the `par` argument of the `MultAdjProc` object. The weights typically reflect the relative importance of the individual null hypotheses. Assume, for example, that 60% of the overall weight is assigned to H3 and the remainder is split between H1 and H2. In this case, the `MultAdjProc` object should be defined as follows:

```r
MultAdjProc(proc = "HochbergAdj",
            par = parameters(weight = c(0.2, 0.2, 0.6)))
```

It should be noted that the order of the weights must be identical to the order of the `Test` objects defined in the analysis model.

## Define an Evaluation Model

An evaluation model specifies clinically relevant criteria for assessing the performance of the individual tests defined in the corresponding analysis model or composite measures of success. In virtually any setting, it is of interest to compute the probability of achieving a significant outcome in each individual test, e.g., the probability of a significant difference between placebo and each dose. This is accomplished by requesting a `Criterion` object with `method = "MarginalPower"`.
Since the trial will be declared successful if at least one dose-placebo comparison is significant, it is natural to compute the overall success probability, which is defined as the probability of demonstrating treatment benefit in one or more dosing groups. This is equivalent to evaluating disjunctive power in the trial (`method = "DisjunctivePower"`).

In addition, the user can easily define a custom evaluation criterion. Suppose that, based on the results of the previously conducted trials, the sponsor expects a much larger treatment difference at Dose H compared to Doses L and M. Given this, the sponsor may be interested in evaluating the probability of observing a significant treatment effect at Dose H and at least one other dose. The associated evaluation criterion is implemented in the following function:

```r
# Custom evaluation criterion (Dose H and at least one of the two other doses are significant)
case.study2.criterion = function(test.result, statistic.result, parameter) {
  alpha = parameter
  significant = ((test.result[,3] <= alpha) & ((test.result[,1] <= alpha) | (test.result[,2] <= alpha)))
  power = mean(significant)
  return(power)
}
```

The function's first argument (`test.result`) is a matrix of p-values produced by the `Test` objects defined in the analysis model and the second argument (`statistic.result`) is a matrix of results produced by the `Statistic` objects defined in the analysis model. In this example, the criterion will only use the `test.result` argument, which will contain the p-values produced by the tests associated with the three dose-placebo comparisons. The last argument (`parameter`) contains the optional parameter(s) defined by the user in the `Criterion` object. In this example, the `par` argument contains the overall alpha level.

The `case.study2.criterion` function computes the probability of a significant treatment effect at Dose H (`test.result[,3] <= alpha`) and a significant treatment difference at Dose L or Dose M (`(test.result[,1] <= alpha) | (test.result[,2] <= alpha)`). Since this criterion assumes that the third test is based on the comparison of Dose H versus Placebo, the order in which the tests are included in the evaluation model is important.

The following evaluation model specifies marginal and disjunctive power as well as the custom evaluation criterion defined above:

```r
# Evaluation model
case.study2.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs Dose L", "Placebo vs Dose M", "Placebo vs Dose H"),
            labels = c("Placebo vs Dose L", "Placebo vs Dose M", "Placebo vs Dose H"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Disjunctive power",
            method = "DisjunctivePower",
            tests = tests("Placebo vs Dose L", "Placebo vs Dose M", "Placebo vs Dose H"),
            labels = "Disjunctive power",
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Dose H and at least one dose",
            method = "case.study2.criterion",
            tests = tests("Placebo vs Dose L", "Placebo vs Dose M", "Placebo vs Dose H"),
            labels = "Dose H and at least one of the two other doses are significant",
            par = parameters(alpha = 0.025))
```

Another potential option is to apply the conjunctive criterion which is met if a significant treatment difference is detected simultaneously in all three dosing groups (`method = "ConjunctivePower"`). This criterion helps characterize the likelihood of a consistent treatment effect across the doses.
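Returning to the custom criterion defined above, its behavior can be examined by applying it directly to a small matrix of p-values outside a simulation. The values below are made up purely for illustration; the columns correspond to the Dose L, Dose M and Dose H tests, in that order.

```r
# Toy p-value matrix representing four simulation runs of the three tests
toy.pvalues = matrix(c(0.030, 0.010, 0.001,   # Dose H and Dose M significant
                       0.200, 0.150, 0.020,   # only Dose H significant
                       0.010, 0.300, 0.015,   # Dose H and Dose L significant
                       0.400, 0.350, 0.300),  # no significant tests
                     nrow = 4, byrow = TRUE)
# Two of the four runs meet the criterion, so the function returns 0.5
case.study2.criterion(toy.pvalues, statistic.result = NULL, parameter = 0.025)
```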
The user can also use the `tests` argument to choose the specific tests to which the disjunctive and conjunctive criteria are applied (the resulting criteria are known as subset disjunctive and conjunctive criteria). To illustrate, the following statement computes the probability of a significant treatment effect at Dose M or Dose H (Dose L is excluded from this calculation):

```r
Criterion(id = "Disjunctive power",
          method = "DisjunctivePower",
          tests = tests("Placebo vs Dose M", "Placebo vs Dose H"),
          labels = "Disjunctive power",
          par = parameters(alpha = 0.025))
```

## Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded on the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%202.R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%202.docx)

# Case study 3

## Summary

This case study deals with a Phase III clinical trial in patients with mild or moderate asthma (it is based on a clinical trial example from [Millen et al., 2014, Section 2.2](http://dij.sagepub.com/content/48/4/453.abstract)). The trial is intended to support a tailoring strategy. In particular, the treatment effect of a single dose of a new treatment will be compared to that of placebo in the overall population of patients as well as a pre-specified subpopulation of patients with a marker-positive status at baseline (for compactness, the overall population is denoted by OP, the marker-positive subpopulation is denoted by M+ and the marker-negative subpopulation is denoted by M−).

Marker-positive patients are more likely to receive benefit from the experimental treatment. The overall objective of the clinical trial accounts for the fact that the treatment's effect may, in fact, be limited to the marker-positive subpopulation.

The trial will be declared successful if the treatment's beneficial effect is established in the overall population of patients or, alternatively, the effect is established only in the subpopulation.

The primary endpoint in the clinical trial is defined as an increase from baseline in the forced expiratory volume in one second (FEV1). This endpoint is normally distributed and improvement is associated with a larger change in FEV1.

## Define a Data Model

To set up a data model for this clinical trial, it is natural to define samples (mutually exclusive groups of patients) as follows:

- **Sample 1:** Marker-negative patients in the placebo arm.
- **Sample 2:** Marker-positive patients in the placebo arm.
- **Sample 3:** Marker-negative patients in the treatment arm.
- **Sample 4:** Marker-positive patients in the treatment arm.

Using this definition of samples, the trial's sponsor can model the fact that the treatment's effect is most pronounced in patients with a marker-positive status.

The treatment effect assumptions in the four samples are summarized in the next table (expiratory volume in FEV1 is measured in liters). As shown in the table, the mean change in FEV1 is constant across the marker-negative and marker-positive subpopulations in the placebo arm (Samples 1 and 2). A positive treatment effect is expected in both subpopulations in the treatment arm but marker-positive patients will experience most of the beneficial effect (Sample 4).
```{r, results = "asis", echo = FALSE}
pander::pandoc.table(data.frame(Sample = c("Placebo M-", "Placebo M+", "Treatment M-", "Treatment M+"),
                                Mean = c(0.12, 0.12, 0.24, 0.30),
                                SD = rep(0.45, 4)))
```

The following data model incorporates the assumptions listed above by defining a single set of outcome parameters. The data model includes three sample size sets (total sample size is set to 330, 340 and 350 patients). The sizes of the individual samples are computed based on historic information (40% of patients in the population of interest are expected to have a marker-positive status). In order to define specific sample sizes for each sample, they are specified within each `Sample` object.

```r
# Outcome parameters
outcome.placebo.minus = parameters(mean = 0.12, sd = 0.45)
outcome.placebo.plus = parameters(mean = 0.12, sd = 0.45)
outcome.treatment.minus = parameters(mean = 0.24, sd = 0.45)
outcome.treatment.plus = parameters(mean = 0.30, sd = 0.45)

# Sample size parameters
sample.size.total = c(330, 340, 350)
sample.size.placebo.minus = as.list(0.3 * sample.size.total)
sample.size.placebo.plus = as.list(0.2 * sample.size.total)
sample.size.treatment.minus = as.list(0.3 * sample.size.total)
sample.size.treatment.plus = as.list(0.2 * sample.size.total)

# Data model
case.study3.data.model = DataModel() +
  OutcomeDist(outcome.dist = "NormalDist") +
  Sample(id = "Placebo M-",
         sample.size = sample.size.placebo.minus,
         outcome.par = parameters(outcome.placebo.minus)) +
  Sample(id = "Placebo M+",
         sample.size = sample.size.placebo.plus,
         outcome.par = parameters(outcome.placebo.plus)) +
  Sample(id = "Treatment M-",
         sample.size = sample.size.treatment.minus,
         outcome.par = parameters(outcome.treatment.minus)) +
  Sample(id = "Treatment M+",
         sample.size = sample.size.treatment.plus,
         outcome.par = parameters(outcome.treatment.plus))
```

## Define an Analysis Model

The analysis model in this clinical trial example is generally similar to that used in [Case study 2](#case-study-2-1) but there is an important difference which is described below.

As in [Case study 2](#case-study-2-1), the primary endpoint follows a normal distribution and thus the treatment effect will be assessed using the two-sample *t*-test.

Since two null hypotheses are tested in this trial (null hypotheses of no effect in the overall population of patients and subpopulation of marker-positive patients), a multiplicity adjustment needs to be applied. The Hochberg procedure with equally weighted null hypotheses will be used for this purpose.

A key feature of the analysis strategy in this case study is that the samples defined in the data model are different from the samples used in the analysis of the primary endpoint. As shown in the table above, four samples are included in the data model. However, from the analysis perspective, the sponsor is interested in examining the treatment effect in two samples, namely, the overall population and the marker-positive subpopulation. As shown below, to perform a comparison in the overall population, the *t*-test is applied to the following analysis samples:

- **Placebo arm:** Samples 1 and 2 (`Placebo M-` and `Placebo M+`) are merged.
- **Treatment arm:** Samples 3 and 4 (`Treatment M-` and `Treatment M+`) are merged.

Further, the treatment effect test in the subpopulation of marker-positive patients is carried out based on these analysis samples:

- **Placebo arm:** Sample 2 (`Placebo M+`).
- **Treatment arm:** Sample 4 (`Treatment M+`).

These analysis samples are specified in the analysis model below.
The samples defined in the data model are merged using the `c()` or `list()` function, e.g., `c("Placebo M-", "Placebo M+")` defines the placebo arm and `c("Treatment M-", "Treatment M+")` defines the experimental treatment arm in the overall population test.

```r
# Analysis model
case.study3.analysis.model = AnalysisModel() +
  MultAdjProc(proc = "HochbergAdj") +
  Test(id = "OP test",
       samples = samples(c("Placebo M-", "Placebo M+"),
                         c("Treatment M-", "Treatment M+")),
       method = "TTest") +
  Test(id = "M+ test",
       samples = samples("Placebo M+", "Treatment M+"),
       method = "TTest")
```

## Define an Evaluation Model

It is reasonable to consider the following success criteria in this case study:

- **Marginal power:** Probability of a significant outcome in each patient population.
- **Disjunctive power:** Probability of a significant treatment effect in the overall population (OP) or marker-positive subpopulation (M+). This metric defines the overall probability of success in this clinical trial.
- **Conjunctive power:** Probability of simultaneously achieving significance in the overall population and marker-positive subpopulation. This criterion will be useful if the trial's sponsor is interested in pursuing an enhanced efficacy claim ([Millen et al., 2012](http://dij.sagepub.com/content/46/6/647.abstract)).

The following evaluation model applies the three criteria to the two tests listed in the analysis model:

```r
# Evaluation model
case.study3.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("OP test", "M+ test"),
            labels = c("OP test", "M+ test"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Disjunctive power",
            method = "DisjunctivePower",
            tests = tests("OP test", "M+ test"),
            labels = "Disjunctive power",
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Conjunctive power",
            method = "ConjunctivePower",
            tests = tests("OP test", "M+ test"),
            labels = "Conjunctive power",
            par = parameters(alpha = 0.025))
```

## Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded from the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%203.R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%203.docx)

# Case study 4

## Summary

Case study 4 serves as an extension of the oncology clinical trial example presented in [Case study 1](#case-study-1-1). Consider again a Phase III trial in patients with metastatic colorectal cancer (MCC). The same general design will be assumed in this section; however, an additional endpoint (overall survival) will be introduced. The case of two endpoints helps showcase the package's ability to model complex design and analysis strategies in trials with multivariate outcomes.

Progression-free survival (PFS) is the primary endpoint in this clinical trial and overall survival (OS) serves as the key secondary endpoint, which provides supportive evidence of treatment efficacy. A hierarchical testing approach will be utilized in the analysis of the two endpoints. The PFS analysis will be performed first at α = 0.025 (one-sided), followed by the OS analysis at the same level if a significant effect on PFS is established. The resulting testing procedure is equivalent to the fixed-sequence procedure and controls the overall Type I error rate ([Dmitrienko and D’Agostino, 2013](http://onlinelibrary.wiley.com/doi/10.1002/sim.5990/abstract)).
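The logic of the fixed-sequence adjustment can also be illustrated outside of a simulation with the package's `AdjustPvalues` function (see the adjusted *p*-values vignette). This is a minimal sketch; the raw *p*-values below are hypothetical values chosen for illustration:

```r
library(Mediana)

# Hypothetical one-sided raw p-values for the PFS and OS tests,
# in the order of the testing sequence (PFS first)
rawp = c(0.011, 0.020)

# Fixed-sequence adjustment: each adjusted p-value is the running
# maximum of the raw p-values, so the OS test can only be
# significant if the PFS test is significant
AdjustPvalues(rawp, proc = "FixedSeqAdj")
```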
The treatment effect assumptions that will be used in clinical scenario evaluation are listed in the table below. The table shows the hypothesized median times along with the corresponding hazard rates for the primary and secondary endpoints. It follows from the table that the expected effect size is much larger for PFS compared to OS (the PFS hazard ratio is lower than the OS hazard ratio).

```{r, results = "asis", echo = FALSE}
pander::pandoc.table(data.frame(Endpoint = c("Progression-free survival", "", "", "Overall survival", "", ""),
                                Statistic = c(rep(c("Median time (months)", "Hazard rate", "Hazard ratio"), 2)),
                                Placebo = c(6, 0.116, 0.67, 15, 0.046, 0.79),
                                Treatment = c(9, 0.077, "", 19, 0.036, "")))
```

## Define a Data Model

In this clinical trial two endpoints are evaluated for each patient (PFS and OS) and thus their joint distribution needs to be specified in the data model. A bivariate exponential distribution will be used in this example and samples from this bivariate distribution will be generated by the `MVExpoPFSOSDist` function, which implements multivariate exponential distributions. The function utilizes the copula method, i.e., random variables that follow a bivariate normal distribution are generated first and then converted into exponential random variables.

The next several statements specify the parameters of the bivariate exponential distribution:

- Parameters of the marginal exponential distributions, i.e., the hazard rates.
- Correlation matrix of the underlying multivariate normal distribution used in the copula method.

The hazard rates for PFS and OS in each treatment arm are defined based on the information presented in the table above (`placebo.par` and `treatment.par`) and the correlation matrix is specified based on historical information (`corr.matrix`). These parameters are combined to define the outcome parameter sets (`outcome.placebo` and `outcome.treatment`) that will be included in the sample-specific set of data model parameters (`Sample` object). The copula method itself is illustrated in the sketch after the code block.

```r
# Outcome parameters: Progression-free survival
median.time.pfs.placebo = 6
rate.pfs.placebo = log(2)/median.time.pfs.placebo
outcome.pfs.placebo = parameters(rate = rate.pfs.placebo)
median.time.pfs.treatment = 9
rate.pfs.treatment = log(2)/median.time.pfs.treatment
outcome.pfs.treatment = parameters(rate = rate.pfs.treatment)
hazard.pfs.ratio = rate.pfs.treatment/rate.pfs.placebo

# Outcome parameters: Overall survival
median.time.os.placebo = 15
rate.os.placebo = log(2)/median.time.os.placebo
outcome.os.placebo = parameters(rate = rate.os.placebo)
median.time.os.treatment = 19
rate.os.treatment = log(2)/median.time.os.treatment
outcome.os.treatment = parameters(rate = rate.os.treatment)
hazard.os.ratio = rate.os.treatment/rate.os.placebo

# Parameter lists
placebo.par = parameters(parameters(rate = rate.pfs.placebo),
                         parameters(rate = rate.os.placebo))
treatment.par = parameters(parameters(rate = rate.pfs.treatment),
                           parameters(rate = rate.os.treatment))

# Correlation between two endpoints
corr.matrix = matrix(c(1.0, 0.3,
                       0.3, 1.0), 2, 2)

# Outcome parameters
outcome.placebo = parameters(par = placebo.par, corr = corr.matrix)
outcome.treatment = parameters(par = treatment.par, corr = corr.matrix)
```
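To make the copula method more concrete, the following stand-alone sketch mimics how correlated exponential variables can be generated. This is an illustration of the general technique, not the package's internal implementation; it reuses `corr.matrix` and the placebo hazard rates defined above:

```r
library(mvtnorm)

set.seed(1234)
# Step 1: generate correlated standard normal variates
z = rmvnorm(1000, mean = c(0, 0), sigma = corr.matrix)
# Step 2: map them to correlated uniforms via the normal CDF
u = pnorm(z)
# Step 3: invert the exponential CDF to obtain correlated
# exponential PFS and OS times with the target hazard rates
pfs = qexp(u[, 1], rate = rate.pfs.placebo)
os = qexp(u[, 2], rate = rate.os.placebo)
# MVExpoPFSOSDist additionally sets PFS to OS when OS occurs first
pfs = pmin(pfs, os)
```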
To define the sample-specific data model parameters, two more components are needed. First, a 2:1 randomization ratio will be used in this clinical trial and thus the number of events as well as the randomization ratio are specified by the user in the `Event` object. Second, a separate sample ID needs to be assigned to each endpoint within the two samples (e.g., `Placebo PFS` and `Placebo OS`) corresponding to the two treatment arms. This will enable the user to construct analysis models for examining the treatment effect on each endpoint.

```r
# Number of events
event.count.total = c(270, 300)
randomization.ratio = c(1, 2)

# Data model
case.study4.data.model = DataModel() +
  OutcomeDist(outcome.dist = "MVExpoPFSOSDist") +
  Event(n.events = event.count.total, rando.ratio = randomization.ratio) +
  Sample(id = list("Placebo PFS", "Placebo OS"),
         outcome.par = parameters(outcome.placebo)) +
  Sample(id = list("Treatment PFS", "Treatment OS"),
         outcome.par = parameters(outcome.treatment))
```

## Define an Analysis Model

The treatment comparisons for both endpoints will be carried out based on the log-rank test (`method = "LogrankTest"`). Further, as stated in the summary, the two endpoints will be tested hierarchically using a multiplicity adjustment procedure known as the fixed-sequence procedure. This procedure belongs to the class of chain procedures (`proc = "ChainAdj"`) and the following figure provides a visual summary of the decision rules used in this procedure.
![](figures/CaseStudy04-fig1.png)
The circles in this figure denote the two null hypotheses of interest:

- H1: Null hypothesis of no difference between the two arms with respect to PFS.
- H2: Null hypothesis of no difference between the two arms with respect to OS.

The value displayed above a circle defines the initial weight of each null hypothesis. All of the overall α is allocated to H1 to ensure that the OS test will be carried out only if the PFS test is significant, and the arrow indicates that H2 will be tested after H1 is rejected. More formally, a chain procedure is uniquely defined by specifying a vector of hypothesis weights (W) and a matrix of transition parameters (G). Based on the figure, these parameters are given by
![](figures/CaseStudy04-fig2.png)
Two objects (named `chain.weight` and `chain.transition`) are defined below to pass the hypothesis weights and transition parameters to the multiplicity adjustment parameters.

```r
# Parameters of the chain procedure (fixed-sequence procedure)
# Vector of hypothesis weights
chain.weight = c(1, 0)
# Matrix of transition parameters
chain.transition = matrix(c(0, 1,
                            0, 0), 2, 2, byrow = TRUE)

# Analysis model
case.study4.analysis.model = AnalysisModel() +
  MultAdjProc(proc = "ChainAdj",
              par = parameters(weight = chain.weight,
                               transition = chain.transition)) +
  Test(id = "PFS test",
       samples = samples("Placebo PFS", "Treatment PFS"),
       method = "LogrankTest") +
  Test(id = "OS test",
       samples = samples("Placebo OS", "Treatment OS"),
       method = "LogrankTest")
```

As shown above, the two significance tests included in the analysis model reflect the two-fold objective of this trial. The first test focuses on a PFS comparison between the two treatment arms (`id = "PFS test"`) whereas the other test is carried out to assess the treatment effect on OS (`id = "OS test"`).

Alternatively, the fixed-sequence procedure can be implemented using the `FixedSeqAdj` method introduced in version 1.0.4. This implementation is simpler as no parameters need to be specified.

```r
# Analysis model
case.study4.analysis.model = AnalysisModel() +
  MultAdjProc(proc = "FixedSeqAdj") +
  Test(id = "PFS test",
       samples = samples("Placebo PFS", "Treatment PFS"),
       method = "LogrankTest") +
  Test(id = "OS test",
       samples = samples("Placebo OS", "Treatment OS"),
       method = "LogrankTest")
```

## Define an Evaluation Model

The evaluation model specifies the most basic criterion for assessing the probability of success in the PFS and OS analyses (marginal power). A criterion based on disjunctive power could be considered but it would not provide additional information. Due to the hierarchical testing approach, the probability of detecting a significant treatment effect on at least one endpoint (disjunctive power) is simply equal to the probability of establishing a significant PFS effect.

```r
# Evaluation model
case.study4.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("PFS test", "OS test"),
            labels = c("PFS test", "OS test"),
            par = parameters(alpha = 0.025))
```

## Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded from the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%204.R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%204.docx)

# Case study 5

## Summary

This case study extends the straightforward setting presented in [Case study 1](#case-study-1-1) to a more complex setting involving two trial endpoints and three treatment arms. Case study 5 illustrates the process of performing power calculations in clinical trials with multiple, hierarchically structured objectives and "multivariate" multiplicity adjustment strategies (gatekeeping procedures).

Consider a three-arm Phase III clinical trial for the treatment of rheumatoid arthritis (RA). Two co-primary endpoints will be used to evaluate the effect of a novel treatment on clinical response and on physical function. The endpoints are defined as follows:

- Endpoint 1: Response rate based on the American College of Rheumatology definition of improvement (ACR20).
- Endpoint 2: Change from baseline in the Health Assessment Questionnaire-Disability Index (HAQ-DI).
The two endpoints have different marginal distributions. The first endpoint is binary whereas the second one is continuous and follows a normal distribution.

The efficacy profile of two doses of a new treatment (Dose L and Dose H) will be compared to that of a placebo and a successful outcome will be defined as a significant treatment effect at either or both doses. A hierarchical structure has been established within each dose so that Endpoint 2 will be tested if and only if there is evidence of a significant effect on Endpoint 1.

Three treatment effect scenarios for each endpoint are displayed in the table below. The scenarios define three outcome parameter sets. The first set represents a rather conservative treatment effect scenario, the second set is a standard (most plausible) scenario and the third set represents an optimistic scenario. Note that a reduction in the HAQ-DI score indicates a beneficial effect and thus the mean changes are assumed to be negative for Endpoint 2.

```{r, results = "asis", echo = FALSE}
pander::pandoc.table(data.frame(Endpoint = c("ACR20 (%)", "", "", "HAQ-DI (mean (SD))", "", ""),
                                "Outcome parameter set" = c(rep(c("Conservative", "Standard", "Optimistic"), 2)),
                                Placebo = c("30%", "30%", "30%", "−0.10 (0.50)", "−0.10 (0.50)", "−0.10 (0.50)"),
                                "Dose L" = c("40%", "45%", "50%", "−0.20 (0.50)", "−0.25 (0.50)", "−0.30 (0.50)"),
                                "Dose H" = c("50%", "55%", "60%", "−0.30 (0.50)", "−0.35 (0.50)", "−0.40 (0.50)")))
```

## Define a Data Model

As in [Case study 4](#case-study-4-1), two endpoints are evaluated for each patient in this clinical trial example, which means that their joint distribution needs to be specified. The `MVMixedDist` method will be utilized for specifying a bivariate distribution with binomial and normal marginals (`var.type = list("BinomDist", "NormalDist")`). In general, this function is used for modeling correlated normal, binomial and exponential endpoints and relies on the copula method, i.e., random variables are generated from a multivariate normal distribution and converted into variables with pre-specified marginal distributions.

Three parameters must be defined to specify the joint distribution of Endpoints 1 and 2 in this clinical trial example:

- Variable types (binomial and normal).
- Outcome distribution parameters (proportion for Endpoint 1, mean and SD for Endpoint 2) based on the assumptions listed in the table above.
- Correlation matrix of the multivariate normal distribution used in the copula method.

These parameters are combined to define three outcome parameter sets (e.g., `outcome1.placebo`, `outcome1.dosel` and `outcome1.doseh`) that will be included in the `Sample` object in the data model.
```r
# Variable types
var.type = list("BinomDist", "NormalDist")

# Outcome distribution parameters
placebo.par = parameters(parameters(prop = 0.3),
                         parameters(mean = -0.10, sd = 0.5))

dosel.par1 = parameters(parameters(prop = 0.40),
                        parameters(mean = -0.20, sd = 0.5))
dosel.par2 = parameters(parameters(prop = 0.45),
                        parameters(mean = -0.25, sd = 0.5))
dosel.par3 = parameters(parameters(prop = 0.50),
                        parameters(mean = -0.30, sd = 0.5))

doseh.par1 = parameters(parameters(prop = 0.50),
                        parameters(mean = -0.30, sd = 0.5))
doseh.par2 = parameters(parameters(prop = 0.55),
                        parameters(mean = -0.35, sd = 0.5))
doseh.par3 = parameters(parameters(prop = 0.60),
                        parameters(mean = -0.40, sd = 0.5))

# Correlation between two endpoints
corr.matrix = matrix(c(1.0, 0.5,
                       0.5, 1.0), 2, 2)

# Outcome parameter set 1
outcome1.placebo = parameters(type = var.type, par = placebo.par, corr = corr.matrix)
outcome1.dosel = parameters(type = var.type, par = dosel.par1, corr = corr.matrix)
outcome1.doseh = parameters(type = var.type, par = doseh.par1, corr = corr.matrix)

# Outcome parameter set 2
outcome2.placebo = parameters(type = var.type, par = placebo.par, corr = corr.matrix)
outcome2.dosel = parameters(type = var.type, par = dosel.par2, corr = corr.matrix)
outcome2.doseh = parameters(type = var.type, par = doseh.par2, corr = corr.matrix)

# Outcome parameter set 3
outcome3.placebo = parameters(type = var.type, par = placebo.par, corr = corr.matrix)
outcome3.dosel = parameters(type = var.type, par = dosel.par3, corr = corr.matrix)
outcome3.doseh = parameters(type = var.type, par = doseh.par3, corr = corr.matrix)
```

These outcome parameter sets are then combined within each `Sample` object and the common sample size per treatment arm ranges between 100 and 120:

```r
# Data model
case.study5.data.model = DataModel() +
  OutcomeDist(outcome.dist = "MVMixedDist") +
  SampleSize(c(100, 120)) +
  Sample(id = list("Placebo ACR20", "Placebo HAQ-DI"),
         outcome.par = parameters(outcome1.placebo, outcome2.placebo, outcome3.placebo)) +
  Sample(id = list("DoseL ACR20", "DoseL HAQ-DI"),
         outcome.par = parameters(outcome1.dosel, outcome2.dosel, outcome3.dosel)) +
  Sample(id = list("DoseH ACR20", "DoseH HAQ-DI"),
         outcome.par = parameters(outcome1.doseh, outcome2.doseh, outcome3.doseh))
```

## Define an Analysis Model

To set up the analysis model in this clinical trial example, note that the treatment comparisons for Endpoints 1 and 2 will be carried out based on two different statistical tests:

- Endpoint 1: Two-sample test for comparing proportions (`method = "PropTest"`).
- Endpoint 2: Two-sample *t*-test (`method = "TTest"`).

As pointed out earlier, the two endpoints will be tested hierarchically within each dose. The figure below provides a visual summary of the testing strategy used in this clinical trial. The circles in this figure denote the four null hypotheses of interest:

- H1: Null hypothesis of no difference between Dose L and placebo with respect to Endpoint 1.
- H2: Null hypothesis of no difference between Dose H and placebo with respect to Endpoint 1.
- H3: Null hypothesis of no difference between Dose L and placebo with respect to Endpoint 2.
- H4: Null hypothesis of no difference between Dose H and placebo with respect to Endpoint 2.
![](figures/CaseStudy05-fig1.png)
A multiple testing procedure known as the multiple-sequence gatekeeping procedure will be applied to account for the hierarchical structure of this multiplicity problem. This procedure belongs to the class of mixture-based gatekeeping procedures introduced in [Dmitrienko et al. (2015)](http://www.tandfonline.com/doi/abs/10.1080/10543406.2015.1074917). This gatekeeping procedure is specified by defining the following three parameters:

- Families of null hypotheses (`family`).
- Component procedures used in the families (`component.procedure`).
- Truncation parameters used in the families (`gamma`).

```r
# Parameters of the gatekeeping procedure (multiple-sequence gatekeeping procedure)
# Tests to which the multiplicity adjustment will be applied
test.list = tests("Placebo vs DoseH - ACR20",
                  "Placebo vs DoseL - ACR20",
                  "Placebo vs DoseH - HAQ-DI",
                  "Placebo vs DoseL - HAQ-DI")

# Families of hypotheses
family = families(family1 = c(1, 2),
                  family2 = c(3, 4))

# Component procedures for each family
component.procedure = families(family1 = "HolmAdj",
                               family2 = "HolmAdj")

# Truncation parameter for each family
gamma = families(family1 = 0.8,
                 family2 = 1)
```

These parameters are included in the `MultAdjProc` object defined below. The tests to which the multiplicity adjustment will be applied are defined in the `tests` argument. This argument is optional and can be omitted if the adjustment applies to all tests included in the analysis model. The `family` argument states that the null hypotheses will be grouped into two families:

- Family 1: H1 and H2.
- Family 2: H3 and H4.

Note that the hypothesis order corresponds to the order of the tests defined in the analysis model, unless the tests are explicitly listed in the `tests` argument of the `MultAdjProc` object.

The families will be tested sequentially and a truncated Holm procedure will be applied within each family (`component.procedure`). Lastly, the truncation parameter will be set to 0.8 in Family 1 and to 1 in Family 2 (`gamma`). The resulting parameters are included in the `par` argument of the `MultAdjProc` object and, as before, the `proc` argument is used to specify the multiple testing procedure (`MultipleSequenceGatekeepingAdj`).

The tests are then specified in the analysis model and the overall analysis model is defined as follows:

```r
# Analysis model
case.study5.analysis.model = AnalysisModel() +
  MultAdjProc(proc = "MultipleSequenceGatekeepingAdj",
              par = parameters(family = family,
                               proc = component.procedure,
                               gamma = gamma),
              tests = test.list) +
  Test(id = "Placebo vs DoseL - ACR20",
       method = "PropTest",
       samples = samples("Placebo ACR20", "DoseL ACR20")) +
  Test(id = "Placebo vs DoseH - ACR20",
       method = "PropTest",
       samples = samples("Placebo ACR20", "DoseH ACR20")) +
  Test(id = "Placebo vs DoseL - HAQ-DI",
       method = "TTest",
       samples = samples("DoseL HAQ-DI", "Placebo HAQ-DI")) +
  Test(id = "Placebo vs DoseH - HAQ-DI",
       method = "TTest",
       samples = samples("DoseH HAQ-DI", "Placebo HAQ-DI"))
```

Recall that a numerically lower value indicates a beneficial effect for the HAQ-DI score and, as a result, the experimental treatment arm must be defined prior to the placebo arm in the `samples` argument of the HAQ-DI tests, e.g., `samples = samples("DoseL HAQ-DI", "Placebo HAQ-DI")`.
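The same gatekeeping parameters can also be applied directly to a set of raw *p*-values with the package's `AdjustPvalues` function (see the adjusted *p*-values vignette). This is a minimal sketch reusing the `family`, `component.procedure` and `gamma` objects defined above; the raw *p*-values are hypothetical:

```r
# Hypothetical one-sided raw p-values, ordered as in test.list
# (ACR20 tests in Family 1, HAQ-DI tests in Family 2)
rawp = c(0.009, 0.018, 0.014, 0.045)

AdjustPvalues(rawp,
              proc = "MultipleSequenceGatekeepingAdj",
              par = parameters(family = family,
                               proc = component.procedure,
                               gamma = gamma))
```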
## Define an Evaluation Model

In order to assess the probability of success in this clinical trial, a hybrid criterion based on the conjunctive criterion (both trial endpoints must be significant) and disjunctive criterion (at least one dose-placebo comparison must be significant) can be considered.

This criterion will be met if a significant effect is established at one or two doses on Endpoint 1 (ACR20) and also at one or two doses on Endpoint 2 (HAQ-DI). However, due to the hierarchical structure of the testing strategy (see the figure above), this is equivalent to demonstrating a significant difference between placebo and at least one dose with respect to Endpoint 2. The corresponding criterion is a subset disjunctive criterion based on the two Endpoint 2 tests (subset disjunctive power was briefly mentioned in [Case study 2](#case-study-2-1)).

In addition, the sponsor may be interested in evaluating marginal power as well as subset disjunctive power based on the Endpoint 1 tests. The latter criterion will be met if a significant difference between placebo and at least one dose is established with respect to Endpoint 1. Additionally, as in [Case study 2](#case-study-2-1), the user could consider defining custom evaluation criteria.

The three resulting evaluation criteria (marginal power, subset disjunctive criterion based on the Endpoint 1 tests and subset disjunctive criterion based on the Endpoint 2 tests) are included in the following evaluation model.

```r
# Evaluation model
case.study5.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs DoseL - ACR20",
                          "Placebo vs DoseH - ACR20",
                          "Placebo vs DoseL - HAQ-DI",
                          "Placebo vs DoseH - HAQ-DI"),
            labels = c("Placebo vs DoseL - ACR20",
                       "Placebo vs DoseH - ACR20",
                       "Placebo vs DoseL - HAQ-DI",
                       "Placebo vs DoseH - HAQ-DI"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Disjunctive power - ACR20",
            method = "DisjunctivePower",
            tests = tests("Placebo vs DoseL - ACR20",
                          "Placebo vs DoseH - ACR20"),
            labels = "Disjunctive power - ACR20",
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Disjunctive power - HAQ-DI",
            method = "DisjunctivePower",
            tests = tests("Placebo vs DoseL - HAQ-DI",
                          "Placebo vs DoseH - HAQ-DI"),
            labels = "Disjunctive power - HAQ-DI",
            par = parameters(alpha = 0.025))
```

## Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded from the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%205.R)
- [CSE report](http://gpaux.github.io/Mediana/Case%20study%205.docx)

# Case study 6

## Summary

Case study 6 is an extension of [Case study 2](#case-study-2-1) where the objective of the sponsor is to compare several Multiple Testing Procedures (MTPs). The main difference is in the specification of the analysis model.

## Define a Data Model

The same data model as in [Case study 2](#case-study-2-1) will be used in this case study. However, as shown in the table below, a new set of outcome parameters will be added in this case study (an optimistic set of parameters).
```{r, results = "asis", echo = FALSE}
pander::pandoc.table(data.frame("Outcome parameter set" = c("Standard", "", "", "", "Optimistic", "", "", ""),
                                "Arm" = c(rep(c("Placebo", "Dose L", "Dose M", "Dose H"), 2)),
                                "Mean" = c(16, 19.5, 21, 21, 16, 20, 21, 22),
                                "SD" = c(rep(18, 8))))
```

```r
# Standard
outcome1.placebo = parameters(mean = 16, sd = 18)
outcome1.dosel = parameters(mean = 19.5, sd = 18)
outcome1.dosem = parameters(mean = 21, sd = 18)
outcome1.doseh = parameters(mean = 21, sd = 18)

# Optimistic
outcome2.placebo = parameters(mean = 16, sd = 18)
outcome2.dosel = parameters(mean = 20, sd = 18)
outcome2.dosem = parameters(mean = 21, sd = 18)
outcome2.doseh = parameters(mean = 22, sd = 18)

# Data model
case.study6.data.model = DataModel() +
  OutcomeDist(outcome.dist = "NormalDist") +
  SampleSize(seq(220, 260, 20)) +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome1.placebo, outcome2.placebo)) +
  Sample(id = "Dose L",
         outcome.par = parameters(outcome1.dosel, outcome2.dosel)) +
  Sample(id = "Dose M",
         outcome.par = parameters(outcome1.dosem, outcome2.dosem)) +
  Sample(id = "Dose H",
         outcome.par = parameters(outcome1.doseh, outcome2.doseh))
```

## Define an Analysis Model

As in [Case study 2](#case-study-2-1), each dose-placebo comparison will be performed using a one-sided two-sample *t*-test (`TTest` method defined in each `Test` object). The same nomenclature will be used to define the hypotheses, i.e.:

- H1: Null hypothesis of no difference between Dose L and placebo.
- H2: Null hypothesis of no difference between Dose M and placebo.
- H3: Null hypothesis of no difference between Dose H and placebo.

In this case study, as in [Case study 2](#case-study-2-1), the overall success criterion in the trial is formulated in terms of demonstrating a beneficial effect at any of the three doses, which induces an inflation of the overall Type I error rate. In this case study, the sponsor is interested in comparing several Multiple Testing Procedures, such as the weighted Bonferroni, Holm and Hochberg procedures. These MTPs are defined below:

```r
# Multiplicity adjustments
# No adjustment
mult.adj1 = MultAdjProc(proc = NA)

# Bonferroni adjustment (with unequal weights)
mult.adj2 = MultAdjProc(proc = "BonferroniAdj",
                        par = parameters(weight = c(1/4, 1/4, 1/2)))

# Holm adjustment (with unequal weights)
mult.adj3 = MultAdjProc(proc = "HolmAdj",
                        par = parameters(weight = c(1/4, 1/4, 1/2)))

# Hochberg adjustment (with unequal weights)
mult.adj4 = MultAdjProc(proc = "HochbergAdj",
                        par = parameters(weight = c(1/4, 1/4, 1/2)))
```

The `mult.adj1` object, which specifies that no adjustment will be used, is defined in order to quantify the power loss induced by each MTP. It should be noted that for each weighted procedure, a higher weight is assigned to the test of Placebo vs Dose H (1/2), and the remaining weight is equally split between the two other tests (i.e., 1/4 for each test). These parameters are specified in the `par` argument of each MTP.
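To see the effect of the unequal weights, the weighted Bonferroni adjustment can be applied directly to a set of raw *p*-values with the package's `AdjustPvalues` function (see the adjusted *p*-values vignette). This is a minimal sketch; the raw *p*-values below are hypothetical:

```r
# Hypothetical one-sided raw p-values for H1, H2 and H3
rawp = c(0.015, 0.008, 0.022)

# Weighted Bonferroni: each raw p-value is divided by its
# hypothesis weight (and capped at 1), so the Dose H test
# benefits from its larger weight of 1/2
AdjustPvalues(rawp,
              proc = "BonferroniAdj",
              par = parameters(weight = c(1/4, 1/4, 1/2)))
```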
The analysis model is defined as follows:

```r
# Analysis model
case.study6.analysis.model = AnalysisModel() +
  MultAdj(mult.adj1, mult.adj2, mult.adj3, mult.adj4) +
  Test(id = "Placebo vs Dose L",
       samples = samples("Placebo", "Dose L"),
       method = "TTest") +
  Test(id = "Placebo vs Dose M",
       samples = samples("Placebo", "Dose M"),
       method = "TTest") +
  Test(id = "Placebo vs Dose H",
       samples = samples("Placebo", "Dose H"),
       method = "TTest")
```

For the sake of compactness, all MTPs are combined using a `MultAdj` object, but it is worth mentioning that each MTP could have been directly added to the `AnalysisModel` object using the `+` operator.

## Define an Evaluation Model

As for the data model, the same evaluation model as in [Case study 2](#case-study-2-1) will be used in this case study. Refer to [Case study 2](#case-study-2-1) for more information.

```r
# Evaluation model
case.study6.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs Dose L",
                          "Placebo vs Dose M",
                          "Placebo vs Dose H"),
            labels = c("Placebo vs Dose L",
                       "Placebo vs Dose M",
                       "Placebo vs Dose H"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Disjunctive power",
            method = "DisjunctivePower",
            tests = tests("Placebo vs Dose L",
                          "Placebo vs Dose M",
                          "Placebo vs Dose H"),
            labels = "Disjunctive power",
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Dose H and at least one dose",
            method = "case.study6.criterion",
            tests = tests("Placebo vs Dose L",
                          "Placebo vs Dose M",
                          "Placebo vs Dose H"),
            labels = "Dose H and at least one of the two other doses are significant",
            par = parameters(alpha = 0.025))
```

The last `Criterion` object specifies a custom criterion which computes the probability of a significant treatment effect at Dose H and a significant treatment difference at Dose L or Dose M. The `case.study6.criterion` function must be defined by the user (see the dedicated vignette `vignette("custom-functions", package = "Mediana")` for details on writing custom evaluation criteria).

## Perform Clinical Scenario Evaluation

Using the data, analysis and evaluation models, simulation-based Clinical Scenario Evaluation is performed by calling the `CSE` function:

```r
# Simulation Parameters
case.study6.sim.parameters = SimParameters(n.sims = 1000,
                                           proc.load = "full",
                                           seed = 42938001)

# Perform clinical scenario evaluation
case.study6.results = CSE(case.study6.data.model,
                          case.study6.analysis.model,
                          case.study6.evaluation.model,
                          case.study6.sim.parameters)
```

## Generate a Simulation Report

This case study will also illustrate the process of customizing a Word-based simulation report. This can be accomplished by defining custom sections and subsections to provide a structured summary of the complex set of simulation results.

### Create a Customized Simulation Report

#### Define a Presentation Model

Several presentation models will be used to produce customized simulation reports:

- A report without subsections.
- A report with subsections.
- A report with combined sections.

First of all, a default `PresentationModel` object (`case.study6.presentation.model.default`) will be created. This object will include the common components of the report that are shared across the presentation models.
The project information (`Project` object), sorting options in summary tables (`Table` object) and specification of custom labels (`CustomLabel` objects) are included in this object:

```r
case.study6.presentation.model.default = PresentationModel() +
  Project(username = "[Mediana's User]",
          title = "Case study 6",
          description = "Clinical trial in patients with schizophrenia - Several MTPs") +
  Table(by = "sample.size") +
  CustomLabel(param = "sample.size",
              label = paste0("N = ", seq(220, 260, 20))) +
  CustomLabel(param = "multiplicity.adjustment",
              label = c("No adjustment", "Bonferroni adjustment", "Holm adjustment", "Hochberg adjustment"))
```

#### Report without subsections

The first simulation report will include a section for each outcome parameter set. To accomplish this, a `Section` object is added to the default `PresentationModel` object and the report is generated:

```r
# Reporting 1 - Without subsections
case.study6.presentation.model1 = case.study6.presentation.model.default +
  Section(by = "outcome.parameter")

# Report Generation
GenerateReport(presentation.model = case.study6.presentation.model1,
               cse.results = case.study6.results,
               report.filename = "Case study 6 - Without subsections.docx")
```

#### Report with subsections

The second report will include a section for each outcome parameter set and, in addition, a subsection will be created for each multiplicity adjustment procedure. The `Section` and `Subsection` objects are added to the default `PresentationModel` object as shown below and the report is generated:

```r
# Reporting 2 - With subsections
case.study6.presentation.model2 = case.study6.presentation.model.default +
  Section(by = "outcome.parameter") +
  Subsection(by = "multiplicity.adjustment")

# Report Generation
GenerateReport(presentation.model = case.study6.presentation.model2,
               cse.results = case.study6.results,
               report.filename = "Case study 6 - With subsections.docx")
```

#### Report with combined sections

Finally, the third report will include a section for each combination of outcome parameter set and multiplicity adjustment procedure. This is accomplished by adding a `Section` object to the default `PresentationModel` object and specifying both the outcome parameter and the multiplicity adjustment in the section's `by` argument.
```r
# Reporting 3 - Combined sections
case.study6.presentation.model3 = case.study6.presentation.model.default +
  Section(by = c("outcome.parameter", "multiplicity.adjustment"))

# Report Generation
GenerateReport(presentation.model = case.study6.presentation.model3,
               cse.results = case.study6.results,
               report.filename = "Case study 6 - Combined Sections.docx")
```

## Download

The R code and the report that summarizes the results of Clinical Scenario Evaluation for this case study can be downloaded from the Mediana website:

- [R code](http://gpaux.github.io/Mediana/Case%20study%206.R)
- [CSE report without subsections](http://gpaux.github.io/Mediana/Case%20study%206%20-%20Without%20subsections.docx)
- [CSE report with subsections](http://gpaux.github.io/Mediana/Case%20study%206%20-%20With%20subsections.docx)
- [CSE report with combined sections](http://gpaux.github.io/Mediana/Case%20study%206%20-%20Combined%20Sections.docx)

Mediana/vignettes/adjusted-pvalues.Rmd

---
title: "Adjusted p-values and one-sided simultaneous confidence limits"
author: "Gautier Paux and Alex Dmitrienko"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Adjusted p-values and one-sided simultaneous confidence limits}
  %\VignetteEngine{knitr::rmarkdown}
  \usepackage[utf8]{inputenc}
---

# Introduction

Along with the clinical trial simulations feature, the Mediana R package can be used to obtain adjusted *p*-values and one-sided simultaneous confidence limits.

# `AdjustPvalues` function

The `AdjustPvalues` function can be used to get adjusted *p*-values for commonly used multiple testing procedures based on univariate *p*-values (Bonferroni, Holm, Hommel, Hochberg, fixed-sequence and fallback procedures), commonly used parametric multiple testing procedures (single-step and step-down Dunnett procedures) and multistage gatekeeping procedures.

## Description

### Inputs

The `AdjustPvalues` function requires the input of two pre-specified objects defined in the following two arguments:

- `pval` defines the raw *p*-values.
- `proc` defines the multiple testing procedure. Several procedures are already implemented in the Mediana package (listed below, along with the required or optional parameters to specify in the `par` argument):
- `BonferroniAdj`: Bonferroni procedure. Optional parameter: `weight`.
- `HolmAdj`: Holm procedure. Optional parameter: `weight`.
- `HochbergAdj`: Hochberg procedure. Optional parameter: `weight`.
- `HommelAdj`: Hommel procedure. Optional parameter: `weight`.
- `FixedSeqAdj`: Fixed-sequence procedure.
- `FallbackAdj`: Fallback procedure. Required parameter: `weight`.
- `DunnettAdj`: Single-step Dunnett procedure. Required parameter: `n`.
- `StepDownDunnettAdj`: Step-down Dunnett procedure. Required parameter: `n`.
- `ChainAdj`: Family of chain procedures. Required parameters: `weight` and `transition`.
- `NormalParamAdj`: Parametric multiple testing procedure derived from a multivariate normal distribution. Required parameter: `corr`. Optional parameter: `weight`.
- `ParallelGatekeepingAdj`: Family of parallel gatekeeping procedures. Required parameters: `family`, `proc`, `gamma`.
- `MultipleSequenceGatekeepingAdj`: Family of multiple-sequence gatekeeping procedures. Required parameters: `family`, `proc`, `gamma`.
- `MixtureGatekeepingAdj`: Family of mixture-based gatekeeping procedures. Required parameters: `family`, `proc`, `gamma`, `serial`, `parallel`.
- `par` defines the parameters associated with the multiple testing procedure.

### Outputs

The `AdjustPvalues` function returns a vector of adjusted *p*-values.

## Example

The following example illustrates the use of the `AdjustPvalues` function to get adjusted *p*-values for traditional nonparametric, semiparametric and parametric procedures, as well as more complex multiple testing procedures.

### Traditional nonparametric and semiparametric procedures

To illustrate the adjustment of raw *p*-values with the traditional nonparametric and semiparametric procedures, we will consider the following three raw *p*-values:

```r
rawp = c(0.012, 0.009, 0.023)
```

These *p*-values will be adjusted with several multiple testing procedures as specified below:

```r
# Bonferroni, Holm, Hochberg, Hommel, fixed-sequence and fallback procedures
proc = c("BonferroniAdj", "HolmAdj", "HochbergAdj", "HommelAdj", "FixedSeqAdj", "FallbackAdj")
```

In order to obtain the adjusted *p*-values for all these procedures, the `sapply` function can be used as follows. Note that as no `weight` parameter is defined, the equally weighted procedures are used to adjust the *p*-values. Finally, for the fixed-sequence procedure (`FixedSeqAdj`), the order of the testing sequence is based on the order of the *p*-values in the vector.

```r
# Equally weighted
sapply(proc, function(x) {AdjustPvalues(rawp,
                                        proc = x)})
```

The output is as follows:

```r
     BonferroniAdj HolmAdj HochbergAdj HommelAdj FixedSeqAdj FallbackAdj
[1,]         0.036   0.027       0.023     0.023       0.012      0.0360
[2,]         0.027   0.027       0.023     0.018       0.012      0.0270
[3,]         0.069   0.027       0.023     0.023       0.023      0.0345
```

In order to specify unequal weights for the three raw *p*-values, the `weight` parameter can be defined as follows. Note that this parameter has no effect on the adjustment with the fixed-sequence procedure.

```r
# Unequally weighted (no effect on the fixed-sequence procedure)
sapply(proc, function(x) {AdjustPvalues(rawp,
                                        proc = x,
                                        par = parameters(weight = c(1/2, 1/4, 1/4)))})
```

The output is as follows:

```r
     BonferroniAdj HolmAdj HochbergAdj HommelAdj FixedSeqAdj FallbackAdj
[1,]         0.024   0.024       0.018     0.018       0.012       0.024
[2,]         0.036   0.024       0.018     0.018       0.012       0.024
[3,]         0.092   0.024       0.023     0.023       0.023       0.024
```

### Traditional parametric procedures

Consider a clinical trial comparing three doses with placebo based on a normally distributed endpoint. Let H1, H2 and H3 be the three null hypotheses of no effect tested in the trial:

- H1: No difference between Dose 1 and Placebo
- H2: No difference between Dose 2 and Placebo
- H3: No difference between Dose 3 and Placebo

The treatment effect estimates, corresponding to the mean dose-placebo differences, are specified below, as well as the pooled standard deviation, the sample size, the standard errors and the *T*-statistics associated with the three dose-placebo tests.

```r
# Treatment effect estimates (mean dose-placebo differences)
est = c(2.3, 2.5, 1.9)

# Pooled standard deviation
sd = 9.5

# Study design is balanced with 180 patients per treatment arm
n = 180

# Standard errors
stderror = rep(sd*sqrt(2/n), 3)

# T-statistics associated with the three dose-placebo tests
stat = est/stderror
```

Based on the *T*-statistics, the raw *p*-values can be easily obtained:

```r
# One-sided p-values
rawp = 1 - pt(stat, 2*(n-1))
```

The adjusted *p*-values based on the single-step Dunnett and step-down Dunnett procedures are obtained as follows.
```r
# Adjusted p-values based on the Dunnett procedures
# (assuming that each test statistic follows a t distribution)
AdjustPvalues(rawp,
              proc = "DunnettAdj",
              par = parameters(n = n))
AdjustPvalues(rawp,
              proc = "StepDownDunnettAdj",
              par = parameters(n = n))
```

The outputs are presented below.

```r
> AdjustPvalues(rawp, proc = "DunnettAdj", par = parameters(n = n))
[1] 0.02887019 0.01722656 0.07213393
> AdjustPvalues(rawp, proc = "StepDownDunnettAdj", par = parameters(n = n))
[1] 0.02043820 0.01722544 0.02909082
```

### Gatekeeping procedures

For illustration, we will consider a clinical trial with two families of null hypotheses. The first family contains the null hypotheses associated with Endpoints 1 and 2, which are considered primary endpoints, and the second family contains the null hypotheses associated with Endpoints 3 and 4 (key secondary endpoints). The null hypotheses of the secondary family will be tested if and only if at least one null hypothesis from the first family is rejected.

Let H1, H2, H3 and H4 be the four null hypotheses of no effect on Endpoints 1, 2, 3 and 4, respectively, tested in the trial:

- H1: No difference between Drug and Placebo on Endpoint 1 (Family 1)
- H2: No difference between Drug and Placebo on Endpoint 2 (Family 1)
- H3: No difference between Drug and Placebo on Endpoint 3 (Family 2)
- H4: No difference between Drug and Placebo on Endpoint 4 (Family 2)

The raw *p*-values are specified below:

```r
# One-sided raw p-values (associated respectively with H1, H2, H3 and H4)
rawp = c(0.0082, 0.0174, 0.0042, 0.0180)
```

The parameters of the parallel gatekeeping procedure are specified using the three arguments `family`, which specifies the hypotheses included in each family, `proc`, which specifies the component procedure associated with each family, and `gamma`, which specifies the truncation parameter of each family.

```r
# Define hypotheses included in each family (index of the raw p-value vector)
family = families(family1 = c(1, 2),
                  family2 = c(3, 4))

# Define component procedure of each family
component.procedure = families(family1 = "HolmAdj",
                               family2 = "HolmAdj")

# Truncation parameter of each family
gamma = families(family1 = 0.5,
                 family2 = 1)
```

The adjusted *p*-values are obtained using the `AdjustPvalues` function as specified below:

```r
AdjustPvalues(rawp,
              proc = "ParallelGatekeepingAdj",
              par = parameters(family = family,
                               proc = component.procedure,
                               gamma = gamma))
[1] 0.0164 0.0232 0.0232 0.0232
```

# `AdjustCIs` function

The `AdjustCIs` function can be used to get simultaneous confidence intervals for selected multiple testing procedures based on univariate *p*-values (Bonferroni, Holm and fixed-sequence procedures) and commonly used parametric multiple testing procedures (single-step and step-down Dunnett procedures).

## Description

### Inputs

The `AdjustCIs` function requires the input of two pre-specified objects defined in the following two arguments:

- `est` defines the point estimates.
- `proc` defines the multiple testing procedure. Several procedures are already implemented in the Mediana package (listed below, along with the required or optional parameters to specify in the `par` argument):
- `BonferroniAdj`: Bonferroni procedure. Required parameters: `n`, `sd` and `covprob`. Optional parameter: `weight`.
- `HolmAdj`: Holm procedure. Required parameters: `n`, `sd` and `covprob`. Optional parameter: `weight`.
- `FixedSeqAdj`: Fixed-sequence procedure. Required parameters: `n`, `sd` and `covprob`.
- `DunnettAdj`: Single-step Dunnett procedure. Required parameters: `n`, `sd` and `covprob`.
- `StepDownDunnettAdj`: Step-down Dunnett procedure. Required parameters: `n`, `sd` and `covprob`.
- `par` defines the parameters associated with the multiple testing procedure.

### Outputs

The `AdjustCIs` function returns a vector of lower simultaneous confidence limits.

## Example

Consider a clinical trial comparing three doses with placebo based on a normally distributed endpoint. Let H1, H2 and H3 be the three null hypotheses of no effect tested in the trial:

- H1: No difference between Dose 1 and Placebo
- H2: No difference between Dose 2 and Placebo
- H3: No difference between Dose 3 and Placebo

The treatment effect estimates, corresponding to the mean dose-placebo differences, are specified below, as well as the pooled standard deviation and the sample size.

```r
# Null hypotheses of no treatment effect are equally weighted
weight = c(1/3, 1/3, 1/3)

# Treatment effect estimates (mean dose-placebo differences)
est = c(2.3, 2.5, 1.9)

# Pooled standard deviation
sd = 9.5

# Study design is balanced with 180 patients per treatment arm
n = 180
```

The one-sided simultaneous confidence limits for several multiple testing procedures are obtained using the `AdjustCIs` function wrapped in a `sapply` function.

```r
# Bonferroni, Holm, fixed-sequence, single-step Dunnett and step-down Dunnett procedures
proc = c("BonferroniAdj", "HolmAdj", "FixedSeqAdj", "DunnettAdj", "StepDownDunnettAdj")

# Equally weighted
sapply(proc, function(x) {AdjustCIs(est,
                                    proc = x,
                                    par = parameters(sd = sd,
                                                     n = n,
                                                     covprob = 0.975,
                                                     weight = weight))})
```

The output obtained is presented below:

```r
     BonferroniAdj     HolmAdj FixedSeqAdj  DunnettAdj StepDownDunnettAdj
[1,]   -0.09730247  0.00000000  0.00000000 -0.05714354         0.00000000
[2,]    0.10269753  0.00000000  0.00000000  0.14285646         0.00000000
[3,]   -0.49730247 -0.06268427 -0.06268427 -0.45714354        -0.06934203
```

Mediana/vignettes/mediana.Rmd

---
title: "Mediana: an R package for clinical trial simulations"
author: "Gautier Paux and Alex Dmitrienko"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Mediana: an R package for clinical trial simulations}
  %\VignetteEngine{knitr::rmarkdown}
  \usepackage[utf8]{inputenc}
---

# Introduction

## About

Mediana is an R package which provides a general framework for clinical trial simulations based on the Clinical Scenario Evaluation approach. The package supports a broad class of data models (including clinical trials with continuous, binary, survival-type and count-type endpoints as well as multivariate outcomes that are based on combinations of different endpoints), analysis strategies and commonly used evaluation criteria.

## Expert and development teams

**Package design**: Alex Dmitrienko (Mediana Inc.).

**Core development team**: Gautier Paux (Servier), Alex Dmitrienko (Mediana Inc.).

**Extended development team**: Thomas Brechenmacher (Novartis), Fei Chen (Johnson and Johnson), Ilya Lipkovich (Quintiles), Ming-Dauh Wang (Lilly), Jay Zhang (MedImmune), Haiyan Zheng (Osaka University).

**Expert team**: Keaven Anderson (Merck), Frank Harrell (Vanderbilt University), Mani Lakshminarayanan (Pfizer), Brian Millen (Lilly), Jose Pinheiro (Johnson and Johnson), Thomas Schmelter (Bayer).
## Installation

### Latest release

Install the latest version of the Mediana package from CRAN using the `install.packages` command in R:

```r
install.packages("Mediana")
```

Alternatively, you can download the package from the [CRAN website](https://cran.r-project.org/package=Mediana).

### Development version

The up-to-date development version can be found and installed directly from the GitHub web site. You need to install the `devtools` package and then call the `install_github` function in R:

```r
# install.packages("devtools")
devtools::install_github("gpaux/Mediana")
```

## Clinical Scenario Evaluation Framework

The Mediana R package was developed to provide a general software implementation of the Clinical Scenario Evaluation (CSE) framework. This framework, introduced by [Benda et al. (2010)](http://dij.sagepub.com/content/44/3/299.abstract) and [Friede et al. (2010)](http://dij.sagepub.com/content/44/6/713.abstract), recognizes that sample size calculation and power evaluation in clinical trials are high-dimensional statistical problems. This approach helps decompose this complex problem by identifying key elements of the evaluation process. These components are termed models:

- [Data models](#data-model) define the process of generating trial data (e.g., sample sizes, outcome distributions and parameters).
- [Analysis models](#analysis-model) define the statistical methods applied to the trial data (e.g., statistical tests, multiplicity adjustments).
- [Evaluation models](#evaluation-model) specify the measures for evaluating the performance of the analysis strategies (e.g., traditional success criteria such as marginal power or composite criteria such as disjunctive power).

Find out more about the role of each model and how to specify the three models to perform Clinical Scenario Evaluation by reviewing the dedicated pages (click on the links above).

## Case studies

Multiple case studies are provided on the [package's web site](http://gpaux.github.io/Mediana/CaseStudies.html) to facilitate the implementation of Clinical Scenario Evaluation in different clinical trial settings using the Mediana package. These case studies will be updated on a regular basis. Another vignette presenting these case studies is also available and can be accessed with the following command:

```r
vignette("case-studies", package = "Mediana")
```

The Mediana package has been successfully used in multiple clinical trials to perform power calculations as well as to optimally select trial designs and analysis strategies (clinical trial optimization). For more information on applications of the Mediana package, download the following papers:

- [Dmitrienko, A., Paux, G., Brechenmacher, T. (2016). Power calculations in clinical trials with complex clinical objectives. Journal of the Japanese Society of Computational Statistics. 28, 15-50.](https://www.jstage.jst.go.jp/article/jjscs/28/1/28_1411001_213/_article)
- [Dmitrienko, A., Paux, G., Pulkstenis, E., Zhang, J. (2016). Tradeoff-based optimization criteria in clinical trials with multiple objectives and adaptive designs. Journal of Biopharmaceutical Statistics. 26, 120-140.](http://www.tandfonline.com/doi/abs/10.1080/10543406.2015.1092032?journalCode=lbps20)

# Data model

Data models define the process of generating patient data in clinical trials.
## Initialization

A data model can be initialized using the following command:

```r
# DataModel initialization
data.model = DataModel()
```

It is highly recommended to use this command as it will simplify the process of specifying components of the data model, e.g., `OutcomeDist`, `Sample`, `SampleSize`, `Event` and `Design` objects.

## Components of a data model

Once the `DataModel` object has been initialized, components of the data model can be specified by adding objects to the model using the `+` operator as shown below.

```r
# Outcome parameter set 1
outcome1.placebo = parameters(mean = 0, sd = 70)
outcome1.treatment = parameters(mean = 40, sd = 70)

# Outcome parameter set 2
outcome2.placebo = parameters(mean = 0, sd = 70)
outcome2.treatment = parameters(mean = 50, sd = 70)

# Data model
case.study1.data.model = DataModel() +
  OutcomeDist(outcome.dist = "NormalDist") +
  SampleSize(c(50, 55, 60, 65, 70)) +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome1.placebo, outcome2.placebo)) +
  Sample(id = "Treatment",
         outcome.par = parameters(outcome1.treatment, outcome2.treatment))
```

### `OutcomeDist` object

#### Description

This object specifies the distribution of patient outcomes in a data model. An `OutcomeDist` object is defined by two arguments:

- `outcome.dist` defines the outcome distribution.
- `outcome.type` defines the outcome type (optional). There are two acceptable values of this argument: `standard` (fixed-design setting) and `event` (event-driven design setting).

Several distributions that can be specified using the `outcome.dist` argument are already implemented in the Mediana package. These distributions are listed below along with the required parameters to be included in the `outcome.par` argument of the `Sample` object:

- `UniformDist`: generate data following a **uniform distribution**. Required parameter: `max`.
- `NormalDist`: generate data following a **normal distribution**. Required parameters: `mean` and `sd`.
- `BinomDist`: generate data following a **binomial distribution**. Required parameter: `prop`.
- `BetaDist`: generate data following a **beta distribution**. Required parameters: `a` and `b`.
- `ExpoDist`: generate data following an **exponential distribution**. Required parameter: `rate`.
- `WeibullDist`: generate data following a **Weibull distribution**. Required parameters: `shape` and `scale`.
- `TruncatedExpoDist`: generate data following a **truncated exponential distribution**. Required parameters: `rate` and `trunc`.
- `PoissonDist`: generate data following a **Poisson distribution**. Required parameter: `lambda`.
- `NegBinomDist`: generate data following a **negative binomial distribution**. Required parameters: `dispersion` and `mean`.
- `MultinomialDist`: generate data following a **multinomial distribution**. Required parameter: `prob`.
- `MVNormalDist`: generate data following a **multivariate normal distribution**. Required parameters: `par` and `corr`. For each generated endpoint, the `par` parameter must contain the required parameters `mean` and `sd`. The `corr` parameter specifies the correlation matrix for the endpoints.
- `MVBinomDist`: generate data following a **multivariate binomial distribution**. Required parameters: `par` and `corr`. For each generated endpoint, the `par` parameter must contain the required parameter `prop`. The `corr` parameter specifies the correlation matrix for the endpoints.
- `MVExpoDist`: generate data following a **multivariate exponential distribution**. Required parameters: `par` and `corr`.
For each generated endpoint, the `par` parameter must contain the required parameter `rate`. The `corr` parameter specifies the correlation matrix for the endpoints.

- `MVExpoPFSOSDist`: generate data following a **multivariate exponential distribution to generate PFS and OS endpoints**. The PFS value is set to the OS value if the latter occurs earlier. Required parameters: `par` and `corr`. For each generated endpoint, the `par` parameter must contain the required parameter `rate`. The `corr` parameter specifies the correlation matrix for the endpoints.
- `MVMixedDist`: generate data following a **multivariate mixed distribution**. Required parameters: `type`, `par` and `corr`. The `type` parameter assumes the following values: `NormalDist`, `BinomDist` and `ExpoDist`. For each generated endpoint, the `par` parameter must contain the required parameters according to the distribution type. The `corr` parameter specifies the correlation matrix for the endpoints.

The `outcome.type` argument defines the outcome's type. This argument accepts only two values:

- `standard`: for the fixed design setting.
- `event`: for the event-driven design setting.

The outcome's type must be defined for each endpoint in the case of a multivariate distribution, e.g., `c("event", "event")` in the case of a multivariate exponential distribution. The `outcome.type` argument is essential to get censored events for time-to-event endpoints if the `SampleSize` object is used to specify the number of patients to generate.

A single `OutcomeDist` object can be added to a `DataModel` object.

For more information about the `OutcomeDist` object, see the documentation for [OutcomeDist](https://cran.r-project.org/package=Mediana/Mediana.pdf) on the CRAN web site.

If a certain outcome distribution is not implemented in the Mediana package, the user can create a custom function and use it within the package (see the dedicated vignette `vignette("custom-functions", package = "Mediana")`).

#### Example

Examples of `OutcomeDist` objects:

Specify popular univariate distributions:

```r
# Normal distribution
OutcomeDist(outcome.dist = "NormalDist")

# Binomial distribution
OutcomeDist(outcome.dist = "BinomDist")

# Exponential distribution
OutcomeDist(outcome.dist = "ExpoDist")
```

Specify a mixed multivariate distribution:

```r
# Multivariate Mixed distribution
OutcomeDist(outcome.dist = "MVMixedDist")
```

### `Sample` object

#### Description

This object specifies parameters of a sample (e.g., treatment arm in a trial) in a data model. Samples are defined as mutually exclusive groups of patients, for example, treatment arms. A `Sample` object is defined by three arguments:

- `id` defines the sample's unique ID (label).
- `outcome.par` defines the parameters of the outcome distribution for the sample.
- `sample.size` defines the sample's size (optional).

The `sample.size` argument is optional but must be used to define the sample size only if an unbalanced design is considered (i.e., the sample size varies across the samples). The sample size must be either defined in the `Sample` object or in the `SampleSize` object, but not in both.

Several `Sample` objects can be added to a `DataModel` object.

For more information about the `Sample` object, see the documentation [Sample](https://cran.r-project.org/package=Mediana/Mediana.pdf) on the CRAN web site.
#### Example

Examples of `Sample` objects:

Specify two samples with a continuous endpoint following a normal distribution:

```r
# Outcome parameters set 1
outcome1.placebo = parameters(mean = 0, sd = 70)
outcome1.treatment = parameters(mean = 40, sd = 70)

# Outcome parameters set 2
outcome2.placebo = parameters(mean = 0, sd = 70)
outcome2.treatment = parameters(mean = 50, sd = 70)

# Placebo sample object
Sample(id = "Placebo",
       outcome.par = parameters(outcome1.placebo, outcome2.placebo))

# Treatment sample object
Sample(id = "Treatment",
       outcome.par = parameters(outcome1.treatment, outcome2.treatment))
```

Specify two samples with a binary endpoint following a binomial distribution:

```r
# Outcome parameters set
outcome.placebo = parameters(prop = 0.30)
outcome.treatment = parameters(prop = 0.50)

# Placebo sample object
Sample(id = "Placebo",
       outcome.par = parameters(outcome.placebo))

# Treatment sample object
Sample(id = "Treatment",
       outcome.par = parameters(outcome.treatment))
```

Specify two samples with a time-to-event (survival) endpoint following an exponential distribution:

```r
# Outcome parameters
median.time.placebo = 6
rate.placebo = log(2)/median.time.placebo
outcome.placebo = parameters(rate = rate.placebo)

median.time.treatment = 9
rate.treatment = log(2)/median.time.treatment
outcome.treatment = parameters(rate = rate.treatment)

# Placebo sample object
Sample(id = "Placebo",
       outcome.par = parameters(outcome.placebo))

# Treatment sample object
Sample(id = "Treatment",
       outcome.par = parameters(outcome.treatment))
```

Specify three samples with two primary endpoints that follow a binomial and a normal distribution, respectively:

```r
# Variable types
var.type = list("BinomDist", "NormalDist")

# Outcome distribution parameters
placebo.par = parameters(parameters(prop = 0.3),
                         parameters(mean = -0.10, sd = 0.5))
dosel.par = parameters(parameters(prop = 0.40),
                       parameters(mean = -0.20, sd = 0.5))
doseh.par = parameters(parameters(prop = 0.50),
                       parameters(mean = -0.30, sd = 0.5))

# Correlation between the two endpoints
corr.matrix = matrix(c(1.0, 0.5,
                       0.5, 1.0), 2, 2)

# Outcome parameters set
outcome.placebo = parameters(type = var.type, par = placebo.par, corr = corr.matrix)
outcome.dosel = parameters(type = var.type, par = dosel.par, corr = corr.matrix)
outcome.doseh = parameters(type = var.type, par = doseh.par, corr = corr.matrix)

# Placebo sample object
Sample(id = list("Plac ACR20", "Plac HAQ-DI"),
       outcome.par = parameters(outcome.placebo))

# Low Dose sample object
Sample(id = list("DoseL ACR20", "DoseL HAQ-DI"),
       outcome.par = parameters(outcome.dosel))

# High Dose sample object
Sample(id = list("DoseH ACR20", "DoseH HAQ-DI"),
       outcome.par = parameters(outcome.doseh))
```

### `SampleSize` object

#### Description

This object specifies the sample size in a balanced trial design (all samples will have the same sample size). A `SampleSize` object is defined by one argument:

- `sample.size` specifies a list or vector of sample size(s).

A single `SampleSize` object can be added to a `DataModel` object.

For more information about the `SampleSize` object, see the package's documentation [SampleSize](https://cran.r-project.org/package=Mediana/Mediana.pdf).
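Before turning to the examples, note how this object interacts with the `outcome.type` argument described earlier: when the number of patients (rather than the number of events) drives a time-to-event simulation, `outcome.type = "event"` should be set so that the event-driven nature of the endpoint is taken into account. A minimal sketch, reusing the exponential rates from the survival example above (the sample sizes are hypothetical):

```r
# Time-to-event data model driven by the number of patients: the endpoint
# is flagged as event-type via outcome.type (hypothetical sample sizes)
data.model.tte = DataModel() +
  OutcomeDist(outcome.dist = "ExpoDist", outcome.type = "event") +
  SampleSize(c(200, 220)) +
  Sample(id = "Placebo",
         outcome.par = parameters(parameters(rate = log(2)/6))) +
  Sample(id = "Treatment",
         outcome.par = parameters(parameters(rate = log(2)/9)))
```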
#### Example

Examples of `SampleSize` objects:

Several equivalent specifications of the `SampleSize` object:

```r
SampleSize(c(50, 55, 60, 65, 70))
SampleSize(list(50, 55, 60, 65, 70))
SampleSize(seq(50, 70, 5))
```

### `Event` object

#### Description

This object specifies the total number of events (total event count) among all samples in an event-driven clinical trial. An `Event` object is defined by two arguments:

- `n.events` defines a vector of the required event counts.
- `rando.ratio` defines a vector of randomization ratios for each `Sample` object defined in the `DataModel` object.

A single `Event` object can be added to a `DataModel` object.

For more information about the `Event` object, see the package's documentation [Event](https://cran.r-project.org/package=Mediana/Mediana.pdf).

#### Example

Examples of `Event` objects:

Specify the required number of events in a trial with a 2:1 randomization ratio (Treatment:Placebo):

```r
# Event parameters
event.count.total = c(390, 420)
randomization.ratio = c(1, 2)

# Event object
Event(n.events = event.count.total,
      rando.ratio = randomization.ratio)
```

### `Design` object

#### Description

This object specifies the design parameters used in event-driven designs if the user is interested in modeling the enrollment (or accrual) and dropout (or loss to follow-up) processes. A `Design` object is defined by seven arguments:

- `enroll.period` defines the length of the enrollment period.
- `enroll.dist` defines the enrollment distribution.
- `enroll.dist.par` defines the parameters of the enrollment distribution (optional).
- `followup.period` defines the length of the follow-up period for each patient in study designs with a fixed follow-up period, i.e., the length of time from the enrollment to the planned discontinuation is constant across patients. The user must specify either `followup.period` or `study.duration`.
- `study.duration` defines the total study duration in study designs with a variable follow-up period. The total study duration is defined as the length of time from the enrollment of the first patient to the discontinuation of the last patient.
- `dropout.dist` defines the dropout distribution.
- `dropout.dist.par` defines the parameters of the dropout distribution.

Several `Design` objects can be added to a `DataModel` object.

For more information about the `Design` object, see the package's documentation [Design](https://cran.r-project.org/package=Mediana/Mediana.pdf).

A convenient way to model non-uniform enrollment is to use a beta distribution (`BetaDist`). If `enroll.dist = "BetaDist"`, the `enroll.dist.par` argument should contain the parameters of the beta distribution (`a` and `b`). These parameters must be derived according to the expected enrollment at a specific time point. For example, if half the patients are expected to be enrolled at 75% of the enrollment period, the enrollment distribution is `Beta(log(0.5)/log(0.75), 1)`.
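This choice is easy to check numerically; the short sketch below simply verifies that the `Beta(log(0.5)/log(0.75), 1)` distribution places 50% of its mass below 0.75:

```r
# Verify the beta enrollment parameters: 50% of patients are expected to
# be enrolled at 75% of the enrollment period
a = log(0.5)/log(0.75)
pbeta(0.75, a, 1)
# [1] 0.5
```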
Generally, let `q` be the proportion of patients expected to be enrolled at `100p`% of the enrollment period; the beta distribution can then be derived as follows:

- If `q < p`, the beta distribution is `Beta(a, 1)` with `a = log(q) / log(p)`.
- If `q > p`, the beta distribution is `Beta(1, b)` with `b = log(1-q) / log(1-p)`.
- Otherwise, the beta distribution is `Beta(1, 1)`, i.e., uniform enrollment.

#### Example

Examples of `Design` objects:

Specify parameters of the enrollment and dropout processes with a uniform enrollment distribution and an exponential dropout distribution:

```r
# Design parameters (in months)
Design(enroll.period = 9,
       study.duration = 21,
       enroll.dist = "UniformDist",
       dropout.dist = "ExpoDist",
       dropout.dist.par = parameters(rate = 0.0115))
```

# Analysis model

Analysis models define statistical methods (e.g., significance tests or descriptive statistics) that are applied to the study data in a clinical trial.

## Initialization

An analysis model can be initialized using the following command:

```r
# AnalysisModel initialization
analysis.model = AnalysisModel()
```

It is highly recommended to use this command to initialize an analysis model as it will simplify the process of specifying components of the analysis model, including the `MultAdj`, `MultAdjProc`, `MultAdjStrategy`, `Test` and `Statistic` objects.

## Components of an analysis model

After an `AnalysisModel` object has been initialized, components of the analysis model can be specified by adding objects to the model using the '+' operator as shown below.

```r
# Analysis model
case.study1.analysis.model = AnalysisModel() +
  Test(id = "Placebo vs treatment",
       samples = samples("Placebo", "Treatment"),
       method = "TTest") +
  Statistic(id = "Mean Treatment",
            method = "MeanStat",
            samples = samples("Treatment"))
```

### `Test` object

#### Description

This object specifies a significance test that will be applied to one or more samples defined in a data model. A `Test` object is defined by the following four arguments:

- `id` defines the test's unique ID (label).
- `method` defines the significance test.
- `samples` defines the IDs of the samples (defined in the data model) that the significance test is applied to.
- `par` defines the parameter(s) of the statistical test.

Several commonly used significance tests are already implemented in the Mediana package. In addition, the user can easily define custom significance tests (see the dedicated vignette `vignette("custom-functions", package = "Mediana")`). The built-in tests are listed below along with the required parameters that need to be included in the `par` argument:

- `TTest`: perform the **two-sample t-test** between the two samples defined in the `samples` argument. Optional parameter: `larger` (set to `TRUE` if a larger value is expected in the second sample, `FALSE` otherwise).
- `TTestNI`: perform the **non-inferiority two-sample t-test** between the two samples defined in the `samples` argument. Required parameter: `margin` (positive non-inferiority margin). Optional parameter: `larger` (set to `TRUE` if a larger value is expected in the second sample, `FALSE` otherwise).
- `WilcoxTest`: perform the **Wilcoxon-Mann-Whitney test** between the two samples defined in the `samples` argument. Optional parameter: `larger` (set to `TRUE` if a larger value is expected in the second sample, `FALSE` otherwise).
- `PropTest`: perform the **two-sample test for proportions** between the two samples defined in the `samples` argument.
Optional parameters: `yates` (Yates' continuity correction flag that is set to `TRUE` or `FALSE`) and `larger` (set to `TRUE` if a larger value is expected in the second sample, `FALSE` otherwise).
- `PropTestNI`: perform the **non-inferiority two-sample test for proportions** between the two samples defined in the `samples` argument. Required parameter: `margin` (positive non-inferiority margin). Optional parameters: `yates` (Yates' continuity correction flag that is set to `TRUE` or `FALSE`) and `larger` (set to `TRUE` if a larger value is expected in the second sample, `FALSE` otherwise).
- `FisherTest`: perform the **Fisher exact test** between the two samples defined in the `samples` argument. Optional parameter: `larger` (set to `TRUE` if a larger value is expected in the second sample, `FALSE` otherwise).
- `GLMPoissonTest`: perform the **Poisson regression test** between the two samples defined in the `samples` argument. Optional parameter: `larger` (set to `TRUE` if a larger value is expected in the second sample, `FALSE` otherwise).
- `GLMNegBinomTest`: perform the **negative binomial regression test** between the two samples defined in the `samples` argument. Optional parameter: `larger` (set to `TRUE` if a larger value is expected in the second sample, `FALSE` otherwise).
- `LogrankTest`: perform the **log-rank test** between the two samples defined in the `samples` argument. Optional parameter: `larger` (set to `TRUE` if a larger value is expected in the second sample, `FALSE` otherwise).
- `OrdinalLogisticRegTest`: perform an **ordinal logistic regression test** between the two samples defined in the `samples` argument. Optional parameter: `larger` (set to `TRUE` if a larger value is expected in the second sample, `FALSE` otherwise).

It needs to be noted that the significance tests listed above are implemented as **one-sided** tests, and thus the sample order in the `samples` argument is important. In particular, the Mediana package assumes by default that a numerically larger value of the endpoint is expected in Sample 2 compared to Sample 1. Suppose, for example, that a higher treatment response indicates a beneficial effect (e.g., a higher improvement rate). In this case, Sample 1 should include control patients whereas Sample 2 should include patients allocated to the experimental treatment arm. The sample order needs to be reversed if a beneficial treatment effect is associated with a lower value of the endpoint (e.g., lower blood pressure), as illustrated in the sketch below; alternatively (from version 1.0.6), the optional parameter `larger` can be set to `FALSE` to indicate that a larger value is expected in the first sample.

Several `Test` objects can be added to an `AnalysisModel` object.

For more information about the `Test` object, see the package's documentation [Test](https://cran.r-project.org/package=Mediana/Mediana.pdf) on the CRAN web site.
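To illustrate the sample-order convention, consider a hypothetical endpoint for which a lower value indicates a beneficial effect (e.g., blood pressure); the treatment sample is then listed first so that the larger value is expected in the second (placebo) sample:

```r
# Hypothetical endpoint where lower values indicate benefit: the treatment
# sample is listed first so that the larger value is expected in Sample 2
Test(id = "Treatment vs placebo",
     samples = samples("Treatment", "Placebo"),
     method = "TTest")
```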
#### Example

Examples of `Test` objects:

Carry out the two-sample t-test:

```r
# Placebo and Treatment samples were defined in the data model
Test(id = "Placebo vs treatment",
     samples = samples("Placebo", "Treatment"),
     method = "TTest")
```

Carry out the two-sample t-test with larger values expected in the first sample (from v1.0.6):

```r
# Placebo and Treatment samples were defined in the data model
Test(id = "Placebo vs treatment",
     samples = samples("Treatment", "Placebo"),
     method = "TTest",
     par = parameters(larger = FALSE))
```

Carry out the two-sample t-test for non-inferiority:

```r
# Placebo and Treatment samples were defined in the data model
Test(id = "Placebo vs treatment",
     samples = samples("Placebo", "Treatment"),
     method = "TTestNI",
     par = parameters(margin = 0.2))
```

Carry out the two-sample t-test with pooled samples:

```r
# Placebo M-, Placebo M+, Treatment M- and Treatment M+ samples
# were defined in the data model
Test(id = "OP test",
     samples = samples(c("Placebo M-", "Placebo M+"),
                       c("Treatment M-", "Treatment M+")),
     method = "TTest")
```

### `Statistic` object

#### Description

This object specifies a descriptive statistic that will be computed based on one or more samples defined in a data model. A `Statistic` object is defined by four arguments:

- `id` defines the descriptive statistic's unique ID (label).
- `method` defines the method for computing the statistic.
- `samples` defines the samples (pre-defined in the data model) to be used for computing the statistic.
- `par` defines the parameter(s) of the statistic.

Several methods for computing descriptive statistics are already implemented in the Mediana package, and the user can also define custom functions for computing descriptive statistics (see the dedicated vignette `vignette("custom-functions", package = "Mediana")`). These methods are shown below along with the required parameters that need to be defined in the `par` argument:

- `MedianStat`: compute the **median** of the sample defined in the `samples` argument.
- `MeanStat`: compute the **mean** of the sample defined in the `samples` argument.
- `SdStat`: compute the **standard deviation** of the sample defined in the `samples` argument.
- `MinStat`: compute the **minimum** value in the sample defined in the `samples` argument.
- `MaxStat`: compute the **maximum** value in the sample defined in the `samples` argument.
- `DiffMeanStat`: compute the **difference of means** between the two samples defined in the `samples` argument. Two samples must be defined.
- `EffectSizeContStat`: compute the **effect size** for a continuous endpoint. Two samples must be defined.
- `RatioEffectSizeContStat`: compute the **ratio of two effect sizes** for a continuous endpoint. Four samples must be defined.
- `PropStat`: compute the **proportion** of the sample defined in the `samples` argument.
- `DiffPropStat`: compute the **difference of the proportions** between the two samples defined in the `samples` argument. Two samples must be defined.
- `EffectSizePropStat`: compute the **effect size** for a binary endpoint. Two samples must be defined.
- `RatioEffectSizePropStat`: compute the **ratio of two effect sizes** for a binary endpoint. Four samples must be defined.
- `HazardRatioStat`: compute the **hazard ratio** of the two samples defined in the `samples` argument. Two samples must be defined. By default the Log-Rank method is used. Optional parameter: `method`, taking the value `Log-Rank` or `Cox`.
- `EffectSizeEventStat`: compute the **effect size** for a survival endpoint (log of the hazard ratio). Two samples must be defined. By default the Log-Rank method is used. Optional parameter: `method`, taking the value `Log-Rank` or `Cox`.
- `RatioEffectSizeEventStat`: compute the **ratio of two effect sizes** for a survival endpoint. Four samples must be defined. By default the Log-Rank method is used. Optional parameter: `method`, taking the value `Log-Rank` or `Cox`.
- `EventCountStat`: compute the **number of events** observed in the sample(s) defined in the `samples` argument.
- `PatientCountStat`: compute the **number of patients** observed in the sample(s) defined in the `samples` argument.

Several `Statistic` objects can be added to an `AnalysisModel` object.

For more information about the `Statistic` object, see the R documentation [Statistic](https://cran.r-project.org/package=Mediana/Mediana.pdf).

#### Example

Examples of `Statistic` objects:

Compute the mean of a single sample:

```r
# Treatment sample was defined in the data model
Statistic(id = "Mean Treatment",
          method = "MeanStat",
          samples = samples("Treatment"))
```

### `MultAdjProc` object

#### Description

This object specifies a multiplicity adjustment procedure that will be applied to the significance tests in order to protect the overall Type I error rate. A `MultAdjProc` object is defined by three arguments:

- `proc` defines a multiplicity adjustment procedure.
- `par` defines the parameter(s) of the multiplicity adjustment procedure (optional).
- `tests` defines the specific tests (defined in the analysis model) to which the multiplicity adjustment procedure will be applied.

If no `tests` are defined, the multiplicity adjustment procedure will be applied to all tests defined in the `AnalysisModel` object.

Several commonly used multiplicity adjustment procedures are included in the Mediana package. In addition, the user can easily define custom multiplicity adjustments. The built-in multiplicity adjustments are defined below along with the required parameters that need to be included in the `par` argument:

- `BonferroniAdj`: **Bonferroni** procedure. Optional parameter: `weight` (vector of hypothesis weights).
- `HolmAdj`: **Holm** procedure. Optional parameter: `weight` (vector of hypothesis weights).
- `HochbergAdj`: **Hochberg** procedure. Optional parameter: `weight` (vector of hypothesis weights).
- `HommelAdj`: **Hommel** procedure. Optional parameter: `weight` (vector of hypothesis weights).
- `FixedSeqAdj`: **Fixed-sequence** procedure.
- `ChainAdj`: Family of **chain procedures**. Required parameters: `weight` (vector of hypothesis weights) and `transition` (matrix of transition parameters).
- `FallbackAdj`: **Fallback** procedure. Required parameter: `weight` (vector of hypothesis weights).
- `NormalParamAdj`: **Parametric multiple testing procedure** derived from a multivariate normal distribution. Required parameter: `corr` (correlation matrix of the multivariate normal distribution). Optional parameter: `weight` (vector of hypothesis weights).
- `ParallelGatekeepingAdj`: Family of **parallel gatekeeping procedures**. Required parameters: `family` (vectors of hypotheses included in each family), `proc` (vector of procedure names applied to each family) and `gamma` (vector of truncation parameters).
- `MultipleSequenceGatekeepingAdj`: Family of **multiple-sequence gatekeeping procedures**.
Required parameters: `family` (vectors of hypotheses included in each family), `proc` (vector of procedure names applied to each family) and `gamma` (vector of truncation parameters).
- `MixtureGatekeepingAdj`: Family of **mixture-based gatekeeping procedures**. Required parameters: `family` (vectors of hypotheses included in each family), `proc` (vector of procedure names applied to each family), `gamma` (vector of truncation parameters), `serial` (matrix of indicators) and `parallel` (matrix of indicators).

Several `MultAdjProc` objects can be added to an `AnalysisModel` object using the '+' operator or by grouping them into a `MultAdj` object.

For more information about the `MultAdjProc` object, see the package's documentation [MultAdjProc](https://cran.r-project.org/package=Mediana/Mediana.pdf).

#### Example

Examples of `MultAdjProc` objects:

Apply a multiplicity adjustment based on the chain procedure:

```r
# Parameters of the chain procedure (equivalent to a fixed-sequence procedure)
# Vector of hypothesis weights
chain.weight = c(1, 0)
# Matrix of transition parameters
chain.transition = matrix(c(0, 1,
                            0, 0), 2, 2, byrow = TRUE)

# MultAdjProc
MultAdjProc(proc = "ChainAdj",
            par = parameters(weight = chain.weight,
                             transition = chain.transition))
```

The implementation of this particular procedure is facilitated by the `FixedSeqAdj` method introduced in version 1.0.4:

```r
# MultAdjProc
MultAdjProc(proc = "FixedSeqAdj")
```

Apply a multiple-sequence gatekeeping procedure:

```r
# Parameters of the multiple-sequence gatekeeping procedure
# Tests to which the multiplicity adjustment will be applied
# (defined in the AnalysisModel)
test.list = tests("Pl vs DoseH - ACR20",
                  "Pl vs DoseL - ACR20",
                  "Pl vs DoseH - HAQ-DI",
                  "Pl vs DoseL - HAQ-DI")

# Hypotheses included in each family (each number corresponds to the
# position of the test in the test.list vector)
family = families(family1 = c(1, 2), family2 = c(3, 4))

# Component procedure of each family
component.procedure = families(family1 = "HolmAdj", family2 = "HolmAdj")

# Truncation parameter of each family
gamma = families(family1 = 0.8, family2 = 1)

# MultAdjProc
MultAdjProc(proc = "MultipleSequenceGatekeepingAdj",
            par = parameters(family = family,
                             proc = component.procedure,
                             gamma = gamma),
            tests = test.list)
```

### `MultAdjStrategy` object

#### Description

This object specifies a multiplicity adjustment strategy that can include several multiplicity adjustment procedures. A multiplicity adjustment strategy may be defined when the same Clinical Scenario Evaluation approach is applied to several clinical trials. A `MultAdjStrategy` object serves as a wrapper for several `MultAdjProc` objects, as shown in the sketch below.

For more information about the `MultAdjStrategy` object, see the package's documentation [MultAdjStrategy](https://cran.r-project.org/package=Mediana/Mediana.pdf).
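As a minimal sketch (with hypothetical test IDs), a strategy can simply wrap one `MultAdjProc` object per trial; a complete gatekeeping-based example follows.

```r
# Minimal sketch with hypothetical test IDs: one Holm adjustment per trial,
# combined into a single multiplicity adjustment strategy
mult.adj.trial1 = MultAdjProc(proc = "HolmAdj",
                              tests = tests("Trial 1 - Test 1", "Trial 1 - Test 2"))
mult.adj.trial2 = MultAdjProc(proc = "HolmAdj",
                              tests = tests("Trial 2 - Test 1", "Trial 2 - Test 2"))
mult.adj.strategy = MultAdjStrategy(mult.adj.trial1, mult.adj.trial2)
```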
#### Example

Example of a `MultAdjStrategy` object:

Perform complex multiplicity adjustments based on gatekeeping procedures in two clinical trials with three endpoints:

```r
# Parallel gatekeeping procedure parameters
family = families(family1 = c(1), family2 = c(2, 3))
component.procedure = families(family1 = "HolmAdj", family2 = "HolmAdj")
gamma = families(family1 = 0.8, family2 = 1)

# Parallel gatekeeping procedure for Trial A
mult.adj.trialA = MultAdjProc(proc = "ParallelGatekeepingAdj",
                              par = parameters(family = family,
                                               proc = component.procedure,
                                               gamma = gamma),
                              tests = tests("Trial A Pla vs Trt End1",
                                            "Trial A Pla vs Trt End2",
                                            "Trial A Pla vs Trt End3"))

# Parallel gatekeeping procedure for Trial B
mult.adj.trialB = MultAdjProc(proc = "ParallelGatekeepingAdj",
                              par = parameters(family = family,
                                               proc = component.procedure,
                                               gamma = gamma),
                              tests = tests("Trial B Pla vs Trt End1",
                                            "Trial B Pla vs Trt End2",
                                            "Trial B Pla vs Trt End3"))

# Analysis model
analysis.model = AnalysisModel() +
  MultAdjStrategy(mult.adj.trialA, mult.adj.trialB) +
  # Tests for study A
  Test(id = "Trial A Pla vs Trt End1",
       method = "PropTest",
       samples = samples("Trial A Plac End1", "Trial A Trt End1")) +
  Test(id = "Trial A Pla vs Trt End2",
       method = "TTest",
       samples = samples("Trial A Plac End2", "Trial A Trt End2")) +
  Test(id = "Trial A Pla vs Trt End3",
       method = "TTest",
       samples = samples("Trial A Plac End3", "Trial A Trt End3")) +
  # Tests for study B
  Test(id = "Trial B Pla vs Trt End1",
       method = "PropTest",
       samples = samples("Trial B Plac End1", "Trial B Trt End1")) +
  Test(id = "Trial B Pla vs Trt End2",
       method = "TTest",
       samples = samples("Trial B Plac End2", "Trial B Trt End2")) +
  Test(id = "Trial B Pla vs Trt End3",
       method = "TTest",
       samples = samples("Trial B Plac End3", "Trial B Trt End3"))
```

### `MultAdj` object

#### Description

This object can be used to combine several `MultAdjProc` or `MultAdjStrategy` objects and add them as a single object to an `AnalysisModel` object. This object is provided mainly for convenience and its use is optional. Alternatively, `MultAdjProc` or `MultAdjStrategy` objects can be added to an `AnalysisModel` object incrementally using the '+' operator.

For more information about the `MultAdj` object, see the package's documentation [MultAdj](https://cran.r-project.org/package=Mediana/Mediana.pdf).
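A common use of this object is to compare candidate multiplicity adjustments within a single Clinical Scenario Evaluation run. The sketch below additionally assumes, as in the package's case studies, that `proc = NA` requests an unadjusted analysis, so that the Holm procedure can be contrasted with no adjustment:

```r
# Sketch: contrast a Holm-adjusted analysis with an unadjusted analysis
# (proc = NA is assumed here to request no multiplicity adjustment)
mult.adj.none = MultAdjProc(proc = NA)
mult.adj.holm = MultAdjProc(proc = "HolmAdj")
mult.adj = MultAdj(mult.adj.none, mult.adj.holm)
```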
#### Example

Example of a `MultAdj` object:

Perform Clinical Scenario Evaluation to compare three candidate multiplicity adjustment procedures:

```r
# Multiplicity adjustments to compare
mult.adj1 = MultAdjProc(proc = "BonferroniAdj")
mult.adj2 = MultAdjProc(proc = "HolmAdj")
mult.adj3 = MultAdjProc(proc = "HochbergAdj")

# Analysis model
analysis.model = AnalysisModel() +
  MultAdj(mult.adj1, mult.adj2, mult.adj3) +
  Test(id = "Pl vs Dose L",
       samples = samples("Placebo", "Dose L"),
       method = "TTest") +
  Test(id = "Pl vs Dose M",
       samples = samples("Placebo", "Dose M"),
       method = "TTest") +
  Test(id = "Pl vs Dose H",
       samples = samples("Placebo", "Dose H"),
       method = "TTest")

# Note that the code presented above is equivalent to:
analysis.model = AnalysisModel() +
  mult.adj1 + mult.adj2 + mult.adj3 +
  Test(id = "Pl vs Dose L",
       samples = samples("Placebo", "Dose L"),
       method = "TTest") +
  Test(id = "Pl vs Dose M",
       samples = samples("Placebo", "Dose M"),
       method = "TTest") +
  Test(id = "Pl vs Dose H",
       samples = samples("Placebo", "Dose H"),
       method = "TTest")
```

# Evaluation model

Evaluation models are used within the Mediana package to specify the success criteria or metrics for evaluating the performance of the selected clinical scenario (combination of data and analysis models).

## Initialization

An evaluation model can be initialized using the following command:

```r
# EvaluationModel initialization
evaluation.model = EvaluationModel()
```

It is highly recommended to use this command to initialize an evaluation model because it simplifies the process of specifying components of the evaluation model such as `Criterion` objects.

## Components of an evaluation model

After an `EvaluationModel` object has been initialized, components of the evaluation model can be specified by adding objects to the model using the '+' operator as shown below.

```r
# Evaluation model
case.study1.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs treatment"),
            labels = c("Placebo vs treatment"),
            par = parameters(alpha = 0.025)) +
  Criterion(id = "Average Mean",
            method = "MeanSumm",
            statistics = statistics("Mean Treatment"),
            labels = c("Average Mean Treatment"))
```

### `Criterion` object

#### Description

This object specifies a success criterion that will be applied to a clinical scenario to evaluate the performance of the selected analysis methods. A `Criterion` object is defined by six arguments:

- `id` defines the criterion's unique ID (label).
- `method` defines the criterion.
- `tests` defines the IDs of the significance tests (defined in the analysis model) that the criterion is applied to.
- `statistics` defines the IDs of the descriptive statistics (defined in the analysis model) that the criterion is applied to.
- `par` defines the parameter(s) of the criterion.
- `labels` defines the label(s) of the criterion values (the label(s) will be used in the simulation report).

Several commonly used success criteria are implemented in the Mediana package. The user can also define custom success criteria. The built-in success criteria are listed below along with the required parameters that need to be included in the `par` argument:

- `MarginalPower`: compute the marginal power of all tests included in the `tests` argument. Required parameter: `alpha` (significance level used in each test).
- `WeightedPower`: compute the weighted power of all tests included in the `tests` argument.
Required parameters: `alpha` (significance level used in each test) and `weight` (vector of weights assigned to the significance tests).
- `DisjunctivePower`: compute the disjunctive power (probability of achieving statistical significance in at least one test included in the `tests` argument). Required parameter: `alpha` (significance level used in each test).
- `ConjunctivePower`: compute the conjunctive power (probability of achieving statistical significance in all tests included in the `tests` argument). Required parameter: `alpha` (significance level used in each test).
- `ExpectedRejPower`: compute the expected number of statistically significant tests. Required parameter: `alpha` (significance level used in each test).

Several `Criterion` objects can be added to an `EvaluationModel` object.

For more information about the `Criterion` object, see the package's documentation [Criterion](https://cran.r-project.org/package=Mediana/Mediana.pdf).

If a certain success criterion is not implemented in the Mediana package, the user can create a custom function and use it within the package (see the dedicated vignette `vignette("custom-functions", package = "Mediana")`).

#### Examples

Examples of `Criterion` objects:

Compute marginal power with alpha = 0.025:

```r
Criterion(id = "Marginal power",
          method = "MarginalPower",
          tests = tests("Placebo vs treatment"),
          labels = c("Placebo vs treatment"),
          par = parameters(alpha = 0.025))
```

Compute weighted power with alpha = 0.025 and unequal test-specific weights:

```r
Criterion(id = "Weighted power",
          method = "WeightedPower",
          tests = tests("Placebo vs treatment - Endpoint 1",
                        "Placebo vs treatment - Endpoint 2"),
          labels = c("Weighted power"),
          par = parameters(alpha = 0.025,
                           weight = c(2/3, 1/3)))
```

Compute disjunctive power with alpha = 0.025:

```r
Criterion(id = "Disjunctive power",
          method = "DisjunctivePower",
          tests = tests("Placebo vs Dose H",
                        "Placebo vs Dose M",
                        "Placebo vs Dose L"),
          labels = c("Disjunctive power"),
          par = parameters(alpha = 0.025))
```

# Clinical Scenario Evaluation

Clinical Scenario Evaluation (CSE) is performed based on the data, analysis and evaluation models as well as simulation parameters specified by the user. The simulation parameters are defined using the `SimParameters` object.

## Clinical Scenario Evaluation objects

### `SimParameters` object

#### Description

The `SimParameters` object is a required argument of the `CSE` function and has the following arguments:

- `n.sims` defines the number of simulations.
- `seed` defines the seed to be used in the simulations.
- `proc.load` defines the processor load in parallel computations.

The `proc.load` argument is used to define the number of processor cores dedicated to the simulations. A numeric value can be defined, as well as a character value which automatically detects the number of cores:

- `low`: 1 processor core.
- `med`: number of available processor cores / 2.
- `high`: number of available processor cores - 1.
- `full`: all available processor cores.

#### Examples

Examples of `SimParameters` object specifications:

Perform 10000 simulations using all available processor cores:

```r
SimParameters(n.sims = 10000, proc.load = "full", seed = 42938001)
```

Perform 10000 simulations using 2 processor cores:

```r
SimParameters(n.sims = 10000, proc.load = 2, seed = 42938001)
```

### `CSE` function

#### Description

The `CSE` function is invoked to run simulations under the Clinical Scenario Evaluation approach.
This function uses four arguments:

- `data` defines a `DataModel` object.
- `analysis` defines an `AnalysisModel` object.
- `evaluation` defines an `EvaluationModel` object.
- `simulation` defines a `SimParameters` object.

#### Examples

The following example illustrates the use of the `CSE` function:

```r
# Outcome parameter set 1
outcome1.placebo = parameters(mean = 0, sd = 70)
outcome1.treatment = parameters(mean = 40, sd = 70)

# Outcome parameter set 2
outcome2.placebo = parameters(mean = 0, sd = 70)
outcome2.treatment = parameters(mean = 50, sd = 70)

# Data model
case.study1.data.model = DataModel() +
  OutcomeDist(outcome.dist = "NormalDist") +
  SampleSize(c(50, 55, 60, 65, 70)) +
  Sample(id = "Placebo",
         outcome.par = parameters(outcome1.placebo, outcome2.placebo)) +
  Sample(id = "Treatment",
         outcome.par = parameters(outcome1.treatment, outcome2.treatment))

# Analysis model
case.study1.analysis.model = AnalysisModel() +
  Test(id = "Placebo vs treatment",
       samples = samples("Placebo", "Treatment"),
       method = "TTest")

# Evaluation model
case.study1.evaluation.model = EvaluationModel() +
  Criterion(id = "Marginal power",
            method = "MarginalPower",
            tests = tests("Placebo vs treatment"),
            labels = c("Placebo vs treatment"),
            par = parameters(alpha = 0.025))

# Simulation parameters
case.study1.sim.parameters = SimParameters(n.sims = 1000,
                                           proc.load = 2,
                                           seed = 42938001)

# Perform Clinical Scenario Evaluation
case.study1.results = CSE(case.study1.data.model,
                          case.study1.analysis.model,
                          case.study1.evaluation.model,
                          case.study1.sim.parameters)
```

### Summary of results

Once Clinical Scenario Evaluation-based simulations have been run, the `CSE` object returned by the `CSE` function contains a list with the following components:

- `simulation.results`: a data frame containing the results of the simulations for each scenario.
- `analysis.scenario.grid`: a data frame containing the grid of the combinations of data and analysis scenarios.
- `data.structure`: a list containing the data structure according to the `DataModel` object.
- `analysis.structure`: a list containing the analysis structure according to the `AnalysisModel` object.
- `evaluation.structure`: a list containing the evaluation structure according to the `EvaluationModel` object.
- `sim.parameters`: a list containing the simulation parameters according to the `SimParameters` object.
- `timestamp`: a list containing information about the start time, end time and duration of the simulation runs.

The simulation results can be summarized in the R console using the `summary` function:

```r
summary(case.study1.results)
```

A Microsoft Word-based simulation report can be generated from the simulation results produced by the `CSE` function using the `GenerateReport` function; see [Simulation report](#simulation-report).

# Simulation report

The Mediana R package uses the [officer](http://davidgohel.github.io/officer/) R package to generate a Microsoft Word-based report that summarizes the results of Clinical Scenario Evaluation-based simulations. The user can easily customize this simulation report by adding a description of the project as well as labels to each scenario, including data scenarios (sample size, outcome distribution parameters, design parameters) and analysis scenarios (multiplicity adjustment). The user can also customize the report's structure, e.g., create sections and subsections within the report and specify how the rows will be sorted within each table.
In order to customize the report, the user has to use a `PresentationModel` object, described below. Once a `PresentationModel` object has been defined, the `GenerateReport` function can be called to generate a Clinical Scenario Evaluation report.

## Initialization

A presentation model can be initialized using the following command:

```r
# PresentationModel initialization
presentation.model = PresentationModel()
```

Initialization with this command is highly recommended as it will simplify the process of adding related objects, e.g., the `Project`, `Section`, `Subsection`, `Table` and `CustomLabel` objects.

## Specific objects

Once the `PresentationModel` object has been initialized, specific objects can be added by simply using the '+' operator, as in the data, analysis and evaluation models.

### `Project` object

#### Description

This object specifies a description of the project. The `Project` object is defined by three optional arguments:

- `username` defines the username to be included in the report (by default, the username is "[Unknown User]").
- `title` defines the project's title in the report (the default value is "[Unknown title]").
- `description` defines the project's description (the default value is "[No description]").

This information will be added to the report generated using the `GenerateReport` function.

A single object of the `Project` class can be added to an object of the `PresentationModel` class.

#### Examples

A simple `Project` object can be created as follows:

```r
Project(username = "Gautier Paux",
        title = "Case study 1",
        description = "Clinical trial in patients with pulmonary arterial hypertension")
```

### `Section` object

#### Description

This object specifies the sections that will be created within the simulation report. A `Section` object is defined by a single argument:

- `by` defines the rules for setting up sections.

The `by` argument can contain several parameters from the following list:

- `sample.size`: a separate section will be created for each sample size.
- `event`: a separate section will be created for each event count.
- `outcome.parameter`: a separate section will be created for each outcome parameter scenario.
- `design.parameter`: a separate section will be created for each design parameter scenario.
- `multiplicity.adjustment`: a separate section will be created for each multiplicity adjustment scenario.

Note that, if a parameter is defined in the `by` argument, it must be defined only in this object (i.e., neither in the `Subsection` object nor in the `Table` object).

A single object of the `Section` class can be added to an object of the `PresentationModel` class.

#### Examples

A `Section` object can be defined as follows:

Create a separate section within the report for each outcome parameter scenario:

```r
Section(by = "outcome.parameter")
```

Create a separate section for each unique combination of the sample size and outcome parameter scenarios:

```r
Section(by = c("sample.size", "outcome.parameter"))
```

### `Subsection` object

#### Description

This object specifies the rules for creating subsections within the simulation report. A `Subsection` object is defined by a single argument:

- `by` defines the rules for creating subsections.

The `by` argument can contain several parameters from the following list:

- `sample.size`: a separate subsection will be created for each sample size.
- `event`: a separate subsection will be created for each number of events.
- `outcome.parameter`: a separate subsection will be created for each outcome parameter scenario.
- `design.parameter`: a separate subsection will be created for each design parameter scenario.
- `multiplicity.adjustment`: a separate subsection will be created for each multiplicity adjustment scenario.

As before, if a parameter is defined in the `by` argument, it must be defined only in this object (i.e., neither in the `Section` object nor in the `Table` object).

A single object of the `Subsection` class can be added to an object of the `PresentationModel` class.

#### Examples

`Subsection` objects can be set up as follows:

Create a separate subsection for each sample size scenario:

```r
Subsection(by = "sample.size")
```

Create a separate subsection for each unique combination of the sample size and outcome parameter scenarios:

```r
Subsection(by = c("sample.size", "outcome.parameter"))
```

### `Table` object

#### Description

This object specifies how the summary tables will be sorted within the report. A `Table` object is defined by a single argument:

- `by` defines how the tables of the report will be sorted.

The `by` argument can contain several parameters from the following list:

- `sample.size`: the tables will be sorted by the sample size.
- `event`: the tables will be sorted by the number of events.
- `outcome.parameter`: the tables will be sorted by the outcome parameter scenario.
- `design.parameter`: the tables will be sorted by the design parameter scenario.
- `multiplicity.adjustment`: the tables will be sorted by the multiplicity adjustment scenario.

If a parameter is defined in the `by` argument, it must be defined only in this object (i.e., neither in the `Section` object nor in the `Subsection` object).

A single object of the `Table` class can be added to an object of the `PresentationModel` class.

#### Examples

Examples of `Table` objects:

Create a summary table sorted by sample size scenarios:

```r
Table(by = "sample.size")
```

Create a summary table sorted by sample size and outcome parameter scenarios:

```r
Table(by = c("sample.size", "outcome.parameter"))
```

### `CustomLabel` object

#### Description

This object specifies the labels that will be assigned to sets of parameter values or simulation scenarios. These labels will be used in the section and subsection titles of the Clinical Scenario Evaluation report as well as in the summary tables. A `CustomLabel` object is defined by two arguments:

- `param` defines a parameter (scenario) to which the current set of labels will be assigned.
- `label` defines the label(s) to assign to each value of the parameter.

The `param` argument can contain several parameters from the following list:

- `sample.size`: labels will be applied to the sample size values.
- `event`: labels will be applied to the number-of-events values.
- `outcome.parameter`: labels will be applied to the outcome parameter scenarios.
- `design.parameter`: labels will be applied to the design parameter scenarios.
- `multiplicity.adjustment`: labels will be applied to the multiplicity adjustment scenarios.

Several objects of the `CustomLabel` class can be added to an object of the `PresentationModel` class.
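For instance, if the analysis model compares several multiplicity adjustment procedures, each scenario can be given a readable label; a minimal sketch with hypothetical labels:

```r
# Hypothetical labels for three multiplicity adjustment scenarios defined
# in the analysis model
CustomLabel(param = "multiplicity.adjustment",
            label = c("Bonferroni", "Holm", "Hochberg"))
```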
#### Examples

Examples of `CustomLabel` objects:

Assign custom labels to the sample size values:

```r
CustomLabel(param = "sample.size",
            label = paste0("N = ", c(50, 55, 60, 65, 70)))
```

Assign custom labels to the outcome parameter scenarios:

```r
CustomLabel(param = "outcome.parameter",
            label = c("Pessimistic", "Expected", "Optimistic"))
```

## `GenerateReport` function

### Description

The Clinical Scenario Evaluation report is generated using the `GenerateReport` function. This function has four arguments:

- `presentation.model` defines a `PresentationModel` object.
- `cse.results` defines a `CSE` object returned by the `CSE` function.
- `report.filename` defines the filename of the Word-based report generated by this function.
- `report.template` defines a Word-based template (this is an optional argument).

The `GenerateReport` function requires the [officer](http://davidgohel.github.io/officer/) R package to generate a Word-based simulation report. Optionally, a custom template can be selected by defining the `report.template` argument, which specifies the name of a Word document located in the working directory.

The Word-based simulation report is structured as follows:

1. GENERAL INFORMATION
    1. PROJECT INFORMATION
    2. SIMULATION PARAMETERS
2. DATA MODEL
    1. DESIGN (if a `Design` object has been defined)
    2. SAMPLE SIZE (or EVENT if an `Event` object has been defined)
    3. OUTCOME DISTRIBUTION
    4. DESIGN
3. ANALYSIS MODEL
    1. TESTS
    2. MULTIPLICITY ADJUSTMENT
4. EVALUATION MODEL
    1. CRITERIA
5. RESULTS
    1. SECTION (if a `Section` object has been defined)
        1. SUBSECTION (if a `Subsection` object has been defined)
        2. ...
    2. ...

### Examples

This example illustrates the use of the `GenerateReport` function:

```r
# Define a presentation model
case.study1.presentation.model = PresentationModel() +
  Section(by = "outcome.parameter") +
  Table(by = "sample.size") +
  CustomLabel(param = "sample.size",
              label = paste0("N = ", c(50, 55, 60, 65, 70))) +
  CustomLabel(param = "outcome.parameter",
              label = c("Standard 1", "Standard 2"))

# Report generation
GenerateReport(presentation.model = case.study1.presentation.model,
               cse.results = case.study1.results,
               report.filename = "Case study 1 (normally distributed endpoint).docx")
```

Mediana/vignettes/style.css

body { position: relative; }

ul.nav-pills {
  top: 20px;
  position: fixed;
}

.fixed { position: fixed; }

/* sidebar */
.bs-docs-sidebar {
  padding-left: 50px;
  margin-top: 10px;
  margin-bottom: 10px;
}

/* all links */
.bs-docs-sidebar .nav>li>a {
  color: #999;
  border-left: 2px solid transparent;
  padding: 1px 20px;
  font-size: 12px;
  font-weight: 400;
}

/* nested links */
.bs-docs-sidebar .nav .nav>li>a {
  padding-top: 1px;
  padding-bottom: 1px;
  padding-left: 30px;
  font-size: 12px;
}

/* active & hover links */
.bs-docs-sidebar .nav>.active>a,
.bs-docs-sidebar .nav>li>a:hover,
.bs-docs-sidebar .nav>li>a:focus {
  color: #053552;
  text-decoration: none;
  background-color: transparent;
  border-left-color: #053552;
}

/* all active links */
.bs-docs-sidebar .nav>.active>a,
.bs-docs-sidebar .nav>.active:hover>a,
.bs-docs-sidebar .nav>.active:focus>a {
  font-weight: 700;
}

/* nested active links */
.bs-docs-sidebar .nav .nav>.active>a,
.bs-docs-sidebar .nav .nav>.active:hover>a,
.bs-docs-sidebar .nav .nav>.active:focus>a {
  font-weight: 500;
}

/* hide inactive nested list */
.bs-docs-sidebar .nav ul.nav { display: none; }

/* show active nested list */
.bs-docs-sidebar .nav>.active>ul.nav { display: block; }

/* Header formatting */
.MainContent { margin-top: -20px; }

/* back to top */
.back-to-top {
  color: #999;
  border-left: 2px solid transparent;
  padding: 1px 20px;
  font-size: 12px;
  font-weight: 400;
}

.bs-docs-footer {
  padding-top: 40px;
  padding-bottom: 40px;
  margin-top: 100px;
  color: #777;
  text-align: center;
  border-top: 1px solid #e5e5e5;
}

.bs-docs-footer-links {
  padding-left: 0;
  margin-top: 20px;
  color: #999;
}

.bs-docs-footer-links li {
  display: inline;
  padding: 0 2px;
}

.bs-docs-footer-links li:first-child { padding-left: 0; }

@media (min-width: 768px) {
  .bs-docs-footer p { margin-bottom: 0; }
}

.bs-docs-social {
  margin-bottom: 20px;
  text-align: center;
}

.bs-docs-social-buttons {
  display: inline-block;
  padding-left: 0;
  margin-bottom: 0;
  list-style: none;
}

.bs-docs-social-buttons li {
  display: inline-block;
  padding: 5px 8px;
  line-height: 1;
}

.bs-docs-social-buttons .twitter-follow-button { width: 225px !important; }

.bs-docs-social-buttons .twitter-share-button { width: 98px !important; }

.github-btn {
  overflow: hidden;
  border: 0;
}

hr {
  display: block;
  height: 1px;
  border: 0;
  border-top: 1px solid #053552;
  margin: 1em 0;
  padding: 0;
}

h1 {
  color: #053552;
  font-size: 40px;
}

h2, h3, h4, h5 { color: #053552; }

h2:after {
  content: ' ';
  display: block;
  border: 1px solid #053552;
  margin-top: 10px;
}

h3:after {
  content: ' ';
  display: block;
  border: 1px solid #eee;
  margin-top: 10px;
}

.product .img-responsive { margin: 0 auto; }

Mediana/README.md

# Mediana

[![CRAN\_Status\_Badge](http://www.r-pkg.org/badges/version/Mediana)](https://cran.r-project.org/package=Mediana)
[![CRAN\_Logs\_Badge](http://cranlogs.r-pkg.org/badges/Mediana)](https://cran.r-project.org/package=Mediana)
[![CRAN\_Logs\_Badge\_Total](http://cranlogs.r-pkg.org/badges/grand-total/Mediana)](https://cran.r-project.org/package=Mediana)

`Mediana` is an R package which provides a general framework for clinical trial simulations based on the Clinical Scenario Evaluation approach. The package supports a broad class of data models (including clinical trials with continuous, binary, survival-type and count-type endpoints as well as multivariate outcomes that are based on combinations of different endpoints), analysis strategies and commonly used evaluation criteria. Find out more at <http://gpaux.github.io/Mediana/> and check out the case studies.

# Installation

Get the released version from CRAN:

``` r
install.packages("Mediana")
```

Or the development version from GitHub:

``` r
# install.packages("devtools")
devtools::install_github("gpaux/Mediana", build_opts = NULL)
```

## Vignettes

`Mediana` includes three vignettes, in particular an introduction to the package and several case studies:

``` r
vignette(topic = "mediana", package = "Mediana")
vignette(topic = "case-studies", package = "Mediana")
```

# Online Manual

A detailed online manual is accessible at <http://gpaux.github.io/Mediana/>.

# References

## Clinical trial optimization using R book

[Clinical Trial Optimization Using R](https://www.crcpress.com/Clinical-Trial-Optimization-using-R/Dmitrienko/p/book/9781498735070) explores a unified and broadly applicable framework for optimizing decision making and strategy selection in clinical development, through a series of examples and case studies. It provides the clinical researcher with a powerful evaluation paradigm, as well as supportive R tools, to evaluate and select among simultaneous competing designs or analysis options. It is applicable broadly to statisticians and other quantitative clinical trialists who have an interest in optimizing clinical trials, clinical trial programs, or associated analytics and decision making.
This book presents in depth the Clinical Scenario Evaluation (CSE) framework, and discusses optimization strategies, including the quantitative assessment of tradeoffs. A variety of common development challenges are evaluated as case studies, and used to show how this framework both simplifies and optimizes strategy selection. Specific settings include optimizing adaptive designs, multiplicity and subgroup analysis strategies, and overall development decision-making criteria around Go/No-Go. After this book, the reader will be equipped to extend the CSE framework to their particular development challenges as well.

The `Mediana` R package has been widely used to implement the case studies presented in this book. The detailed description and R code of these case studies are available on this website.

## Publications

The `Mediana` package has been successfully used in multiple clinical trials to perform power calculations as well as to optimally select trial designs and analysis strategies (clinical trial optimization). For more information on applications of the `Mediana` package, download the following papers:

- Dmitrienko, A., Paux, G., Brechenmacher, T. (2016). Power calculations in clinical trials with complex clinical objectives. Journal of the Japanese Society of Computational Statistics. 28, 15-50.
- Dmitrienko, A., Paux, G., Pulkstenis, E., Zhang, J. (2016). Tradeoff-based optimization criteria in clinical trials with multiple objectives and adaptive designs. Journal of Biopharmaceutical Statistics. 26, 120-140.
- Paux, G. and Dmitrienko, A. (2018). Penalty-based approaches to evaluating multiplicity adjustments in clinical trials: Traditional multiplicity problems. Journal of Biopharmaceutical Statistics. 28, 146-168.
- Paux, G. and Dmitrienko, A. (2018). Penalty-based approaches to evaluating multiplicity adjustments in clinical trials: Advanced multiplicity problems. Journal of Biopharmaceutical Statistics. 28, 169-188.

# Citation

If you find `Mediana` useful, please cite it in your publications:

``` r
citation("Mediana")
#> 
#> To cite package 'Mediana' in publications use:
#> 
#>   Gautier Paux and Alex Dmitrienko. (2019). Mediana: Clinical Trial Simulations. R
#>   package version 1.0.8. http://gpaux.github.io/Mediana/
#> 
#> A BibTeX entry for LaTeX users is
#> 
#>   @Manual{,
#>     title = {Mediana: Clinical Trial Simulations},
#>     author = {Gautier Paux and Alex Dmitrienko.},
#>     year = {2019},
#>     note = {R package version 1.0.8},
#>     url = {http://gpaux.github.io/Mediana/},
#>   }
#> 
#> ATTENTION: This citation information has been auto-generated from the package DESCRIPTION
#> file and may need manual editing, see 'help("citation")'.
```
*R/PresentationModel.CustomLabel.R 5f7702d74d69d62c8fdddac266a13c6a *R/PresentationModel.Project.R c1f499c7ea14954c168661643fdb4bd7 *R/PresentationModel.R 4b733c338d8aca70a7cbdf747d479fb3 *R/PresentationModel.Section.R d58c2bfd91a2259ec44b21ef0cf57f27 *R/PresentationModel.Subsection.R 5ff2cc80397e5e73cf248326d38bdf53 *R/PresentationModel.Table.R 0221382bec1a110500fc5853ae667148 *R/PresentationModel.default.R 1c4adcc90e177a6446c46aa88009c322 *R/Project.R bee72e6c31bbe322ff04309f7b493ced *R/PropStat.R 463831ef3ec8c111ceffb5050682c4f8 *R/PropTest.R b68673f39d718ac6b4288e1c609608e8 *R/PropTestNI.R 0de78a5cdbfd6e4c46805c69c7ab30f6 *R/RatioEffectSizeContStat.R c5793fc8e0bf48abeab894895f83cd8f *R/RatioEffectSizeCoxEventStat.R 857c976d11f6cd665d3ef1b7ad60b0c2 *R/RatioEffectSizeEventStat.R 1cbdb9966a44c74e63109cd8f6a82abb *R/RatioEffectSizePropStat.R 8df14ed5870f497b5682d830e9ecfad2 *R/RestrictedClaimPower.R 2837f60f0946ca0e6c847579d80e77c2 *R/Sample.R 11e0088d8eb0a63368b799031dce0e7c *R/SampleSize.R ed367d1a0217b1034acfb81bcdaf0609 *R/SdStat.R e18bbed5abdd9e11eb7095811d710835 *R/Section.R 004b5dd0b7a2fac57df78f93914ec19d *R/SimParameters.R 7a36288c2710a913f4d453599cf31014 *R/Statistic.R 9e147c67b6eb519051af03efee9ae529 *R/StepDownDunnettAdj.CI.R 3c31768575216c7e80a12e469bc3e1ef *R/StepDownDunnettAdj.R 240ebf96f581b7351fb76ea5fb8a5c2b *R/Subsection.R 796062426d58d160cd411e328f788bb6 *R/TTest.R e4ece59c57de275df481a1c7d72dd5fc *R/TTestNI.R bf17735bb1ffefa6200b079c4a6b4790 *R/Table.R 87ef5c742c4dc2ec01e386fac52870ab *R/Test.R 9aa62999181643f7145b5240d3e9aef0 *R/TruncatedExpoDist.R aaa1c38ba29999e0b08d91f94e1d01e9 *R/UniformDist.R 376ce86a76a3f2c5e39d5434670f5128 *R/WeibullDist.R 31c0d14ff14b96c88fb729404c46616e *R/WeightedPower.R c46341b4446fd513e7bfe263eec9e07f *R/WilcoxTest.R fe004dd3b18ddbf298cca42ee0c00350 *R/appendList.R ae94d00fca3ed7f59416f0da7f20015b *R/argmin.R 5a7afdc7e4e9a1c1075a1b0eb84c4687 *R/capwords.R d733adce863e121ff231a14a458e18c0 *R/errorfrac.R 3df36dde49539cd3ba7d9c6f97427832 *R/families.R c21669015c17640c07a53b2105057480 *R/is.AnalysisModel.R 5707fdf18ed1693cb5b6c18cbf5f5c70 *R/is.DataModel.R db37371324187da47fbd8f969ec684c6 *R/is.EvaluationModel.R 56b448c375139e2cc8a980512a8f8058 *R/is.PresentationModel.R 34729305ac869c7fc859ddaa8b1ad0d1 *R/mergeOutcomeParameter.R b5aaa33d7c1469de90c856cda5ed6d04 *R/parameters.R 2fd442ddace46f5dd8be44b7e19b8c34 *R/qdunnett.R 9986a58b468e7babecc3a909d7a01ae3 *R/samples.R 1e0c0b3c1dcab91310ee66ab7b9da106 *R/seq_vector.R e76abb2053b47a51886f5a0a7ac11a8b *R/statistics.R ee71db957c21af0cdaaf9291658d9939 *R/summary.CSE.R 50cda39a832a3fc266e929a3b2e44485 *R/tests.R a836fc17ff2a99ab3eeeaecf79c61624 *R/z+.AnalysisModel.R 1ef5281f1efcda435e56ac07cd8b27ab *R/z+.DataModel.R 1ba77bfb4f2cce188e3c4d0db9168e13 *R/z+.EvaluationModel.R dd5658f83e344611403e74d70b959cc9 *R/z+.PresentationModel.R 535f492e760d8c64335859b0b6a2a5b6 *README.md cfcadf678113f1fa3f4183dc425316d7 *build/vignette.rds aff568824ffba83a9ae1fb5e8cd2faa3 *inst/doc/adjusted-pvalues.Rmd 68c0928c4540fefefc471c197371754c *inst/doc/adjusted-pvalues.html a63994712e65fd894c9d039bd48af40e *inst/doc/case-studies.R 241a44dcd52730ca39a4646fc73f3a6d *inst/doc/case-studies.Rmd 67e5770b4339dcae4be31e7b966a2bdb *inst/doc/case-studies.html 244c927274648926f75632e42c9027b2 *inst/doc/mediana.Rmd 23198022a9c56fd4818203a9aa29e823 *inst/doc/mediana.html a981cf119bf9592f14bcb79a71be7091 *inst/figures/hexMediana.png 5f58ab60669f642bae604008c8a951f1 *inst/figures/logo_MEDIANAINC.png b4be607697fe92d27fbfc37e53fc2243 
*inst/figures/makeSticker.R 91c2b9005389443052ddb63881984915 *inst/template/template.docx 8b61838e67cc661cb5285cfe5fbd7aaa *man/AdjustCIs.Rd ea62b78417ef0ab95713e85fe70132d6 *man/AdjustPvalues.Rd 444c8997b851b4b1ed34964ba6affd35 *man/AnalysisModel.Rd a5b36cd31fac0c29ddf95b00f100d1fe *man/AnalysisStack.Rd ac6ade00c6f0a240f134f8ab6d012d79 *man/CSE.Rd 020a40f61c7beda9629317ce62f2ed1a *man/Criterion.Rd fa577054d63b39bb298f6ef8ff36e63e *man/CustomLabel.Rd 0ff1ac365486d8c395fab027ce208cbc *man/DataModel.Rd a00a8fbe423d6db7c4ca03e007b99d23 *man/DataStack.Rd 33d6fcb6cae8c17e3bfa424eb1655c04 *man/Design.Rd ecdce6ed3e34d2e8e5815a3e88aea1d0 *man/EvaluationModel.Rd b1b26581c71baed1394e056f56f792d3 *man/Event.Rd cbdfc72ca3b80a1ed6c019438479b927 *man/ExtractAnalysisStack.Rd 801cca8a8798400fba78200b9d8b728a *man/ExtractDataStack.Rd ebd44b98f62d56d9f8f79abb97f103d2 *man/GenerateData.Rd 819455742fba0c195152b8abc59fe078 *man/GenerateReport.Rd 9d18204fc4ddc91b109158a75ed45d6f *man/Mediana-package.Rd 46b1d6c4ea7256c4c6225f0137924859 *man/MultAdj.Rd 7b303231200e4e5b31ed7c05ad5c3205 *man/MultAdjProc.Rd 4371f64aa01c4cc320873f9adce48361 *man/MultAjdStrategy.Rd 771bf2b9ef2264be6d3400e32e6146f5 *man/OutcomeDist.Rd 7adfca7b0bb9028d23b31feb83172cc1 *man/PresentationModel.Rd 64a0d4bbec5a90092122d0c7e744c8fc *man/Project.Rd c6a4a5cb929476dee358e853ab33df2b *man/Sample.Rd 00f84636759c9a89e64aeac0c255059c *man/SampleSize.Rd 00ffec04f992814efa15c63e18acbce4 *man/Section.Rd 53b21b28718b950dcc96cf405df6b863 *man/SimParameters.Rd a64f43ef782eeb8ccf9ad0c6a807f409 *man/Statistic.Rd ba64e37aad540dd372d78144d9503c6a *man/Subsection.Rd c8bf53ebda3cde4dabce465d4e571ac3 *man/Table.Rd a40338444b161db516b87acb9c8eb2b6 *man/Test.Rd 1b94fa1130891ef45236cc8125814693 *man/families.Rd aff568824ffba83a9ae1fb5e8cd2faa3 *vignettes/adjusted-pvalues.Rmd 241a44dcd52730ca39a4646fc73f3a6d *vignettes/case-studies.Rmd 942aad214f3d9cc7af3748721d0880e5 *vignettes/figures/CaseStudy04-fig1.png f9dc9844b5a26c4a58f29400b70493a6 *vignettes/figures/CaseStudy04-fig2.png 8fa72ecd9b0a4ab076e45678ba5ac2c4 *vignettes/figures/CaseStudy05-fig1.png 244c927274648926f75632e42c9027b2 *vignettes/mediana.Rmd 9909aa8fb28788e76085f8f8a6847205 *vignettes/style.css Mediana/build/0000755000176200001440000000000013464544414012720 5ustar liggesusersMediana/build/vignette.rds0000644000176200001440000000047713464544414015267 0ustar liggesusersR=O0uPh)$*d;`G;TlqĮHAb{{2$cdmI9y T-\04M?SDp#*fMediana/DESCRIPTION0000644000176200001440000000202013464553603013321 0ustar liggesusersPackage: Mediana Type: Package Title: Clinical Trial Simulations Version: 1.0.8 Date: 2019-05-08 Author: Gautier Paux, Alex Dmitrienko. Maintainer: Gautier Paux BugReports: https://github.com/gpaux/Mediana/issues Description: Provides a general framework for clinical trial simulations based on the Clinical Scenario Evaluation (CSE) approach. The package supports a broad class of data models (including clinical trials with continuous, binary, survival-type and count-type endpoints as well as multivariate outcomes that are based on combinations of different endpoints), analysis strategies and commonly used evaluation criteria. 
Imports: doParallel, doRNG, foreach, MASS, mvtnorm, stats, survival, utils License: GPL-2 URL: http://gpaux.github.io/Mediana/ RoxygenNote: 6.1.1 Encoding: UTF-8 Suggests: flextable, knitr, officer, rmarkdown, pander VignetteBuilder: knitr NeedsCompilation: no Packaged: 2019-05-08 12:18:21 UTC; gauti Repository: CRAN Date/Publication: 2019-05-08 13:20:03 UTC Mediana/man/0000755000176200001440000000000013464544415012375 5ustar liggesusersMediana/man/SimParameters.Rd0000644000176200001440000000251213434027611015431 0ustar liggesusers\name{SimParameters} \alias{SimParameters} %- Also NEED an '\alias' for EACH other topic documented here. \title{SimParameters object } \description{ This function creates an object of class \code{SimParameters} to be passed into the \code{CSE} function. } \usage{ SimParameters(n.sims, seed, proc.load = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{n.sims}{ defines the number of simulations. } \item{seed}{ defines the seed for the simulations. } \item{proc.load}{ defines the load of the processor (parallel computation). } } \details{ Objects of class \code{SimParameters} are used in the \code{CSE} function to define the simulation parameters. The \code{proc.load} argument is used to define the number of clusters dedicated to the simulations. A numeric value can be specified, as well as one of the following character values, which automatically detect the number of available cores: \itemize{ \item \code{low}: 1 processor core. \item \code{med}: Number of available processor cores / 2. \item \code{high}: Number of available processor cores - 1. \item \code{full}: All available processor cores. } } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{CSE}}. } \examples{ sim.parameters = SimParameters(n.sims = 1000, proc.load = "full", seed = 42938001) } Mediana/man/Criterion.Rd0000644000176200001440000000533013434027611014612 0ustar liggesusers\name{Criterion} \alias{Criterion} %- Also NEED an '\alias' for EACH other topic documented here. \title{Criterion object } \description{ This function creates an object of class \code{Criterion} which can be added to an object of class \code{EvaluationModel}. } \usage{ Criterion(id, method, tests = NULL, statistics = NULL, par = NULL, labels) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{id}{ defines the ID of the \code{Criterion} object. } \item{method}{ defines the method used by the \code{Criterion} object. } \item{tests}{ defines the test(s) used by the \code{Criterion} object. } \item{statistics}{ defines the statistic(s) used by the \code{Criterion} object. } \item{par}{ defines the parameter(s) of the \code{method} argument of the \code{Criterion} object. } \item{labels}{ defines the label(s) of the results. } } \details{ Objects of class \code{Criterion} are used in objects of class \code{EvaluationModel} to specify the criteria that will be applied to the Clinical Scenario. Several objects of class \code{Criterion} can be added to an object of class \code{EvaluationModel}. Mandatory arguments are \code{id}, \code{method}, \code{labels} and \code{tests} and/or \code{statistics}. The \code{method} argument defines the criterion's method. Several methods are already implemented in the Mediana package (listed below, along with the required parameters to define in the \code{par} parameter): \itemize{ \item \code{MarginalPower}: generates the marginal power of all tests defined in the \code{tests} argument. Required parameter: \code{alpha}. 
\item \code{WeightedPower}: generates the weighted power of all tests defined in the \code{tests} argument. Required parameters: \code{alpha} and \code{weight}. \item \code{DisjunctivePower}: generates the disjunctive power (probability of rejecting at least one hypothesis defined in the \code{tests} argument). Required parameter: \code{alpha}. \item \code{ConjunctivePower}: generates the conjunctive power (probability of rejecting all hypotheses defined in the \code{tests} argument). Required parameter: \code{alpha}. \item \code{ExpectedRejPower}: generates the expected number of rejected hypotheses. Required parameter: \code{alpha}. } } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{AnalysisModel}}. } \examples{ ## Add a Criterion to an EvaluationModel object evaluation.model = EvaluationModel() + Criterion(id = "Marginal power", method = "MarginalPower", tests = tests("Placebo vs treatment"), labels = c("Placebo vs treatment"), par = parameters(alpha = 0.025)) } Mediana/man/ExtractAnalysisStack.Rd0000644000176200001440000000677713434027611017000 0ustar liggesusers\name{ExtractAnalysisStack} \alias{ExtractAnalysisStack} %- Also NEED an '\alias' for EACH other topic documented here. \title{ExtractAnalysisStack function } \description{ This function extracts an analysis stack according to the data scenario and simulation run specified. } \usage{ ExtractAnalysisStack(analysis.stack, data.scenario = NULL, simulation.run = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{analysis.stack}{ defines an \code{AnalysisStack} object. } \item{data.scenario}{ defines the data scenario index to extract. By default all data scenarios will be extracted. } \item{simulation.run}{ defines the simulation run index. By default all simulation runs will be extracted. } } \value{ This function extracts a particular set of analysis results according to the data scenario and simulation run indices. The object returned by the function is a list having the same structure as the \code{analysis.set} component of an \code{AnalysisStack} object: \item{analysis.set }{a list whose size corresponds to the number of simulation runs specified by the user in the \code{simulation.run} argument. This list contains the results generated for each data scenario (\code{data.scenario} argument).} } \references{ http://gpaux.github.io/Mediana/ } \seealso{ See Also \code{\link{AnalysisStack}}. 
} \examples{ \dontrun{ # Generation of an AnalysisStack object ################################## # Outcome parameter set 1 outcome1.placebo = parameters(mean = 0, sd = 70) outcome1.treatment = parameters(mean = 40, sd = 70) # Outcome parameter set 2 outcome2.placebo = parameters(mean = 0, sd = 70) outcome2.treatment = parameters(mean = 50, sd = 70) # Data model case.study1.data.model = DataModel() + OutcomeDist(outcome.dist = "NormalDist") + SampleSize(c(50, 55, 60, 65, 70)) + Sample(id = "Placebo", outcome.par = parameters(outcome1.placebo, outcome2.placebo)) + Sample(id = "Treatment", outcome.par = parameters(outcome1.treatment, outcome2.treatment)) # Analysis model case.study1.analysis.model = AnalysisModel() + Test(id = "Placebo vs treatment", samples = samples("Placebo", "Treatment"), method = "TTest") + Statistic(id = "Mean Treatment", method = "MeanStat", samples = samples("Treatment")) # Simulation Parameters case.study1.sim.parameters = SimParameters(n.sims = 1000, proc.load = 2, seed = 42938001) # Generate data case.study1.analysis.stack = AnalysisStack(data.model = case.study1.data.model, analysis.model = case.study1.analysis.model, sim.parameters = case.study1.sim.parameters) # Print the analysis results generated in the 100th simulation run # for the 2nd data scenario for both samples case.study1.analysis.stack$analysis.set[[100]][[2]] # Extract the same set of data case.study1.extracted.analysis.stack = ExtractAnalysisStack(analysis.stack = case.study1.analysis.stack, data.scenario = 2, simulation.run = 100) # Careful attention should be paid to the index of the result. # As only one data.scenario has been requested, # the result for data.scenario = 2 is now in the first position ($analysis.set[[1]][[1]]). } } Mediana/man/MultAdj.Rd0000644000176200001440000000507313434027611014220 0ustar liggesusers\name{MultAdj} \alias{MultAdj} %- Also NEED an '\alias' for EACH other topic documented here. \title{MultAdj object } \description{ This function creates an object of class \code{MultAdj} which can be added to an object of class \code{AnalysisModel}. } \usage{ MultAdj(...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{\dots}{ defines the arguments passed to create the object of class \code{MultAdj}. } } \details{ This function can be used to wrap up several objects of class \code{MultAdjProc} or \code{MultAdjStrategy} and add them to an object of class \code{AnalysisModel}. Its use is optional, as objects of class \code{MultAdjProc} or \code{MultAdjStrategy} can be added to an object of class \code{AnalysisModel} incrementally using the '+' operator. } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{MultAdjStrategy}}, \code{\link{MultAdjProc}} and \code{\link{AnalysisModel}}. 
} \examples{ # Multiplicity adjustments mult.adj1 = MultAdjProc(proc = NA) mult.adj2 = MultAdjProc(proc = "BonferroniAdj") mult.adj3 = MultAdjProc(proc = "HolmAdj", par = parameters(weight = rep(1/3,3))) mult.adj4 = MultAdjProc(proc = "HochbergAdj", par = parameters(weight = c(1/4,1/4,1/2))) # Analysis model analysis.model = AnalysisModel() + MultAdj(mult.adj1, mult.adj2, mult.adj3, mult.adj4) + Test(id = "Pl vs Dose L", samples = samples("Placebo", "Dose L"), method = "TTest") + Test(id = "Pl vs Dose M", samples = samples("Placebo", "Dose M"), method = "TTest") + Test(id = "Pl vs Dose H", samples = samples("Placebo", "Dose H"), method = "TTest") # Equivalent to: analysis.model = AnalysisModel() + mult.adj1 + mult.adj2 + mult.adj3 + mult.adj4 + Test(id = "Pl vs Dose L", samples = samples("Placebo", "Dose L"), method = "TTest") + Test(id = "Pl vs Dose M", samples = samples("Placebo", "Dose M"), method = "TTest") + Test(id = "Pl vs Dose H", samples = samples("Placebo", "Dose H"), method = "TTest") } Mediana/man/Design.Rd0000644000176200001440000001042013434027611014063 0ustar liggesusers\name{Design} \alias{Design} %- Also NEED an '\alias' for EACH other topic documented here. \title{Design object } \description{ This function creates an object of class \code{Design} which can be added to an object of class \code{DataModel}. } \usage{ Design(enroll.period = NULL, enroll.dist = NULL, enroll.dist.par = NULL, followup.period = NULL, study.duration = NULL, dropout.dist = NULL, dropout.dist.par = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{enroll.period}{ defines the length of the enrollment period. } \item{enroll.dist}{ defines the enrollment distribution. } \item{enroll.dist.par}{ defines the parameters of the enrollment distribution (optional). } \item{followup.period}{ defines the length of the follow-up period for each patient in study designs with a fixed follow-up period, i.e., the length of time from the enrollment to planned discontinuation is constant across patients. The user must specify either \code{followup.period} or \code{study.duration}. } \item{study.duration}{ defines the total study duration in study designs with a variable follow-up period. The total study duration is defined as the length of time from the enrollment of the first patient to the discontinuation of the last patient. } \item{dropout.dist}{defines the dropout distribution. } \item{dropout.dist.par}{defines the parameters of the dropout distribution. } } \details{ Objects of class \code{Design} are used in objects of class \code{DataModel} to specify the design parameters used in event-driven designs if the user is interested in modeling the enrollment (or accrual) and dropout (or loss to follow up) processes that will be applied to the Clinical Scenario. Several objects of class \code{Design} can be added to an object of class \code{DataModel}. Note that the length of the enrollment period, total study duration and follow-up periods are measured using the same time units. If \code{enroll.dist = "UniformDist"}, the \code{enroll.dist.par} argument should be left as \code{NULL} (the enrollment distribution will then be uniform over the enrollment period). If \code{enroll.dist = "BetaDist"}, the \code{enroll.dist.par} argument should contain the parameters of the beta distribution (\code{a} and \code{b}). These parameters must be derived according to the expected enrollment at a specific timepoint. 
For example, if half the patients are expected to be enrolled at 75\% of the enrollment period, the beta distribution is a \code{Beta(log(0.5)/log(0.75), 1)}. Generally, if \code{q} denotes the proportion of patients enrolled at \code{p}\% of the enrollment period, the Beta distribution can be derived as follows (note that the case assignment below is stated so that it agrees with the worked example above): \itemize{ \item If \code{q} < \code{p}, the Beta distribution is \code{Beta(a,1)} with \code{a = log(q) / log(p)} \item If \code{q} > \code{p}, the Beta distribution is \code{Beta(1,b)} with \code{b = log(1-q) / log(1-p)} \item Otherwise the Beta distribution is \code{Beta(1,1)} } If \code{dropout.dist = "UniformDist"}, the \code{dropout.dist.par} should contain the dropout rate. This parameter must be specified using the \code{prop} parameter, such as \code{dropout.dist.par = parameters(prop = 0.1)} for a 10\% dropout rate. } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{DataModel}}. } \examples{ ## Create DataModel object with a Design Object data.model = DataModel() + Design(enroll.period = 9, study.duration = 21, enroll.dist = "UniformDist", dropout.dist = "ExpoDist", dropout.dist.par = parameters(rate = 0.0115)) ## Create DataModel object with several Design Objects design1 = Design(enroll.period = 9, study.duration = 21, enroll.dist = "UniformDist", dropout.dist = "ExpoDist", dropout.dist.par = parameters(rate = 0.0115)) design2 = Design(enroll.period = 18, study.duration = 24, enroll.dist = "UniformDist", dropout.dist = "ExpoDist", dropout.dist.par = parameters(rate = 0.0115)) data.model = DataModel() + design1 + design2 } Mediana/man/GenerateReport.Rd0000644000176200001440000000632113434027611015603 0ustar liggesusers\name{GenerateReport} \alias{GenerateReport} %- Also NEED an '\alias' for EACH other topic documented here. \title{Clinical Scenario Evaluation Report } \description{ This function generates a Word-based report to present a detailed description of the simulation parameters (data, analysis and evaluation models) and results. } \usage{ GenerateReport(presentation.model, cse.results, report.filename, report.template = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{presentation.model}{ defines a \code{PresentationModel} object. } \item{cse.results}{ defines a \code{CSE} object returned by the \code{CSE} function. } \item{report.filename}{ defines the output filename of the word-based document generated. } \item{report.template}{ defines a word-based template (optional). } } \details{ This function requires the package \code{officer}. A customized template can be specified in the argument \code{report.template} (optional), which consists of a Word document placed in the working directory. } \references{ http://gpaux.github.io/Mediana/ } \seealso{ See Also \code{\link{CSE}} and \code{\link{PresentationModel}}. 
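The Beta enrollment parameters described in the Design documentation above can also be derived programmatically. Below is a minimal R sketch under those derivation rules; the helper name derive.beta.par is illustrative and not part of the Mediana package:

library(Mediana)

# Derive the Beta enrollment-distribution parameters from the expected
# proportion of patients q enrolled at a fraction p of the enrollment period
# (illustrative helper, not part of the Mediana package).
derive.beta.par = function(p, q) {
  if (q < p) {
    # Slower-than-uniform early enrollment: Beta(a, 1) with a = log(q)/log(p)
    parameters(a = log(q) / log(p), b = 1)
  } else if (q > p) {
    # Faster-than-uniform early enrollment: Beta(1, b) with b = log(1-q)/log(1-p)
    parameters(a = 1, b = log(1 - q) / log(1 - p))
  } else {
    # Enrollment is uniform over the enrollment period
    parameters(a = 1, b = 1)
  }
}

# Half of the patients enrolled at 75% of the enrollment period:
derive.beta.par(p = 0.75, q = 0.5)
# returns a = log(0.5)/log(0.75) (approximately 2.41) and b = 1,
# i.e., the Beta(log(0.5)/log(0.75), 1) distribution from the example above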
} \examples{ \dontrun{ # Outcome parameter set 1 outcome1.placebo = parameters(mean = 0, sd = 70) outcome1.treatment = parameters(mean = 40, sd = 70) # Outcome parameter set 2 outcome2.placebo = parameters(mean = 0, sd = 70) outcome2.treatment = parameters(mean = 50, sd = 70) # Data model case.study1.data.model = DataModel() + OutcomeDist(outcome.dist = "NormalDist") + SampleSize(c(50, 55, 60, 65, 70)) + Sample(id = "Placebo", outcome.par = parameters(outcome1.placebo, outcome2.placebo)) + Sample(id = "Treatment", outcome.par = parameters(outcome1.treatment, outcome2.treatment)) # Analysis model case.study1.analysis.model = AnalysisModel() + Test(id = "Placebo vs treatment", samples = samples("Placebo", "Treatment"), method = "TTest") # Evaluation model case.study1.evaluation.model = EvaluationModel() + Criterion(id = "Marginal power", method = "MarginalPower", tests = tests("Placebo vs treatment"), labels = c("Placebo vs treatment"), par = parameters(alpha = 0.025)) # Simulation Parameters case.study1.sim.parameters = SimParameters(n.sims = 1000, proc.load = 2, seed = 42938001) # Perform clinical scenario evaluation case.study1.results = CSE(case.study1.data.model, case.study1.analysis.model, case.study1.evaluation.model, case.study1.sim.parameters) # Reporting case.study1.presentation.model = PresentationModel() + Section(by = "outcome.parameter") + Table(by = "sample.size") + CustomLabel(param = "sample.size", label= paste0("N = ",c(50, 55, 60, 65, 70))) + CustomLabel(param = "outcome.parameter", label=c("Standard 1", "Standard 2")) # Report Generation GenerateReport(presentation.model = case.study1.presentation.model, cse.results = case.study1.results, report.filename = "Case study 1 (normally distributed endpoint).docx") } } Mediana/man/families.Rd0000644000176200001440000000107213434027611014444 0ustar liggesusers\name{families} \alias{tests} \alias{samples} \alias{statistics} \alias{parameters} \alias{families} \title{Create list of character strings } \description{ This function is used mostly for user's convenience. It simply creates a list of character strings. } \usage{ tests(...) samples(...) statistics(...) parameters(...) families(...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{\dots}{ defines character strings to be passed into the function. } } \references{ \url{http://gpaux.github.io/Mediana/} } Mediana/man/MultAjdStrategy.Rd0000644000176200001440000001115113434027611015735 0ustar liggesusers\name{MultAdjStrategy} \alias{MultAdjStrategy} \title{MultAdjStrategy object } \description{ This function creates an object of class \code{MultAdjStrategy} which can be added to objects of class \code{AnalysisModel}, \code{MultAdj} or \code{MultAdjStrategy}. } \usage{ MultAdjStrategy(...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{\dots}{ defines an object of class \code{MultAdjProc}. } } \details{ This function can be used when several multiplicity adjustment procedures are used within a single Clinical Scenario Evaluation, for example when several case studies are simulated into the same Clinical Scenario Evaluation. Objects of class \code{MultAdjStrategy} are used in objects of class \code{AnalysisModel} to define a Multiplicity Adjustment Procedure Strategy that will be applied to the statistical tests to protect the overall Type I error rate. 
Several objects of class \code{MultAdjStrategy} can be added to an object of class \code{AnalysisModel}, using the '+' operator or by grouping them into a \code{MultAdj} object. } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{MultAdj}}, \code{\link{MultAdjProc}} and \code{\link{AnalysisModel}}. } \examples{ # Parallel gatekeeping procedure parameters family = families(family1 = c(1), family2 = c(2, 3)) component.procedure = families(family1 ="HolmAdj", family2 = "HolmAdj") gamma = families(family1 = 1, family2 = 1) # Multiple sequence gatekeeping procedure parameters for Trial A mult.adj.trialA = MultAdjProc(proc = "ParallelGatekeepingAdj", par = parameters(family = family, proc = component.procedure, gamma = gamma), tests = tests("Trial A Pla vs Trt End1", "Trial A Pla vs Trt End2", "Trial A Pla vs Trt End3") ) mult.adj.trialB = MultAdjProc(proc = "ParallelGatekeepingAdj", par = parameters(family = family, proc = component.procedure, gamma = gamma), tests = tests("Trial B Pla vs Trt End1", "Trial B Pla vs Trt End2", "Trial B Pla vs Trt End3") ) mult.adj.pooled = MultAdjProc(proc = "ParallelGatekeepingAdj", par = parameters(family = family, proc = component.procedure, gamma = gamma), tests = tests("Pooled Pla vs Trt End1", "Pooled Pla vs Trt End2", "Pooled Pla vs Trt End3") ) # Analysis model analysis.model = AnalysisModel() + MultAdjStrategy(mult.adj.trialA, mult.adj.trialB, mult.adj.pooled) + # Tests for study A Test(id = "Trial A Pla vs Trt End1", method = "PropTest", samples = samples("Trial A Plac End1", "Trial A Trt End1")) + Test(id = "Trial A Pla vs Trt End2", method = "TTest", samples = samples("Trial A Plac End2", "Trial A Trt End2")) + Test(id = "Trial A Pla vs Trt End3", method = "TTest", samples = samples("Trial A Plac End3", "Trial A Trt End3")) + # Tests for study B Test(id = "Trial B Pla vs Trt End1", method = "PropTest", samples = samples("Trial B Plac End1", "Trial B Trt End1")) + Test(id = "Trial B Pla vs Trt End2", method = "TTest", samples = samples("Trial B Plac End2", "Trial B Trt End2")) + Test(id = "Trial B Pla vs Trt End3", method = "TTest", samples = samples("Trial B Plac End3", "Trial B Trt End3")) + # Tests for pooled studies Test(id = "Pooled Pla vs Trt End1", method = "PropTest", samples = samples(samples("Trial A Plac End1","Trial B Plac End1"), samples("Trial A Trt End1","Trial B Trt End1"))) + Test(id = "Pooled Pla vs Trt End2", method = "TTest", samples = samples(samples("Trial A Plac End2","Trial B Plac End2"), samples("Trial A Trt End2","Trial B Trt End2"))) + Test(id = "Pooled Pla vs Trt End3", method = "TTest", samples = samples(samples("Trial A Plac End3","Trial B Plac End3"), samples("Trial A Trt End3","Trial B Trt End3"))) } Mediana/man/CSE.Rd0000644000176200001440000001123413434027611013266 0ustar liggesusers\name{CSE} \alias{CSE} \title{ Clinical Scenario Evaluation } \description{ This function is used to perform the Clinical Scenario Evaluation according to the objects of class \code{DataModel}, \code{AnalysisModel} and \code{EvaluationModel} specified respectively in the arguments \code{data}, \code{analysis} and \code{evaluation} of the function. } \usage{ CSE(data, analysis, evaluation, simulation) } %- maybe also 'usage' for other objects documented here. 
\arguments{ \item{data}{ defines a \code{DataModel} or a \code{DataStack} object } \item{analysis}{ defines an \code{AnalysisModel} object } \item{evaluation}{ defines an \code{EvaluationModel} object } \item{simulation}{ defines a \code{SimParameters} object } } \value{ The \code{CSE} function returns a list containing: \item{simulation.results }{a data frame containing the results of the simulations for each scenario.} \item{analysis.scenario.grid }{a data frame containing the grid of the combination of data and analysis scenarios.} \item{data.structure }{a list containing the data structure according to the \code{DataModel} object.} \item{analysis.structure }{a list containing the analysis structure according to the \code{AnalysisModel} object.} \item{evaluation.structure }{a list containing the evaluation structure according to the \code{EvaluationModel} object.} \item{sim.parameters }{a list containing the simulation parameters according to \code{SimParameters} object.} \item{timestamp }{a list containing information about the start time, end time and duration of the simulation runs.} } \references{ Benda, N., Branson, M., Maurer, W., Friede, T. (2010). Aspects of modernizing drug development using clinical scenario planning and evaluation. Drug Information Journal. 44, 299-315. \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{DataModel}}, \code{\link{DataStack}}, \code{\link{AnalysisModel}}, \code{\link{EvaluationModel}}, \code{\link{SimParameters}}. } \examples{ \dontrun{ # Outcome parameter set 1 outcome1.placebo = parameters(mean = 0, sd = 70) outcome1.treatment = parameters(mean = 40, sd = 70) # Outcome parameter set 2 outcome2.placebo = parameters(mean = 0, sd = 70) outcome2.treatment = parameters(mean = 50, sd = 70) # Data model case.study1.data.model = DataModel() + OutcomeDist(outcome.dist = "NormalDist") + SampleSize(c(50, 55, 60, 65, 70)) + Sample(id = "Placebo", outcome.par = parameters(outcome1.placebo, outcome2.placebo)) + Sample(id = "Treatment", outcome.par = parameters(outcome1.treatment, outcome2.treatment)) # Analysis model case.study1.analysis.model = AnalysisModel() + Test(id = "Placebo vs treatment", samples = samples("Placebo", "Treatment"), method = "TTest") # Evaluation model case.study1.evaluation.model = EvaluationModel() + Criterion(id = "Marginal power", method = "MarginalPower", tests = tests("Placebo vs treatment"), labels = c("Placebo vs treatment"), par = parameters(alpha = 0.025)) # Simulation Parameters case.study1.sim.parameters = SimParameters(n.sims = 1000, proc.load = 2, seed = 42938001) # Perform clinical scenario evaluation case.study1.results = CSE(case.study1.data.model, case.study1.analysis.model, case.study1.evaluation.model, case.study1.sim.parameters) # Summary of the simulation results summary(case.study1.results) # Get the data generated for the simulation case.study1.data.stack = DataStack(data.model = case.study1.data.model, sim.parameters = case.study1.sim.parameters) } \dontrun{ #Alternatively, a DataStack object can be used in the CSE function # (not recommanded as the computational time is increased) # Generate data case.study1.data.stack = DataStack(data.model = case.study1.data.model, sim.parameters = case.study1.sim.parameters) # Perform clinical scenario evaluation with data stack case.study1.results = CSE(case.study1.data.stack, case.study1.analysis.model, case.study1.evaluation.model, case.study1.sim.parameters) } } Mediana/man/AnalysisStack.Rd0000644000176200001440000001136513434027611015432 0ustar 
liggesusers\name{AnalysisStack} \alias{AnalysisStack} %- Also NEED an '\alias' for EACH other topic documented here. \title{AnalysisStack object } \description{ This function generates analysis results according to the specified data and analysis models. } \usage{ AnalysisStack(data.model, analysis.model, sim.parameters) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{data.model}{ defines a \code{DataModel} object. } \item{analysis.model}{ defines an \code{AnalysisModel} object. } \item{sim.parameters}{ defines a \code{SimParameters} object. } } \value{ This function generates an analysis stack according to the data and analysis models and the simulation parameters objects. The object returned by the function is an \code{AnalysisStack} object containing: \item{description }{a description of the object.} \item{analysis.set }{a list of size \code{n.sims} defined in the \code{SimParameters} object. This list contains the analysis results generated for each data scenario (first level), and for each test and statistic defined in the \code{AnalysisModel} object. The results generated for the \code{i}th simulation run and the \code{j}th data scenario are stored in \code{analysis.stack$analysis.set[[i]][[j]]$result} (where \code{analysis.stack} is an \code{AnalysisStack} object). This list is composed of three lists: \itemize{ \item \code{tests} returns the unadjusted p-values of the tests defined in the \code{AnalysisModel} object. \item \code{statistic} returns the statistics defined in the \code{AnalysisModel} object. \item \code{test.adjust} returns a list of adjusted p-values according to the multiple testing procedures defined in the \code{AnalysisModel} object. The length of this list corresponds to the number of \code{MultAdjProc} objects defined in the \code{AnalysisModel} object. Note that if no \code{MultAdjProc} objects have been defined, this list contains the unadjusted p-values. } } \item{analysis.scenario.grid}{a data frame indicating all data and analysis scenarios according to the \code{DataModel} and \code{AnalysisModel} objects.} \item{analysis.structure}{a list containing the analysis structure according to the \code{AnalysisModel} object.} \item{sim.parameters }{a list containing the simulation parameters according to the \code{SimParameters} object.} A specific \code{analysis.set} of an \code{AnalysisStack} object can be extracted using the \code{ExtractAnalysisStack} function. } \references{ http://gpaux.github.io/Mediana/ } \seealso{ See Also \code{\link{DataModel}}, \code{\link{AnalysisModel}}, \code{\link{SimParameters}} and \code{\link{ExtractAnalysisStack}}. 
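As an illustration of traversing this structure, the empirical power of a given test can be computed directly from an AnalysisStack; a minimal R sketch, assuming an analysis stack built as in the example below, and assuming that the tests component of each result holds the unadjusted p-value of each Test object (the exact indexing is an assumption based on the structure described above):

# Proportion of simulation runs in which the first test is significant
# under the 2nd data scenario.
j = 2          # data scenario index
alpha = 0.025  # one-sided significance level
pvalues = sapply(case.study1.analysis.stack$analysis.set,
                 function(run) unlist(run[[j]]$result$tests)[1])
mean(pvalues <= alpha)  # empirical marginal power of the first test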
} \examples{ \dontrun{ # Generation of an AnalysisStack object ################################## # Outcome parameter set 1 outcome1.placebo = parameters(mean = 0, sd = 70) outcome1.treatment = parameters(mean = 40, sd = 70) # Outcome parameter set 2 outcome2.placebo = parameters(mean = 0, sd = 70) outcome2.treatment = parameters(mean = 50, sd = 70) # Data model case.study1.data.model = DataModel() + OutcomeDist(outcome.dist = "NormalDist") + SampleSize(c(50, 55, 60, 65, 70)) + Sample(id = "Placebo", outcome.par = parameters(outcome1.placebo, outcome2.placebo)) + Sample(id = "Treatment", outcome.par = parameters(outcome1.treatment, outcome2.treatment)) # Analysis model case.study1.analysis.model = AnalysisModel() + Test(id = "Placebo vs treatment", samples = samples("Placebo", "Treatment"), method = "TTest") + Statistic(id = "Mean Treatment", method = "MeanStat", samples = samples("Treatment")) # Simulation Parameters case.study1.sim.parameters = SimParameters(n.sims = 1000, proc.load = 2, seed = 42938001) # Generate results case.study1.analysis.stack = AnalysisStack(data.model = case.study1.data.model, analysis.model = case.study1.analysis.model, sim.parameters = case.study1.sim.parameters) # Print the analysis results generated in the 100th simulation run # for the 2nd data scenario for both samples case.study1.analysis.stack$analysis.set[[100]][[2]] # Extract the same set of data case.study1.extracted.analysis.stack = ExtractAnalysisStack(analysis.stack = case.study1.analysis.stack, data.scenario = 2, simulation.run = 100) # Careful attention should be paid to the index of the result. # As only one data.scenario has been requested, # the result for data.scenario = 2 is now in the first position ($analysis.set[[1]][[1]]). } } Mediana/man/Subsection.Rd0000644000176200001440000000357613434027611015002 0ustar liggesusers\name{Subsection} \alias{Subsection} %- Also NEED an '\alias' for EACH other topic documented here. \title{Subsection object } \description{ This function creates an object of class \code{Subsection} which can be added to an object of class \code{PresentationModel}. } \usage{ Subsection(by) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{by}{ defines the parameter to create the subsection in the report. } } \details{ Objects of class \code{Subsection} are used in objects of class \code{PresentationModel} to define how the results will be presented in the report. If a \code{Subsection} object is added to a \code{PresentationModel} object, the report will have subsections according to the parameter defined in the \code{by} argument. A single object of class \code{Subsection} can be added to an object of class \code{PresentationModel}. One or several parameters can be defined in the \code{by} argument: \itemize{ \item \code{"sample.size"} \item \code{"event"} \item \code{"outcome.parameter"} \item \code{"design.parameter"} \item \code{"multiplicity.adjustment"} } An object of class \code{Subsection} must be added to an object of class \code{PresentationModel} only if a \code{Section} object has been defined. } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{PresentationModel}}. 
} \examples{ # Reporting presentation.model = PresentationModel() + Section(by = "outcome.parameter") + Subsection(by = "sample.size") + CustomLabel(param = "sample.size", label= paste0("N = ",c(50, 55, 60, 65, 70))) + CustomLabel(param = "outcome.parameter", label=c("Standard 1", "Standard 2")) # In this report, one section will be created for each outcome parameter assumption # and within each section, a subsection will be created for each sample size. } Mediana/man/DataStack.Rd0000644000176200001440000001322013434027611014512 0ustar liggesusers\name{DataStack} \alias{DataStack} %- Also NEED an '\alias' for EACH other topic documented here. \title{DataStack object } \description{ This function generates data according to the specified data model. } \usage{ DataStack(data.model, sim.parameters) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{data.model}{ defines a \code{DataModel} object. } \item{sim.parameters}{ defines a \code{SimParameters} object. } } \value{ This function generates a data stack according to the data model and the simulation parameters objects. The object returned by the function is a DataStack object containing: \item{description }{a description of the object.} \item{data.set }{a list of size \code{n.sims} defined in the \code{sim.parameters} object. This list contains the data generated for each data scenario (\code{data.scenario} level) and each sample (\code{sample} level). The data generated for the \code{i}th simulation run, the \code{j}th data scenario and the \code{k}th sample are stored in \code{data.stack$data.set[[i]]$data.scenario[[j]]$sample[[k]]} where \code{data.stack} is a \code{DataStack} object.} \item{data.scenario.grid }{a data frame indicating all data scenarios according to the \code{DataModel} object.} \item{data.structure }{a list containing the data structure according to the \code{DataModel} object.} \item{sim.parameters }{a list containing the simulation parameters according to the \code{SimParameters} object.} A specific \code{data.set} of a \code{DataStack} object can be extracted using the \code{ExtractDataStack} function. } \references{ http://gpaux.github.io/Mediana/ } \seealso{ See Also \code{\link{DataModel}}, \code{\link{SimParameters}} and \code{\link{ExtractDataStack}}. 
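As an illustration of traversing this structure, summary statistics can be computed across simulation runs; a minimal R sketch, assuming a data stack built as in the example below and assuming that each generated sample stores its outcome values in an outcome component (the exact field name is an assumption based on the structure described above):

# Average treatment-arm mean across all simulation runs
# for the 2nd data scenario and the 2nd sample (Treatment).
j = 2  # data scenario index
k = 2  # sample index
run.means = sapply(case.study1.data.stack$data.set, function(run)
  mean(run$data.scenario[[j]]$sample[[k]]$outcome))
mean(run.means)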
} \examples{ \dontrun{ # Generation of a DataStack object ################################## # Outcome parameter set 1 outcome1.placebo = parameters(mean = 0, sd = 70) outcome1.treatment = parameters(mean = 40, sd = 70) # Outcome parameter set 2 outcome2.placebo = parameters(mean = 0, sd = 70) outcome2.treatment = parameters(mean = 50, sd = 70) # Data model case.study1.data.model = DataModel() + OutcomeDist(outcome.dist = "NormalDist") + SampleSize(c(50, 55, 60, 65, 70)) + Sample(id = "Placebo", outcome.par = parameters(outcome1.placebo, outcome2.placebo)) + Sample(id = "Treatment", outcome.par = parameters(outcome1.treatment, outcome2.treatment)) # Simulation Parameters case.study1.sim.parameters = SimParameters(n.sims = 1000, proc.load = 2, seed = 42938001) # Generate data case.study1.data.stack = DataStack(data.model = case.study1.data.model, sim.parameters = case.study1.sim.parameters) # Print the data set generated in the 100th simulation run # for the 2nd data scenario for both samples case.study1.data.stack$data.set[[100]]$data.scenario[[2]] # Extract the same set of data case.study1.extracted.data.stack = ExtractDataStack(data.stack = case.study1.data.stack, data.scenario = 2, simulation.run = 100) # The same dataset can be obtained using case.study1.extracted.data.stack$data.set[[1]]$data.scenario[[1]]$sample # Careful attention should be paid to the index of the result. # As only one data.scenario has been requested, # the result for data.scenario = 2 is now in the first position (data.scenario[[1]]). } \dontrun{ # Use of a DataStack object in the CSE function ############################################## # Outcome parameter set 1 outcome1.placebo = parameters(mean = 0, sd = 70) outcome1.treatment = parameters(mean = 40, sd = 70) # Outcome parameter set 2 outcome2.placebo = parameters(mean = 0, sd = 70) outcome2.treatment = parameters(mean = 50, sd = 70) # Data model case.study1.data.model = DataModel() + OutcomeDist(outcome.dist = "NormalDist") + SampleSize(c(50, 55, 60, 65, 70)) + Sample(id = "Placebo", outcome.par = parameters(outcome1.placebo, outcome2.placebo)) + Sample(id = "Treatment", outcome.par = parameters(outcome1.treatment, outcome2.treatment)) # Simulation Parameters case.study1.sim.parameters = SimParameters(n.sims = 1000, proc.load = 2, seed = 42938001) # Generate data case.study1.data.stack = DataStack(data.model = case.study1.data.model, sim.parameters = case.study1.sim.parameters) # Analysis model case.study1.analysis.model = AnalysisModel() + Test(id = "Placebo vs treatment", samples = samples("Placebo", "Treatment"), method = "TTest") # Evaluation model case.study1.evaluation.model = EvaluationModel() + Criterion(id = "Marginal power", method = "MarginalPower", tests = tests("Placebo vs treatment"), labels = c("Placebo vs treatment"), par = parameters(alpha = 0.025)) # Simulation Parameters case.study1.sim.parameters = SimParameters(n.sims = 1000, proc.load = 2, seed = 42938001) # Perform clinical scenario evaluation case.study1.results = CSE(case.study1.data.stack, case.study1.analysis.model, case.study1.evaluation.model, case.study1.sim.parameters) } } Mediana/man/ExtractDataStack.Rd0000644000176200001440000000655713434027611016060 0ustar liggesusers\name{ExtractDataStack} \alias{ExtractDataStack} %- Also NEED an '\alias' for EACH other topic documented here. \title{ExtractDataStack function } \description{ This function extracts a data stack according to the data scenario, sample id and simulation run specified. 
} \usage{ ExtractDataStack(data.stack, data.scenario = NULL, sample.id = NULL, simulation.run = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{data.stack}{ defines a \code{DataStack} object. } \item{data.scenario}{ defines the data scenario index to extract. By default all data scenarios will be extracted. } \item{sample.id}{ defines the sample id to extract. By default all sample ids will be extracted. } \item{simulation.run}{ defines the simulation run index. By default all simulation runs will be extracted. } } \value{ This function extracts a particular set of the data stack according to the data scenario, sample id and simulation run indices. The object returned by the function is a list having the same structure as the \code{data.set} component of a \code{DataStack} object: \item{data.set }{a list whose size corresponds to the number of simulation runs specified by the user in the \code{simulation.run} argument. This list contains the data generated for each data scenario (\code{data.scenario} argument) and each sample specified by the user (\code{sample.id} argument).} } \references{ http://gpaux.github.io/Mediana/ } \seealso{ See Also \code{\link{DataStack}}. } \examples{ \dontrun{ # Generation of a DataStack object ################################## # Outcome parameter set 1 outcome1.placebo = parameters(mean = 0, sd = 70) outcome1.treatment = parameters(mean = 40, sd = 70) # Outcome parameter set 2 outcome2.placebo = parameters(mean = 0, sd = 70) outcome2.treatment = parameters(mean = 50, sd = 70) # Data model case.study1.data.model = DataModel() + OutcomeDist(outcome.dist = "NormalDist") + SampleSize(c(50, 55, 60, 65, 70)) + Sample(id = "Placebo", outcome.par = parameters(outcome1.placebo, outcome2.placebo)) + Sample(id = "Treatment", outcome.par = parameters(outcome1.treatment, outcome2.treatment)) # Simulation Parameters case.study1.sim.parameters = SimParameters(n.sims = 1000, proc.load = 2, seed = 42938001) # Generate data case.study1.data.stack = DataStack(data.model = case.study1.data.model, sim.parameters = case.study1.sim.parameters) # Print the data set generated in the 100th simulation run # for the 2nd data scenario for both samples case.study1.data.stack$data.set[[100]]$data.scenario[[2]]$sample # Extract the same set of data case.study1.extracted.data.stack = ExtractDataStack(data.stack = case.study1.data.stack, data.scenario = 2, simulation.run = 100) # Careful attention should be paid to the index of the result. # As only one data.scenario has been requested, # the result for data.scenario = 2 is now in the first position (data.scenario[[1]]). } } Mediana/man/AnalysisModel.Rd0000644000176200001440000000263013434027611015420 0ustar liggesusers\name{AnalysisModel} \alias{AnalysisModel} \title{AnalysisModel object } \description{ \code{AnalysisModel()} initializes an object of class \code{AnalysisModel}. } \usage{ AnalysisModel(...) } \arguments{ \item{\dots}{ defines the arguments passed to create the object of class \code{AnalysisModel}. } } \details{ Analysis models define statistical methods that are applied to the study data in a clinical trial. \code{AnalysisModel()} is used to create an object of class \code{AnalysisModel} incrementally, using the '+' operator to add objects to the existing \code{AnalysisModel} object. The advantage is to explicitly define which objects are added to the \code{AnalysisModel} object. Initialization with \code{AnalysisModel()} is highly recommended. 
Objects of class \code{Test}, \code{MultAdjProc}, \code{MultAdjStrategy}, \code{MultAdj} and \code{Statistic} can be added to an object of class \code{AnalysisModel}. } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{Test}}, \code{\link{MultAdjProc}}, \code{\link{MultAdjStrategy}}, \code{\link{MultAdj}} and \code{\link{Statistic}}. } \examples{ ## Initialize an AnalysisModel and add objects to it analysis.model = AnalysisModel() + Test(id = "Placebo vs treatment", samples = samples("Placebo", "Treatment"), method = "TTest") } Mediana/man/Event.Rd0000644000176200001440000000401313434027611013732 0ustar liggesusers\name{Event} \alias{Event} %- Also NEED an '\alias' for EACH other topic documented here. \title{Event object } \description{ This function creates an object of class \code{Event} which can be added to an object of class \code{DataModel}. } \usage{ Event(n.events, rando.ratio = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{n.events}{ defines a vector of number of events required. } \item{rando.ratio}{ defines a vector of randomization ratios for each \code{Sample} object defined in the \code{DataModel}. } } \details{ This function can be used if the number of events needs to be fixed in an event-driven clinical trial. Either objects of class \code{Event} or \code{SampleSize} can be added to an object of class \code{DataModel} but not both. } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{DataModel}}. } \examples{ # In this case study, the randomization ratio is 2:1 (Treatment:Placebo). # Sample size parameters event.count.total = c(390, 420) randomization.ratio = c(1,2) # Outcome parameters median.time.placebo = 6 rate.placebo = log(2)/median.time.placebo outcome.placebo = list(rate = rate.placebo) median.time.treatment = 9 rate.treatment = log(2)/median.time.treatment outcome.treatment = list(rate = rate.treatment) # Dropout parameters dropout.par = parameters(rate = 0.0115) # Data model data.model = DataModel() + OutcomeDist(outcome.dist = "ExpoDist") + Event(n.events = event.count.total, rando.ratio = randomization.ratio) + Design(enroll.period = 9, study.duration = 21, enroll.dist = "UniformDist", dropout.dist = "ExpoDist", dropout.dist.par = dropout.par) + Sample(id = "Placebo", outcome.par = parameters(outcome.placebo)) + Sample(id = "Treatment", outcome.par = parameters(outcome.treatment)) } Mediana/man/EvaluationModel.Rd0000644000176200001440000000316113434027611015746 0ustar liggesusers\name{EvaluationModel} \alias{EvaluationModel} %- Also NEED an '\alias' for EACH other topic documented here. \title{ EvaluationModel object } \description{ \code{EvaluationModel()} initializes an object of class \code{EvaluationModel}. } \usage{ EvaluationModel(...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{\dots}{ defines the arguments passed to create the object of class \code{EvaluationModel}. } } \details{ Evaluation models are used within the Mediana package to specify the measures (metrics) for evaluating the performance of the selected clinical scenario (combination of data and analysis models). \code{EvaluationModel()} is used to create an object of class \code{EvaluationModel} incrementally, using the '+' operator to add objects to the existing \code{EvaluationModel} object. The advantage is to explicitly define which objects are added to the \code{EvaluationModel} object. Initialization with \code{EvaluationModel()} is highly recommended. 
Objects of class \code{Criterion} can be added to an object of class \code{EvaluationModel}. } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{Criterion}}. } \examples{ ## Initialize an EvaluationModel and add objects to it evaluation.model = EvaluationModel() + Criterion(id = "Marginal power", method = "MarginalPower", tests = tests("Placebo vs treatment"), labels = c("Placebo vs treatment"), par = parameters(alpha = 0.025)) } Mediana/man/Test.Rd0000644000176200001440000001225613434027611013600 0ustar liggesusers\name{Test} \alias{Test} %- Also NEED an '\alias' for EACH other topic documented here. \title{Test object } \description{ This function creates an object of class \code{Test} which can be added to an object of class \code{AnalysisModel}. } \usage{ Test(id, method, samples, par = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{id}{ defines the ID of the Test object. } \item{method}{ defines the method of the Test object. } \item{samples}{ defines a list of samples defined in the data model to be used within the selected Test object method. } \item{par}{ defines the parameter(s) of the selected Test object method. } } \details{ Objects of class \code{Test} are used in objects of class \code{AnalysisModel} to define the statistical test to produce. Several objects of class \code{Test} can be added to an object of class \code{AnalysisModel}. The \code{method} argument defines the statistical test method. Several methods are already implemented in the Mediana package (listed below, along with the required parameters to define in the \code{par} parameter): \itemize{ \item \code{TTest}: perform a two-sample t-test between the two samples defined in the \code{samples} argument. Optional parameter: \code{larger} (Larger value is expected in the second sample (\code{TRUE} or \code{FALSE})). Two samples must be defined. \item \code{TTestNI}: perform a non-inferiority two-sample t-test between the two samples defined in the \code{samples} argument. Required parameter: \code{margin}. Optional parameter: \code{larger} (Larger value is expected in the second sample (\code{TRUE} or \code{FALSE})). Two samples must be defined. \item \code{WilcoxTest}: perform a Wilcoxon-Mann-Whitney test between the two samples defined in the \code{samples} argument. Optional parameter: \code{larger} (Larger value is expected in the second sample (\code{TRUE} or \code{FALSE})). Two samples must be defined. \item \code{PropTest}: perform a two-sample test for proportions between the two samples defined in the \code{samples} argument. Optional parameters: \code{yates} (Yates' continuity correction \code{TRUE} or \code{FALSE}) and \code{larger} (Larger value is expected in the second sample (\code{TRUE} or \code{FALSE})). Two samples must be defined. \item \code{PropTestNI}: perform a non-inferiority two-sample test for proportions between the two samples defined in the \code{samples} argument. Required parameter: \code{margin}. Optional parameters: \code{yates} (Yates' continuity correction \code{TRUE} or \code{FALSE}) and \code{larger} (Larger value is expected in the second sample (\code{TRUE} or \code{FALSE})). Two samples must be defined. \item \code{FisherTest}: perform a Fisher exact test between the two samples defined in the \code{samples} argument. Optional parameter: \code{larger} (Larger value is expected in the second sample (\code{TRUE} or \code{FALSE})). Two samples must be defined. 
\item \code{GLMPoissonTest}: perform a Poisson regression test between the two samples defined in the \code{samples} argument. Optional parameter: \code{larger} (Larger value is expected in the second sample (\code{TRUE} or \code{FALSE})). Two samples must be defined. \item \code{GLMNegBinomTest}: perform a Negative-binomial regression test between the two samples defined in the \code{samples} argument. Optional parameter: \code{larger} (Larger value is expected in the second sample (\code{TRUE} or \code{FALSE})). Two samples must be defined. \item \code{LogrankTest}: perform a Log-rank test between the two samples defined in the \code{samples} argument. Optional parameter: \code{larger} (Larger value is expected in the second sample (\code{TRUE} or \code{FALSE})). Two samples must be defined. \item \code{OrdinalLogisticRegTest}: perform an Ordinal logistic regression test between the two samples defined in the \code{samples} argument. Optional parameter: \code{larger} (Larger value is expected in the second sample (\code{TRUE} or \code{FALSE})). Two samples must be defined. } It is to be noted that the statistical tests implemented are one-sided and thus the sample order in the samples argument is important. In particular, the Mediana package assumes by default that a numerically larger value of the endpoint is expected in Sample 2 compared to Sample 1. Suppose, for example, that a higher treatment response indicates a beneficial effect (e.g., higher improvement rate). In this case Sample 1 should include control patients whereas Sample 2 should include patients allocated to the experimental treatment arm. The sample order needs to be reversed if a beneficial treatment effect is associated with a lower value of the endpoint (e.g., lower blood pressure), or alternatively (from version 1.0.6), the optional parameter \code{larger} must be set to \code{FALSE} to indicate that a larger value is expected in the first Sample. } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{AnalysisModel}}. } \examples{ # Analysis model analysis.model = AnalysisModel() + Test(id = "Placebo vs treatment", samples = samples("Placebo", "Treatment"), method = "TTest") } Mediana/man/OutcomeDist.Rd0000644000176200001440000001660313434027611015120 0ustar liggesusers\name{OutcomeDist} \alias{OutcomeDist} %- Also NEED an '\alias' for EACH other topic documented here. \title{OutcomeDist object } \description{ This function creates an object of class \code{OutcomeDist} which can be added to an object of class \code{DataModel}. } \usage{ OutcomeDist(outcome.dist, outcome.type = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{outcome.dist}{ defines the outcome distribution. } \item{outcome.type}{ defines the outcome type. } } \details{ Objects of class \code{OutcomeDist} are used in objects of class \code{DataModel} to specify the outcome distribution of the generated data. A single object of class \code{OutcomeDist} can be added to an object of class \code{DataModel}. Several distributions are already implemented in the Mediana package (listed below, along with the required parameters to specify in the \code{outcome.par} argument of the \code{Sample} object) to be used in the \code{outcome.dist} argument: \itemize{ \item \code{UniformDist}: generate data following a uniform distribution. Required parameter: \code{max}. \item \code{NormalDist}: generate data following a normal distribution. Required parameters: \code{mean} and \code{sd}. 
\item \code{BinomDist}: generate data following a binomial distribution. Required parameter: \code{prop}. \item \code{BetaDist}: generate data following a beta distribution. Required parameters: \code{a} and \code{b}. \item \code{ExpoDist}: generate data following an exponential distribution. Required parameter: \code{rate}. \item \code{WeibullDist}: generate data following a Weibull distribution. Required parameters: \code{shape} and \code{scale}. \item \code{TruncatedExpoDist}: generate data following a truncated exponential distribution. Required parameters: \code{rate} and \code{trunc}. \item \code{PoissonDist}: generate data following a Poisson distribution. Required parameter: \code{lambda}. \item \code{NegBinomDist}: generate data following a negative binomial distribution. Required parameters: \code{dispersion} and \code{mean}. \item \code{MultinomialDist}: generate data following a multinomial distribution. Required parameter: \code{prob}. \item \code{MVNormalDist}: generate data following a multivariate normal distribution. Required parameters: \code{par} and \code{corr}. For each generated endpoint, the \code{par} parameter must contain the required parameters \code{mean} and \code{sd}. The \code{corr} parameter specifies the correlation matrix for the endpoints. \item \code{MVBinomDist}: generate data following a multivariate binomial distribution. Required parameters: \code{par} and \code{corr}. For each generated endpoint, the \code{par} parameter must contain the required parameter \code{prop}. The \code{corr} parameter specifies the correlation matrix for the endpoints. \item \code{MVExpoDist}: generate data following a multivariate exponential distribution. Required parameters: \code{par} and \code{corr}. For each generated endpoint, the \code{par} parameter must contain the required parameter \code{rate}. The \code{corr} parameter specifies the correlation matrix for the endpoints. \item \code{MVExpoPFSOSDist}: generate data following a multivariate exponential distribution to generate PFS and OS endpoints. The PFS value is set to the OS value if the latter occurs earlier. Required parameters: \code{par} and \code{corr}. For each generated endpoint, the \code{par} parameter must contain the required parameter \code{rate}. The \code{corr} parameter specifies the correlation matrix for the endpoints. \item \code{MVMixedDist}: generate data following a multivariate mixed distribution. Required parameters: \code{type}, \code{par} and \code{corr}. The \code{type} parameter can take the following values: \itemize{ \item \code{NormalDist} \item \code{BinomDist} \item \code{ExpoDist} } For each generated endpoint, the \code{par} parameter must contain the required parameters according to the type of distribution. The \code{corr} parameter specifies the correlation matrix for the endpoints. } The \code{outcome.type} argument defines the outcome type. This argument accepts only two values: \itemize{ \item \code{standard}: for a fixed design setting. \item \code{event}: for an event-driven design setting. } In the case of a multivariate distribution, the outcome type must be defined for each endpoint, e.g. \code{c("event", "event")} for a multivariate exponential distribution. } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{DataModel}}. 
} \examples{ # Simple example with a univariate distribution # Outcome parameter set 1 outcome1.placebo = parameters(mean = 0, sd = 70) outcome1.treatment = parameters(mean = 40, sd = 70) # Outcome parameter set 2 outcome2.placebo = parameters(mean = 0, sd = 70) outcome2.treatment = parameters(mean = 50, sd = 70) # Data model data.model = DataModel() + OutcomeDist(outcome.dist = "NormalDist") + SampleSize(c(50, 55, 60, 65, 70)) + Sample(id = "Placebo", outcome.par = parameters(outcome1.placebo, outcome2.placebo)) + Sample(id = "Treatment", outcome.par = parameters(outcome1.treatment, outcome2.treatment)) # Complex example with a multivariate distribution combining a binomial and a normal endpoint # Variable types var.type = list("BinomDist", "NormalDist") # Outcome distribution parameters plac.par = list(list(prop = 0.3), list(mean = -0.10, sd = 0.5)) dosel.par1 = list(list(prop = 0.40), list(mean = -0.20, sd = 0.5)) dosel.par2 = list(list(prop = 0.45), list(mean = -0.25, sd = 0.5)) dosel.par3 = list(list(prop = 0.50), list(mean = -0.30, sd = 0.5)) doseh.par1 = list(list(prop = 0.50), list(mean = -0.30, sd = 0.5)) doseh.par2 = list(list(prop = 0.55), list(mean = -0.35, sd = 0.5)) doseh.par3 = list(list(prop = 0.60), list(mean = -0.40, sd = 0.5)) # Correlation between two endpoints corr.matrix = matrix(c(1.0, 0.5, 0.5, 1.0), 2, 2) # Outcome parameter set 1 outcome1.plac = list(type = var.type, par = plac.par, corr = corr.matrix) outcome1.dosel = list(type = var.type, par = dosel.par1, corr = corr.matrix) outcome1.doseh = list(type = var.type, par = doseh.par1, corr = corr.matrix) # Outcome parameter set 2 outcome2.plac = list(type = var.type, par = plac.par, corr = corr.matrix) outcome2.dosel = list(type = var.type, par = dosel.par2, corr = corr.matrix) outcome2.doseh = list(type = var.type, par = doseh.par2, corr = corr.matrix) # Outcome parameter set 3 outcome3.plac = list(type = var.type, par = plac.par, corr = corr.matrix) outcome3.doseh = list(type = var.type, par = doseh.par3, corr = corr.matrix) outcome3.dosel = list(type = var.type, par = dosel.par3, corr = corr.matrix) # Data model data.model = DataModel() + OutcomeDist(outcome.dist = "MVMixedDist") + SampleSize(c(100, 120)) + Sample(id = list("Plac ACR20", "Plac HAQ-DI"), outcome.par = parameters(outcome1.plac, outcome2.plac, outcome3.plac)) + Sample(id = list("DoseL ACR20", "DoseL HAQ-DI"), outcome.par = parameters(outcome1.dosel, outcome2.dosel, outcome3.dosel)) + Sample(id = list("DoseH ACR20", "DoseH HAQ-DI"), outcome.par = parameters(outcome1.doseh, outcome2.doseh, outcome3.doseh)) } Mediana/man/Table.Rd0000644000176200001440000000331013434027611013677 0ustar liggesusers\name{Table} \alias{Table} %- Also NEED an '\alias' for EACH other topic documented here. \title{Table object } \description{ This function creates an object of class \code{Table} which can be added to an object of class \code{PresentationModel}. } \usage{ Table(by) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{by}{ defines the parameter(s) used to sort the results tables in the report. } } \details{ Objects of class \code{Table} are used in objects of class \code{PresentationModel} to define how the results will be sorted in the results tables of the report. If a \code{Table} object is added to a \code{PresentationModel} object, the report will generate tables sorted according to the parameter(s) defined in the \code{by} argument. A single object of class \code{Table} can be added to an object of class \code{PresentationModel}. 
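For example, the results tables can be sorted by several parameters at once, as in this minimal sketch (assuming that both parameters vary across the simulated scenarios):
\preformatted{
Table(by = c("sample.size", "outcome.parameter"))
}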
One or several of the following parameters can be defined in the \code{by} argument: \itemize{ \item \code{"sample.size"} \item \code{"event"} \item \code{"outcome.parameter"} \item \code{"design.parameter"} \item \code{"multiplicity.adjustment"} } } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{PresentationModel}}. } \examples{ # Reporting presentation.model = PresentationModel() + Section(by = "outcome.parameter") + Table(by = "sample.size") + CustomLabel(param = "sample.size", label = paste0("N = ", c(50, 55, 60, 65, 70))) + CustomLabel(param = "outcome.parameter", label = c("Standard 1", "Standard 2")) # In this report, one section will be created for each outcome parameter assumption. # The tables presented within each section will be sorted by sample size. } Mediana/man/GenerateData.Rd0000644000176200001440000001116413434027611015202 0ustar liggesusers\name{GenerateData} \alias{GenerateData} %- Also NEED an '\alias' for EACH other topic documented here. \title{Generate data } \description{ This function generates data according to the specified data model. } \usage{ GenerateData(data.model, sim.parameters) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{data.model}{ defines a \code{DataModel} object. } \item{sim.parameters}{ defines a \code{SimParameters} object. } } \value{ This function generates a data stack according to the data model and the simulation parameters objects. The object returned by the function is a \code{DataStack} object containing: \item{description }{a description of the object.} \item{data.set }{a list of size \code{n.sims} defined in the \code{sim.parameters} object.} \item{data.scenario.grid }{a data frame indicating all data scenarios according to the \code{DataModel} object.} \item{data.structure }{a list containing the data structure according to the \code{DataModel} object.} \item{sim.parameters }{a list containing the simulation parameters according to the \code{SimParameters} object.} } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{DataModel}} and \code{\link{SimParameters}}. 
} \examples{ \dontrun{ # Generation of a DataStack object ################################## # Outcome parameter set 1 outcome1.placebo = parameters(mean = 0, sd = 70) outcome1.treatment = parameters(mean = 40, sd = 70) # Outcome parameter set 2 outcome2.placebo = parameters(mean = 0, sd = 70) outcome2.treatment = parameters(mean = 50, sd = 70) # Data model case.study1.data.model = DataModel() + OutcomeDist(outcome.dist = "NormalDist") + SampleSize(c(50, 55, 60, 65, 70)) + Sample(id = "Placebo", outcome.par = parameters(outcome1.placebo, outcome2.placebo)) + Sample(id = "Treatment", outcome.par = parameters(outcome1.treatment, outcome2.treatment)) # Simulation Parameters case.study1.sim.parameters = SimParameters(n.sims = 1000, proc.load = 2, seed = 42938001) # Generate data case.study1.data.stack = GenerateData(data.model = case.study1.data.model, sim.parameters = case.study1.sim.parameters) # Print the data set generated in the 100th simulation run for the 2nd data scenario case.study1.data.stack$data.set[[100]]$data.scenario[[2]] } \dontrun{ # Use of a DataStack object in the CSE function ############################################## # Outcome parameter set 1 outcome1.placebo = parameters(mean = 0, sd = 70) outcome1.treatment = parameters(mean = 40, sd = 70) # Outcome parameter set 2 outcome2.placebo = parameters(mean = 0, sd = 70) outcome2.treatment = parameters(mean = 50, sd = 70) # Data model case.study1.data.model = DataModel() + OutcomeDist(outcome.dist = "NormalDist") + SampleSize(c(50, 55, 60, 65, 70)) + Sample(id = "Placebo", outcome.par = parameters(outcome1.placebo, outcome2.placebo)) + Sample(id = "Treatment", outcome.par = parameters(outcome1.treatment, outcome2.treatment)) # Simulation Parameters case.study1.sim.parameters = SimParameters(n.sims = 1000, proc.load = 2, seed = 42938001) # Generate data case.study1.data.stack = GenerateData(data.model = case.study1.data.model, sim.parameters = case.study1.sim.parameters) # Analysis model case.study1.analysis.model = AnalysisModel() + Test(id = "Placebo vs treatment", samples = samples("Placebo", "Treatment"), method = "TTest") # Evaluation model case.study1.evaluation.model = EvaluationModel() + Criterion(id = "Marginal power", method = "MarginalPower", tests = tests("Placebo vs treatment"), labels = c("Placebo vs treatment"), par = parameters(alpha = 0.025)) # Perform clinical scenario evaluation case.study1.results = CSE(case.study1.data.stack, case.study1.analysis.model, case.study1.evaluation.model, case.study1.sim.parameters) } } Mediana/man/Sample.Rd0000644000176200001440000000423313434027611014076 0ustar liggesusers\name{Sample} \alias{Sample} %- Also NEED an '\alias' for EACH other topic documented here. \title{Sample object } \description{ This function creates an object of class \code{Sample} which can be added to an object of class \code{DataModel}. } \usage{ Sample(id, outcome.par, sample.size = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{id}{ defines the ID of the sample. } \item{outcome.par}{ defines the parameters of the outcome distribution of the sample. } \item{sample.size}{ defines the sample size of the sample (optional). } } \details{ Objects of class \code{Sample} are used in objects of class \code{DataModel} to specify a sample. Several objects of class \code{Sample} can be added to an object of class \code{DataModel}. 
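For example, an unbalanced design can be specified by defining the sample size directly in each \code{Sample} object instead of through a \code{SampleSize} object; a minimal sketch (the outcome parameters are illustrative; see the discussion of the \code{sample.size} argument below):
\preformatted{
data.model = DataModel() +
  OutcomeDist(outcome.dist = "NormalDist") +
  Sample(id = "Placebo",
         sample.size = 50,
         outcome.par = parameters(parameters(mean = 0, sd = 70))) +
  Sample(id = "Treatment",
         sample.size = 100,
         outcome.par = parameters(parameters(mean = 40, sd = 70)))
}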
Mandatory arguments are \code{id} and \code{outcome.par}. The \code{sample.size} argument is optional but must be used to define the sample size if unbalanced samples need to be defined. The sample size must be defined either in the \code{Sample} object or in the \code{SampleSize} object, but not in both. \code{outcome.par} defines the sample-specific parameters of the \code{OutcomeDist} object. The parameters required for each distribution can be found in \code{\link{OutcomeDist}}. } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{DataModel}} and \code{\link{OutcomeDist}}. } \examples{ # Outcome parameter set 1 outcome1.placebo = parameters(mean = 0, sd = 70) outcome1.treatment = parameters(mean = 40, sd = 70) # Outcome parameter set 2 outcome2.placebo = parameters(mean = 0, sd = 70) outcome2.treatment = parameters(mean = 50, sd = 70) # Data model case.study1.data.model = DataModel() + OutcomeDist(outcome.dist = "NormalDist") + SampleSize(c(50, 55, 60, 65, 70)) + Sample(id = "Placebo", outcome.par = parameters(outcome1.placebo, outcome2.placebo)) + Sample(id = "Treatment", outcome.par = parameters(outcome1.treatment, outcome2.treatment)) } Mediana/man/CustomLabel.Rd0000644000176200001440000000300413434027611015062 0ustar liggesusers\name{CustomLabel} \alias{CustomLabel} %- Also NEED an '\alias' for EACH other topic documented here. \title{ CustomLabel object } \description{ This function creates an object of class \code{CustomLabel} which can be added to an object of class \code{PresentationModel}. } \usage{ CustomLabel(param, label) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{param}{ defines a parameter to which the label(s) will be assigned. } \item{label}{ defines the label(s) to assign to the parameter. } } \details{ Objects of class \code{CustomLabel} are used in objects of class \code{PresentationModel} to specify the labels that will be assigned to the parameter. Several objects of class \code{CustomLabel} can be added to an object of class \code{PresentationModel}. The \code{param} argument only accepts the following values: \itemize{ \item \code{"sample.size"} \item \code{"event"} \item \code{"outcome.parameter"} \item \code{"design.parameter"} \item \code{"multiplicity.adjustment"} } } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{PresentationModel}}. } \examples{ ## Create a PresentationModel object with customized labels presentation.model = PresentationModel() + Section(by = "outcome.parameter") + Table(by = "sample.size") + CustomLabel(param = "sample.size", label = paste0("N = ", c(50, 55, 60, 65, 70))) + CustomLabel(param = "outcome.parameter", label = c("Standard 1", "Standard 2")) } Mediana/man/PresentationModel.Rd0000644000176200001440000000361513434027611016314 0ustar liggesusers\name{PresentationModel} \alias{PresentationModel} %- Also NEED an '\alias' for EACH other topic documented here. \title{PresentationModel object } \description{ \code{PresentationModel()} initializes an object of class \code{PresentationModel}. } \usage{ PresentationModel(...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{\dots}{ defines the arguments passed to create the object of class \code{PresentationModel}. } } \details{ Presentation models can be used to create a customized structure to report the results. Project information, the structure of sections and subsections, the sorting of results tables, and the labeling of scenarios can all be defined. 
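For instance, a report structure can be assembled step by step; a minimal sketch (the object name is arbitrary):
\preformatted{
pm = PresentationModel()
pm = pm + Section(by = "outcome.parameter")
pm = pm + Table(by = "sample.size")
}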
\code{PresentationModel()} is used to create an object of class \code{PresentationModel} incrementally, using the '+' operator to add objects to the existing \code{PresentationModel} object. The advantage is to explicitly define which objects are added to the \code{PresentationModel} object. Initialization with \code{PresentationModel()} is highly recommended. Objects of class \code{Project}, \code{Section}, \code{Subsection}, \code{Table} and \code{CustomLabel} can be added to an object of class \code{PresentationModel}. } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{Project}}, \code{\link{Section}}, \code{\link{Subsection}}, \code{\link{Table}} and \code{\link{CustomLabel}}. } \examples{ presentation.model = PresentationModel() + Project(username = "Gautier Paux", title = "Clinical trial", description = "Simulation report for my clinical trial") + Section(by = "outcome.parameter") + Table(by = "sample.size") + CustomLabel(param = "sample.size", label = paste0("N = ", c(50, 55, 60, 65, 70))) + CustomLabel(param = "outcome.parameter", label = c("Standard 1", "Standard 2")) } Mediana/man/Statistic.Rd0000644000176200001440000001007113434027611014621 0ustar liggesusers\name{Statistic} \alias{Statistic} %- Also NEED an '\alias' for EACH other topic documented here. \title{Statistic object } \description{ This function creates an object of class \code{Statistic} which can be added to an object of class \code{AnalysisModel}. } \usage{ Statistic(id, method, samples, par = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{id}{ defines the ID of the statistic. } \item{method}{ defines the method used to compute the statistic. } \item{samples}{ defines a list of sample(s) (defined in the data model) to be used by the statistic method. } \item{par}{ defines the parameter(s) of the method for computing the statistic. } } \details{ Objects of class \code{Statistic} are used in objects of class \code{AnalysisModel} to define the descriptive statistics to be computed. Several objects of class \code{Statistic} can be added to an object of class \code{AnalysisModel}. The \code{method} argument defines the statistical method. Several methods are already implemented in the Mediana package (listed below, along with the required parameters to define in the \code{par} parameter): \itemize{ \item \code{MedianStat}: compute the median of the sample defined in the \code{samples} argument. \item \code{MeanStat}: compute the mean of the sample defined in the \code{samples} argument. \item \code{SdStat}: compute the standard deviation of the sample defined in the \code{samples} argument. \item \code{MinStat}: compute the minimum of the sample defined in the \code{samples} argument. \item \code{MaxStat}: compute the maximum of the sample defined in the \code{samples} argument. \item \code{DiffMeanStat}: compute the difference of means between the two samples defined in the \code{samples} argument. Two samples must be defined. \item \code{EffectSizeContStat}: compute the effect size for a continuous endpoint. Two samples must be defined. \item \code{RatioEffectSizeContStat}: compute the ratio of two effect sizes for a continuous endpoint. Four samples must be defined. \item \code{PropStat}: compute the proportion of the sample defined in the \code{samples} argument. \item \code{DiffPropStat}: compute the difference of the proportions between the two samples defined in the \code{samples} argument. Two samples must be defined. 
\item \code{EffectSizePropStat}: compute the effect size for a binary endpoint. Two samples must be defined. \item \code{RatioEffectSizePropStat}: compute the ratio of two effect sizes for a binary endpoint. Four samples must be defined. \item \code{HazardRatioStat}: compute the hazard ratio of the two samples defined in the \code{samples} argument. Two samples must be defined. By default the Log-Rank method is used. Optional parameter: \code{method}, taking the value \code{Log-Rank} or \code{Cox}. \item \code{EffectSizeEventStat}: compute the effect size for a survival endpoint (log of the HR). Two samples must be defined. By default the Log-Rank method is used. Optional parameter: \code{method}, taking the value \code{Log-Rank} or \code{Cox}. \item \code{RatioEffectSizeEventStat}: compute the ratio of two effect sizes for a survival endpoint. Four samples must be defined. By default the Log-Rank method is used. Optional parameter: \code{method}, taking the value \code{Log-Rank} or \code{Cox}. \item \code{EventCountStat}: compute the number of events observed in the sample(s) defined in the \code{samples} argument. \item \code{PatientCountStat}: compute the number of patients observed in the sample(s) defined in the \code{samples} argument. } } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{AnalysisModel}}. } \examples{ # Analysis model analysis.model = AnalysisModel() + Test(id = "Placebo vs treatment", samples = samples("Placebo", "Treatment"), method = "TTest") + Statistic(id = "Mean Treatment", method = "MeanStat", samples = samples("Treatment")) } Mediana/man/Project.Rd0000644000176200001440000000335713434027611014271 0ustar liggesusers\name{Project} \alias{Project} %- Also NEED an '\alias' for EACH other topic documented here. \title{Project object } \description{ This function creates an object of class \code{Project} which can be added to an object of class \code{PresentationModel}. } \usage{ Project(username = "[Unknown User]", title = "[Unknown title]", description = "[No description]") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{username}{ defines the username to be printed in the report. } \item{title}{ defines the title of the project to be printed in the report. } \item{description}{ defines the description of the project to be printed in the report. } } \details{ Objects of class \code{Project} are used in objects of class \code{PresentationModel} to add details about the project, such as the author, a title and a description of the project. This information will be added to the report generated using the \code{GenerateReport} function. A single object of class \code{Project} can be added to an object of class \code{PresentationModel}. } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{PresentationModel}} and \code{\link{GenerateReport}}. 
} \examples{ # Reporting presentation.model = PresentationModel() + Project(username = "[Mediana's User]", title = "Case study 1", description = "Clinical trial in patients with pulmonary arterial hypertension") + Section(by = "outcome.parameter") + Table(by = "sample.size") + CustomLabel(param = "sample.size", label = paste0("N = ", c(50, 55, 60, 65, 70))) + CustomLabel(param = "outcome.parameter", label = c("Standard 1", "Standard 2")) } Mediana/man/DataModel.Rd0000644000176200001440000000347613434027611014517 0ustar liggesusers\name{DataModel} \alias{DataModel} %- Also NEED an '\alias' for EACH other topic documented here. \title{ DataModel object } \description{ \code{DataModel()} initializes an object of class \code{DataModel}. } \usage{ DataModel(...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{\dots}{ defines the arguments passed to create the object of class \code{DataModel}. } } \details{ Data models define the process of generating patient data in a clinical trial. \code{DataModel()} is used to create an object of class \code{DataModel} incrementally, using the '+' operator to add objects to the existing \code{DataModel} object. The advantage is to explicitly define which objects are added to the \code{DataModel} object. Initialization with \code{DataModel()} is highly recommended. Objects of class \code{OutcomeDist}, \code{SampleSize}, \code{Sample}, \code{Event} and \code{Design} can be added to an object of class \code{DataModel}. } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{OutcomeDist}}, \code{\link{SampleSize}}, \code{\link{Sample}} and \code{\link{Design}}. } \examples{ # Outcome parameter set 1 outcome1.placebo = parameters(mean = 0, sd = 70) outcome1.treatment = parameters(mean = 40, sd = 70) # Outcome parameter set 2 outcome2.placebo = parameters(mean = 0, sd = 70) outcome2.treatment = parameters(mean = 50, sd = 70) # Data model data.model = DataModel() + OutcomeDist(outcome.dist = "NormalDist") + SampleSize(c(50, 55, 60, 65, 70)) + Sample(id = "Placebo", outcome.par = parameters(outcome1.placebo, outcome2.placebo)) + Sample(id = "Treatment", outcome.par = parameters(outcome1.treatment, outcome2.treatment)) } Mediana/man/AdjustPvalues.Rd0000644000176200001440000002244113434027611015450 0ustar liggesusers\name{AdjustPvalues} \alias{AdjustPvalues} %- Also NEED an '\alias' for EACH other topic documented here. \title{ AdjustPvalues function } \description{Computation of adjusted p-values for commonly used multiple testing procedures based on univariate p-values (Bonferroni, Holm, Hommel, Hochberg, fixed-sequence and Fallback procedures), commonly used parametric multiple testing procedures (single-step and step-down Dunnett procedures) and multistage gatekeeping procedures. } \usage{ AdjustPvalues(pval, proc, par = NA) } \arguments{ \item{pval}{ defines the raw p-values. } \item{proc}{ defines the multiple testing procedure. Several procedures are already implemented in the Mediana package (listed below, along with the required or optional parameters to specify in the \code{par} argument): \itemize{ \item \code{BonferroniAdj}: Bonferroni procedure. Optional parameter: \code{weight}. \item \code{HolmAdj}: Holm procedure. Optional parameter: \code{weight}. \item \code{HochbergAdj}: Hochberg procedure. Optional parameter: \code{weight}. \item \code{HommelAdj}: Hommel procedure. Optional parameter: \code{weight}. \item \code{FixedSeqAdj}: Fixed-sequence procedure. 
\item \code{DunnettAdj}: Single-step Dunnett procedure. Required parameter: \code{n}. \item \code{StepDownDunnettAdj}: Step-down Dunnett procedure. Required parameter: \code{n}. \item \code{ChainAdj}: Family of chain procedures. Required parameters: \code{weight} and \code{transition}. \item \code{FallbackAdj}: Fallback procedure. Required parameter: \code{weight}. \item \code{NormalParamAdj}: Parametric multiple testing procedure derived from a multivariate normal distribution. Required parameter: \code{corr}. Optional parameter: \code{weight}. \item \code{ParallelGatekeepingAdj}: Family of parallel gatekeeping procedures. Required parameters: \code{family}, \code{proc}, \code{gamma}. \item \code{MultipleSequenceGatekeepingAdj}: Family of multiple-sequence gatekeeping procedures. Required parameters: \code{family}, \code{proc}, \code{gamma}. \item \code{MixtureGatekeepingAdj}: Family of mixture-based gatekeeping procedures. Required parameters: \code{family}, \code{proc}, \code{gamma}, \code{serial}, \code{parallel}. } } \item{par}{ defines the parameters associated with the multiple testing procedure. } } \details{ This function can be used to adjust p-values according to a multiple testing procedure defined in the \code{proc} argument. This function computes adjusted p-values and generates decision rules for the Bonferroni, Holm (Holm, 1979), Hommel (Hommel, 1988), Hochberg (Hochberg, 1988), fixed-sequence (Westfall and Krishen, 2001) and Fallback (Wiens, 2003; Wiens and Dmitrienko, 2005) procedures. The adjusted p-values are computed using the closure principle (Marcus, Peritz and Gabriel, 1976) in general hypothesis testing problems (equally or unequally weighted null hypotheses). For more information on the algorithms used in the function, see Dmitrienko et al. (2009, Section 2.6). This function computes adjusted p-values for the single-step Dunnett procedure (Dunnett, 1955) and step-down Dunnett procedure (Naik, 1975; Marcus, Peritz and Gabriel, 1976) in one-sided hypothesis testing problems with a balanced one-way layout and equally weighted null hypotheses. For the Dunnett procedures, it is assumed that the test statistics follow a t distribution. For more information on the algorithms used in the function, see Dmitrienko et al. (2009, Section 2.7). This function computes adjusted p-values and generates decision rules for multistage parallel gatekeeping procedures in hypothesis testing problems with multiple families of null hypotheses (null hypotheses are assumed to be equally weighted within each family) based on the methodology presented in Dmitrienko, Tamhane and Wiens (2008), Dmitrienko, Kordzakhia and Tamhane (2011) and Dmitrienko, Kordzakhia and Brechenmacher (2016). For more information on parallel gatekeeping procedures (computation of adjusted p-values, independence condition, etc.), see Dmitrienko and Tamhane (2009, Section 5.4). } \value{ Returns a vector of adjusted p-values. } \references{ \url{http://gpaux.github.io/Mediana/} Dmitrienko, A., Bretz, F., Westfall, P.H., Troendle, J., Wiens, B.L., Tamhane, A.C., Hsu, J.C. (2009). Multiple testing methodology. \emph{Multiple Testing Problems in Pharmaceutical Statistics}. Dmitrienko, A., Tamhane, A.C., Bretz, F. (editors). Chapman and Hall/CRC Press, New York. \cr Dmitrienko, A., Kordzakhia, G., Tamhane, A.C. (2011). Multistage and mixture parallel gatekeeping procedures in clinical trials. \emph{Journal of Biopharmaceutical Statistics}. 21, 726--747. \cr Dmitrienko, A., Tamhane, A., Wiens, B. (2008). 
General multistage gatekeeping procedures. \emph{Biometrical Journal}. 50, 667--677. \cr Dmitrienko, A., Tamhane, A.C. (2009). Gatekeeping procedures in clinical trials. \emph{Multiple Testing Problems in Pharmaceutical Statistics}. Dmitrienko, A., Tamhane, A.C., Bretz, F. (editors). Chapman and Hall/CRC Press, New York. \cr Dmitrienko, A., Kordzakhia, G., Brechenmacher, T. (2016). Mixture-based gatekeeping procedures for multiplicity problems with multiple sequences of hypotheses. \emph{Journal of Biopharmaceutical Statistics}. 26, 758--780. \cr Dunnett, C.W. (1955). A multiple comparison procedure for comparing several treatments with a control. \emph{Journal of the American Statistical Association}. 50, 1096--1121. \cr Hochberg, Y. (1988). A sharper Bonferroni procedure for multiple significance testing. \emph{Biometrika}. 75, 800--802. \cr Holm, S. (1979). A simple sequentially rejective multiple test procedure. \emph{Scandinavian Journal of Statistics}. 6, 65--70. \cr Hommel, G. (1988). A stagewise rejective multiple test procedure based on a modified Bonferroni test. \emph{Biometrika}. 75, 383--386. \cr Marcus, R., Peritz, E., Gabriel, K.R. (1976). On closed testing procedures with special reference to ordered analysis of variance. \emph{Biometrika}. 63, 655--660. \cr Naik, U.D. (1975). Some selection rules for comparing \eqn{p} processes with a standard. \emph{Communications in Statistics. Series A}. 4, 519--535. \cr Westfall, P.H., Krishen, A. (2001). Optimally weighted, fixed sequence, and gatekeeping multiple testing procedures. \emph{Journal of Statistical Planning and Inference}. 99, 25--40. \cr Wiens, B. (2003). A fixed-sequence Bonferroni procedure for testing multiple endpoints. \emph{Pharmaceutical Statistics}. 2, 211--215. \cr Wiens, B., Dmitrienko, A. (2005). The fallback procedure for evaluating a single family of hypotheses. \emph{Journal of Biopharmaceutical Statistics}. 15, 929--942. \cr } \seealso{ See Also \code{\link{MultAdjProc}} and \code{\link{AdjustCIs}}. 
} \examples{ # Bonferroni, Holm, Hochberg, Hommel, fixed-sequence and fallback procedures proc = c("BonferroniAdj", "HolmAdj", "HochbergAdj", "HommelAdj", "FixedSeqAdj", "FallbackAdj") rawp = c(0.012, 0.009, 0.023) # Equally weighted sapply(proc, function(x) {AdjustPvalues(rawp, proc = x)}) # Unequally weighted (no effect on the fixed-sequence procedure) sapply(proc, function(x) {AdjustPvalues(rawp, proc = x, par = parameters(weight = c(1/2, 1/4, 1/4)))}) # Dunnett procedures # Compute one-sided adjusted p-values for the single-step Dunnett procedure # Three null hypotheses of no effect are tested in the trial: # Null hypothesis H1: No difference between Dose 1 and Placebo # Null hypothesis H2: No difference between Dose 2 and Placebo # Null hypothesis H3: No difference between Dose 3 and Placebo # Treatment effect estimates (mean dose-placebo differences) est = c(2.3, 2.5, 1.9) # Pooled standard deviation sd = 9.5 # Study design is balanced with 180 patients per treatment arm n = 180 # Standard errors stderror = rep(sd*sqrt(2/n), 3) # T-statistics associated with the three dose-placebo tests stat = est/stderror # One-sided p-values rawp = 1 - pt(stat, 2*(n-1)) # Adjusted p-values based on the Dunnett procedures # (assuming that each test statistic follows a t distribution) AdjustPvalues(rawp, proc = "DunnettAdj", par = parameters(n = n)) AdjustPvalues(rawp, proc = "StepDownDunnettAdj", par = parameters(n = n)) # Parallel gatekeeping # Consider a clinical trial with two families of null hypotheses # Family 1: Primary null hypotheses (one-sided p-values) # H1 (Endpoint 1), p1=0.0082 # H2 (Endpoint 2), p2=0.0174 # Family 2: Secondary null hypotheses (one-sided p-values) # H3 (Endpoint 3), p3=0.0042 # H4 (Endpoint 4), p4=0.0180 # Define raw p-values rawp <- c(0.0082, 0.0174, 0.0042, 0.0180) # Define hypotheses included in each family family = families(family1 = c(1, 2), family2 = c(3, 4)) # Define component procedure of each family component.procedure = families(family1 = "HolmAdj", family2 = "HolmAdj") # Truncation parameter of each family gamma = families(family1 = 0.5, family2 = 1) adjustp = AdjustPvalues(rawp, proc = "ParallelGatekeepingAdj", par = parameters(family = family, proc = component.procedure, gamma = gamma)) } Mediana/man/SampleSize.Rd0000644000176200001440000000424413434027611014733 0ustar liggesusers\name{SampleSize} \alias{SampleSize} %- Also NEED an '\alias' for EACH other topic documented here. \title{SampleSize object } \description{ This function creates an object of class \code{SampleSize} which can be added to an object of class \code{DataModel}. } \usage{ SampleSize(sample.size) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{sample.size}{ a list or vector of sample size(s). } } \details{ Objects of class \code{SampleSize} are used in objects of class \code{DataModel} to specify the sample size in the case of a balanced design (all samples will have the same sample size). A single object of class \code{SampleSize} can be added to an object of class \code{DataModel}. Either objects of class \code{Event} or \code{SampleSize} can be added to an object of class \code{DataModel}, but not both. } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{DataModel}}. 
} \examples{ # Outcome parameter set 1 outcome1.placebo = parameters(mean = 0, sd = 70) outcome1.treatment = parameters(mean = 40, sd = 70) # Outcome parameter set 2 outcome2.placebo = parameters(mean = 0, sd = 70) outcome2.treatment = parameters(mean = 50, sd = 70) # Data model case.study1.data.model = DataModel() + OutcomeDist(outcome.dist = "NormalDist") + SampleSize(c(50, 55, 60, 65, 70)) + Sample(id = "Placebo", outcome.par = parameters(outcome1.placebo, outcome2.placebo)) + Sample(id = "Treatment", outcome.par = parameters(outcome1.treatment, outcome2.treatment)) # Equivalent to: case.study1.data.model = DataModel() + OutcomeDist(outcome.dist = "NormalDist") + SampleSize(seq(50, 70, 5)) + Sample(id = "Placebo", outcome.par = parameters(outcome1.placebo, outcome2.placebo)) + Sample(id = "Treatment", outcome.par = parameters(outcome1.treatment, outcome2.treatment)) } Mediana/man/Section.Rd0000644000176200001440000000316713434027611014266 0ustar liggesusers\name{Section} \alias{Section} %- Also NEED an '\alias' for EACH other topic documented here. \title{Section object } \description{ This function creates an object of class \code{Section} which can be added to an object of class \code{PresentationModel}. } \usage{ Section(by) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{by}{ defines the parameter used to create the sections of the report. } } \details{ Objects of class \code{Section} are used in objects of class \code{PresentationModel} to define how the results will be presented in the report. If a \code{Section} object is added to a \code{PresentationModel} object, the report will have sections according to the parameter defined in the \code{by} argument. A single object of class \code{Section} can be added to an object of class \code{PresentationModel}. One or several parameters can be defined in the \code{by} argument: \itemize{ \item \code{"sample.size"} \item \code{"event"} \item \code{"outcome.parameter"} \item \code{"design.parameter"} \item \code{"multiplicity.adjustment"} } } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{PresentationModel}}. } \examples{ # Reporting presentation.model = PresentationModel() + Section(by = "outcome.parameter") + Table(by = "sample.size") + CustomLabel(param = "sample.size", label = paste0("N = ", c(50, 55, 60, 65, 70))) + CustomLabel(param = "outcome.parameter", label = c("Standard 1", "Standard 2")) # In this report, one section will be created for each outcome parameter assumption. } Mediana/man/MultAdjProc.Rd0000644000176200001440000000701313434027611015040 0ustar liggesusers\name{MultAdjProc} \alias{MultAdjProc} %- Also NEED an '\alias' for EACH other topic documented here. \title{MultAdjProc object } \description{ This function creates an object of class \code{MultAdjProc} which can be added to objects of class \code{AnalysisModel}, \code{MultAdj} or \code{MultAdjStrategy}. } \usage{ MultAdjProc(proc, par = NULL, tests = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{proc}{ defines a multiplicity adjustment procedure. } \item{par}{ defines the parameters of the multiplicity adjustment procedure (optional). } \item{tests}{ defines the tests taken into account in the multiplicity adjustment procedure. } } \details{ Objects of class \code{MultAdjProc} are used in objects of class \code{AnalysisModel} to specify a multiplicity adjustment procedure that will be applied to the statistical tests to protect the overall Type I error rate. 
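For example, an equally weighted Bonferroni adjustment restricted to two of the tests defined in the analysis model could be specified as follows (a minimal sketch; the test IDs are hypothetical):
\preformatted{
MultAdjProc(proc = "BonferroniAdj",
            par = parameters(weight = c(0.5, 0.5)),
            tests = tests("Test 1", "Test 2"))
}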
Several objects of class \code{MultAdjProc} can be added to an object of class \code{AnalysisModel}, using the '+' operator or by grouping them into a \code{MultAdj} object. The \code{proc} argument defines the multiplicity adjustment procedure. Several procedures are already implemented in the Mediana package (listed below, along with the required or optional parameters to specify in the \code{par} argument): \itemize{ \item \code{BonferroniAdj}: Bonferroni procedure. Optional parameter: \code{weight}. \item \code{HolmAdj}: Holm procedure. Optional parameter: \code{weight}. \item \code{HochbergAdj}: Hochberg procedure. Optional parameter: \code{weight}. \item \code{HommelAdj}: Hommel procedure. Optional parameter: \code{weight}. \item \code{FixedSeqAdj}: Fixed-sequence procedure. \item \code{ChainAdj}: Family of chain procedures. Required parameters: \code{weight} and \code{transition}. \item \code{FallbackAdj}: Fallback procedure. Required parameter: \code{weight}. \item \code{NormalParamAdj}: Parametric multiple testing procedure derived from a multivariate normal distribution. Required parameter: \code{corr}. Optional parameter: \code{weight}. \item \code{ParallelGatekeepingAdj}: Family of parallel gatekeeping procedures. Required parameters: \code{family}, \code{proc}, \code{gamma}. \item \code{MultipleSequenceGatekeepingAdj}: Family of multiple-sequence gatekeeping procedures. Required parameters: \code{family}, \code{proc}, \code{gamma}. \item \code{MixtureGatekeepingAdj}: Family of mixture-based gatekeeping procedures. Required parameters: \code{family}, \code{proc}, \code{gamma}, \code{serial}, \code{parallel}. } If no \code{tests} are defined, the multiplicity adjustment procedure will be applied to all tests defined in the \code{AnalysisModel} object. } \references{ \url{http://gpaux.github.io/Mediana/} } \seealso{ See Also \code{\link{MultAdj}}, \code{\link{MultAdjStrategy}} and \code{\link{AnalysisModel}}. } \examples{ # Parameters of the chain procedure (fixed-sequence procedure) # Vector of hypothesis weights chain.weight = c(1, 0) # Matrix of transition parameters chain.transition = matrix(c(0, 1, 0, 0), 2, 2, byrow = TRUE) # Analysis model analysis.model = AnalysisModel() + MultAdjProc(proc = "ChainAdj", par = parameters(weight = chain.weight, transition = chain.transition)) + Test(id = "PFS test", samples = samples("Plac PFS", "Treat PFS"), method = "LogrankTest") + Test(id = "OS test", samples = samples("Plac OS", "Treat OS"), method = "LogrankTest") } Mediana/man/Mediana-package.Rd0000644000176200001440000002006413464523165015614 0ustar liggesusers\name{Mediana-package} \alias{Mediana-package} \alias{Mediana} \docType{package} \title{ Clinical Trial Simulations } \description{ Provides a general framework for clinical trial simulations based on the Clinical Scenario Evaluation (CSE) approach. The package supports a broad class of data models (including clinical trials with continuous, binary, survival-type and count-type endpoints as well as multivariate outcomes that are based on combinations of different endpoints), analysis strategies and commonly used evaluation criteria. } \details{ \tabular{ll}{ Package: \tab Mediana\cr Type: \tab Package\cr Version: \tab 1.0.8\cr Date: \tab 2019-05-08\cr License: \tab GPL-2\cr } %~~ An overview of how to use the package, including the most important functions ~~ } \author{ Gautier Paux, Alex Dmitrienko Maintainer: Gautier Paux } \references{ Benda, N., Branson, M., Maurer, W., Friede, T. (2010). 
Aspects of modernizing drug development using clinical scenario planning and evaluation. Drug Information Journal. 44, 299-315. Dmitrienko, A., Paux, G., Brechenmacher, T. (2016). Power calculations in clinical trials with complex clinical objectives. Journal of the Japanese Society of Computational Statistics. 28, 15-50. Dmitrienko, A., Paux, G., Pulkstenis, E., Zhang, J. (2016). Tradeoff-based optimization criteria in clinical trials with multiple objectives and adaptive designs. Journal of Biopharmaceutical Statistics. 26, 120-140. Dmitrienko, A. and Pulkstenis, E. (2017). Clinical Trial Optimization Using R. New York: CRC Press. Friede, T., Nicholas, R., Stallard, N., Todd, S., Parsons, N.R., Valdes-Marquez, E., Chataway, J. (2010). Refinement of the clinical scenario evaluation framework for assessment of competing development strategies with an application to multiple sclerosis. Drug Information Journal. 44, 713-718. \url{http://gpaux.github.io/Mediana/} } %~~ Optionally other standard keywords, one per line, from file KEYWORDS in the R documentation directory ~~ \keyword{ package } \examples{ \dontrun{ # Clinical trial in patients with rheumatoid arthritis # Variable types var.type = parameters("BinomDist", "NormalDist") # Outcome distribution parameters plac.par = parameters(parameters(prop = 0.3), parameters(mean = -0.10, sd = 0.5)) dosel.par1 = parameters(parameters(prop = 0.40), parameters(mean = -0.20, sd = 0.5)) dosel.par2 = parameters(parameters(prop = 0.45), parameters(mean = -0.25, sd = 0.5)) dosel.par3 = parameters(parameters(prop = 0.50), parameters(mean = -0.30, sd = 0.5)) doseh.par1 = parameters(parameters(prop = 0.50), parameters(mean = -0.30, sd = 0.5)) doseh.par2 = parameters(parameters(prop = 0.55), parameters(mean = -0.35, sd = 0.5)) doseh.par3 = parameters(parameters(prop = 0.60), parameters(mean = -0.40, sd = 0.5)) # Correlation between two endpoints corr.matrix = matrix(c(1.0, 0.5, 0.5, 1.0), 2, 2) # Outcome parameter set 1 outcome1.plac = parameters(type = var.type, par = plac.par, corr = corr.matrix) outcome1.dosel = parameters(type = var.type, par = dosel.par1, corr = corr.matrix) outcome1.doseh = parameters(type = var.type, par = doseh.par1, corr = corr.matrix) # Outcome parameter set 2 outcome2.plac = parameters(type = var.type, par = plac.par, corr = corr.matrix) outcome2.dosel = parameters(type = var.type, par = dosel.par2, corr = corr.matrix) outcome2.doseh = parameters(type = var.type, par = doseh.par2, corr = corr.matrix) # Outcome parameter set 3 outcome3.plac = parameters(type = var.type, par = plac.par, corr = corr.matrix) outcome3.doseh = parameters(type = var.type, par = doseh.par3, corr = corr.matrix) outcome3.dosel = parameters(type = var.type, par = dosel.par3, corr = corr.matrix) # Data model data.model = DataModel() + OutcomeDist(outcome.dist = "MVMixedDist") + SampleSize(c(100, 120)) + Sample(id = list("Plac ACR20", "Plac HAQ-DI"), outcome.par = parameters(outcome1.plac, outcome2.plac, outcome3.plac)) + Sample(id = list("DoseL ACR20", "DoseL HAQ-DI"), outcome.par = parameters(outcome1.dosel, outcome2.dosel, outcome3.dosel)) + Sample(id = list("DoseH ACR20", "DoseH HAQ-DI"), outcome.par = parameters(outcome1.doseh, outcome2.doseh, outcome3.doseh)) family = families(family1 = c(1, 2), family2 = c(3, 4)) component.procedure = families(family1 = "HolmAdj", family2 = "HolmAdj") gamma = families(family1 = 0.8, family2 = 1) # Tests to which the multiplicity adjustment will be applied test.list = tests("Pl vs DoseH - ACR20", "Pl vs DoseL - ACR20", "Pl vs DoseH - HAQ-DI", "Pl vs DoseL - HAQ-DI") # Analysis model analysis.model = AnalysisModel() + MultAdjProc(proc = "MultipleSequenceGatekeepingAdj", par = parameters(family = family, proc = component.procedure, gamma = gamma), tests = test.list) + Test(id = "Pl vs DoseL - ACR20", method = "PropTest", samples = samples("Plac ACR20", "DoseL ACR20")) + Test(id = "Pl vs DoseH - ACR20", method = "PropTest", samples = samples("Plac ACR20", "DoseH ACR20")) + Test(id = "Pl vs DoseL - HAQ-DI", method = "TTest", samples = samples("DoseL HAQ-DI", "Plac HAQ-DI")) + Test(id = "Pl vs DoseH - HAQ-DI", method = "TTest", samples = samples("DoseH HAQ-DI", "Plac HAQ-DI")) # Evaluation model evaluation.model = EvaluationModel() + Criterion(id = "Marginal power", method = "MarginalPower", tests = tests("Pl vs DoseL - ACR20", "Pl vs DoseH - ACR20", "Pl vs DoseL - HAQ-DI", "Pl vs DoseH - HAQ-DI"), labels = c("Pl vs DoseL - ACR20", "Pl vs DoseH - ACR20", "Pl vs DoseL - HAQ-DI", "Pl vs DoseH - HAQ-DI"), par = parameters(alpha = 0.025)) + Criterion(id = "Disjunctive power - ACR20", method = "DisjunctivePower", tests = tests("Pl vs DoseL - ACR20", "Pl vs DoseH - ACR20"), labels = "Disjunctive power - ACR20", par = parameters(alpha = 0.025)) + Criterion(id = "Disjunctive power - HAQ-DI", method = "DisjunctivePower", tests = tests("Pl vs DoseL - HAQ-DI", "Pl vs DoseH - HAQ-DI"), labels = "Disjunctive power - HAQ-DI", par = parameters(alpha = 0.025)) # Simulation Parameters sim.parameters = SimParameters(n.sims = 1000, proc.load = 2, seed = 42938001) # Perform clinical scenario evaluation results = CSE(data.model, analysis.model, evaluation.model, sim.parameters) # Reporting presentation.model = PresentationModel() + Project(username = "[Mediana's User]", title = "Case study", description = "Clinical trial in patients with rheumatoid arthritis") + Section(by = c("outcome.parameter")) + Table(by = c("multiplicity.adjustment")) + CustomLabel(param = "sample.size", label = paste0("N = ", c(100, 120))) # Report Generation GenerateReport(presentation.model = presentation.model, cse.results = results, report.filename = "Case study.docx") } } Mediana/man/AdjustCIs.Rd0000644000176200001440000001217713434027611014514 0ustar liggesusers\name{AdjustCIs} \alias{AdjustCIs} %- Also NEED an '\alias' for EACH other topic documented here. \title{ AdjustCIs function } \description{Computation of simultaneous confidence intervals for selected multiple testing procedures based on univariate p-values (Bonferroni, Holm and fixed-sequence procedures) and commonly used parametric multiple testing procedures (single-step and step-down Dunnett procedures). } \usage{ AdjustCIs(est, proc, par = NA) } \arguments{ \item{est}{ defines the point estimates. } \item{proc}{ defines the multiple testing procedure. Several procedures are already implemented in the Mediana package (listed below, along with the required or optional parameters to specify in the \code{par} argument): \itemize{ \item \code{BonferroniAdj}: Bonferroni procedure. Required parameters: \code{n}, \code{sd} and \code{covprob}. Optional parameter: \code{weight}. \item \code{HolmAdj}: Holm procedure. Required parameters: \code{n}, \code{sd} and \code{covprob}. Optional parameter: \code{weight}. \item \code{FixedSeqAdj}: Fixed-sequence procedure. Required parameters: \code{n}, \code{sd} and \code{covprob}. \item \code{DunnettAdj}: Single-step Dunnett procedure. Required parameters: \code{n}, \code{sd} and \code{covprob}. \item \code{StepDownDunnettAdj}: Step-down Dunnett procedure. 
Required parameters: \code{n}, \code{sd} and \code{covprob}. } } \item{par}{ defines the parameters associated with the multiple testing procedure. } } \details{ This function computes one-sided simultaneous confidence limits for the Bonferroni, Holm (Holm, 1979) and fixed-sequence (Westfall and Krishen, 2001) procedures in general one-sided hypothesis testing problems (equally or unequally weighted null hypotheses), as well as for the single-step Dunnett procedure (Dunnett, 1955) and step-down Dunnett procedure (Naik, 1975; Marcus, Peritz and Gabriel, 1976) in one-sided hypothesis testing problems with a balanced one-way layout and equally weighted null hypotheses. For the nonparametric procedures, the simultaneous confidence intervals are computed using the methods developed in Hsu and Berger (1999), Strassburger and Bretz (2008) and Guilbaud (2008). For more information on the algorithms used in the function, see Dmitrienko et al. (2009, Section 2.6). For the Dunnett procedures, the simultaneous confidence intervals are computed using the methods developed in Bofinger (1987) and Stefansson, Kim and Hsu (1988). For more information on the algorithms used in the function, see Dmitrienko et al. (2009, Section 2.7). } \value{Returns a vector of lower simultaneous confidence limits. } \references{ \url{http://gpaux.github.io/Mediana/} Bofinger, E. (1987). Step-down procedures for comparison with a control. \emph{Australian Journal of Statistics}. 29, 348--364. \cr Dmitrienko, A., Bretz, F., Westfall, P.H., Troendle, J., Wiens, B.L., Tamhane, A.C., Hsu, J.C. (2009). Multiple testing methodology. \emph{Multiple Testing Problems in Pharmaceutical Statistics}. Dmitrienko, A., Tamhane, A.C., Bretz, F. (editors). Chapman and Hall/CRC Press, New York. \cr Dunnett, C.W. (1955). A multiple comparison procedure for comparing several treatments with a control. \emph{Journal of the American Statistical Association}. 50, 1096--1121. \cr Marcus, R., Peritz, E., Gabriel, K.R. (1976). On closed testing procedures with special reference to ordered analysis of variance. \emph{Biometrika}. 63, 655--660. \cr Naik, U.D. (1975). Some selection rules for comparing \eqn{p} processes with a standard. \emph{Communications in Statistics. Series A}. 4, 519--535. \cr Stefansson, G., Kim, W.-C., Hsu, J.C. (1988). On confidence sets in multiple comparisons. \emph{Statistical Decision Theory and Related Topics IV}. Gupta, S.S., Berger, J.O. (editors). Academic Press, New York, 89--104. } \seealso{ See Also \code{\link{MultAdjProc}} and \code{\link{AdjustPvalues}}. 
} \examples{ # Consider a clinical trial conducted to evaluate the effect of three # doses of a treatment compared to a placebo with respect to a normally # distributed endpoint # Three null hypotheses of no effect are tested in the trial: # Null hypothesis H1: No difference between Dose 1 and Placebo # Null hypothesis H2: No difference between Dose 2 and Placebo # Null hypothesis H3: No difference between Dose 3 and Placebo # Null hypotheses of no treatment effect are equally weighted weight <- c(1/3, 1/3, 1/3) # Treatment effect estimates (mean dose-placebo differences) est <- c(2.3, 2.5, 1.9) # Pooled standard deviation sd <- rep(9.5, 3) # Study design is balanced with 180 patients per treatment arm n <- 180 # Bonferroni, Holm, fixed-sequence, single-step and step-down Dunnett procedures proc = c("BonferroniAdj", "HolmAdj", "FixedSeqAdj", "DunnettAdj", "StepDownDunnettAdj") # Equally weighted sapply(proc, function(x) {AdjustCIs(est, proc = x, par = parameters(sd = sd, n = n, covprob = 0.975, weight = weight))}) }