# ggeffects - Estimated Marginal Means and Adjusted Predictions from Regression Models

[![CRAN\_Status\_Badge](http://www.r-pkg.org/badges/version/ggeffects)](https://cran.r-project.org/package=ggeffects) [![DOI](http://joss.theoj.org/papers/10.21105/joss.00772/status.svg)](https://doi.org/10.21105/joss.00772) [![Documentation](https://img.shields.io/badge/documentation-ggeffects-orange.svg?colorB=E91E63)](https://strengejacke.github.io/ggeffects/) [![downloads](http://cranlogs.r-pkg.org/badges/ggeffects)](https://cranlogs.r-pkg.org/) [![total](http://cranlogs.r-pkg.org/badges/grand-total/ggeffects)](https://cranlogs.r-pkg.org/)

Lüdecke D (2018). *ggeffects: Tidy Data Frames of Marginal Effects from Regression Models.* Journal of Open Source Software, 3(26), 772. doi: [10.21105/joss.00772](https://doi.org/10.21105/joss.00772)

## Why do we need (marginal/conditional) effects or (adjusted) predicted values?

Results of regression models are typically presented as tables that are easy to understand.
For more complex models that include interaction or quadratic/spline terms, tables with numbers are less helpful and difficult to interpret. In such cases, *marginal effects* or *adjusted predictions* are far easier to understand. In particular, visualizing such effects or predictions makes it intuitive to grasp how predictors and outcome are associated, even for complex models.

## Aim of this package

**ggeffects** is a light-weight package that easily calculates marginal effects and adjusted predictions (or: *estimated marginal means*) at the mean or at representative values of covariates ([see definitions here](https://stats.stackexchange.com/tags/marginal-effect/info)) from statistical models, i.e. **predictions generated by a model when one holds the non-focal variables constant and varies the focal variable(s)**. This is achieved by three core ideas that describe the philosophy of the function design:

1. Functions are type-safe and always return a data frame with the same, consistent structure;
2. there is a simple, unique approach to calculate marginal effects/adjusted predictions and estimated marginal means for many different models;
3. the package supports “labelled data” (Lüdecke 2018), which allows human-readable annotations for graphical outputs.

This means users do not need to worry about expensive post-processing steps after modeling to visualize the results. The returned data frame is ready to use with the **ggplot2** package; however, there is also a `plot()` method to easily create publication-ready figures.

## Definition of “marginal effects”

There is no common language across fields regarding a unique meaning of “marginal effects,” so the wording throughout this package may vary. Perhaps “adjusted predictions” comes closest to what **ggeffects** actually does.
To avoid confusion about what is actually calculated and returned by the package’s functions `ggpredict()`, `ggemmeans()` and `ggeffect()`, it is recommended to read [this vignette](https://strengejacke.github.io/ggeffects/articles/introduction_marginal_effects.html) about the different terminology and its meanings.

## Documentation and Support

Please visit <https://strengejacke.github.io/ggeffects/> for documentation and vignettes. For questions about the functionality, you may either contact me via email or file an issue.

## ggeffects supports many different models and is easy to use

Marginal effects and adjusted predictions can be calculated for many different models. Currently supported model objects are: `averaging`, `bamlss`, `bayesx`, `betabin`, `betareg`, `bglmer`, `blmer`, `bracl`, `brglm`, `brmsfit`, `brmultinom`, `cgam`, `cgamm`, `clm`, `clm2`, `clmm`, `coxph`, `fixest`, `gam` (package **mgcv**), `Gam` (package **gam**), `gamlss`, `gamm`, `gamm4`, `gee`, `geeglm`, `glm`, `glm.nb`, `glmer`, `glmer.nb`, `glmmTMB`, `glmmPQL`, `glmrob`, `glmRob`, `glmx`, `gls`, `hurdle`, `ivreg`, `lm`, `lm_robust`, `lme`, `lmer`, `lmrob`, `lmRob`, `logistf`, `lrm`, `mclogit`, `mlogit`, `MixMod`, `MCMCglmm`, `mixor`, `multinom`, `negbin`, `nlmer`, `ols`, `orm`, `plm`, `polr`, `rlm`, `rlmer`, `rq`, `rqss`, `stanreg`, `survreg`, `svyglm`, `svyglm.nb`, `tidymodels`, `tobit`, `truncreg`, `vgam`, `wbm`, `zeroinfl` and `zerotrunc`.

Support for models varies by function: although `ggpredict()`, `ggemmeans()` and `ggeffect()` support most models, some models are supported exclusively by one of the three functions. Other models not listed here might work as well, but are currently not tested. Interaction terms, splines and polynomial terms are also supported.

The main functions are `ggpredict()`, `ggemmeans()` and `ggeffect()`. There is a generic `plot()` method to plot the results using **ggplot2**.
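Because the three main functions share the same interface, switching the computational backend is a one-line change. Below is a minimal sketch of that shared interface; it assumes the `efc` example data shipped with the package, and that the **emmeans** package (used by `ggemmeans()`) and the **effects** package (used by `ggeffect()`) are installed:

``` r
library(ggeffects)
data(efc)

# a simple model with one continuous and one categorical predictor
fit <- lm(barthtot ~ c12hour + c161sex, data = efc)

# all three functions take the model and a 'terms' argument and
# return a tidy data frame of predictions
ggpredict(fit, terms = "c12hour")   # based on predict()
ggemmeans(fit, terms = "c12hour")   # based on the emmeans package
ggeffect(fit, terms = "c12hour")    # based on the effects package
```

Since all three return the same data frame structure, downstream plotting code does not need to change when you swap one function for another.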
## Examples

The returned data frames always have the same, consistent structure and column names, so it is easy to create ggplot-plots without the need to rewrite the function call: `x` and `predicted` are the values for the x- and y-axis, `conf.low` and `conf.high` can be used as `ymin` and `ymax` aesthetics for ribbons to add confidence bands to the plot, and `group` can be used as a grouping aesthetic or for faceting.

`ggpredict()` requires at least one, but not more than four, terms specified in the `terms` argument. Predicted values of the response along the values of the first term are calculated, optionally grouped by the other terms specified in `terms`.

``` r
library(ggeffects)
library(splines)
data(efc)
fit <- lm(barthtot ~ c12hour + bs(neg_c_7) * c161sex + e42dep, data = efc)

ggpredict(fit, terms = "c12hour")
#> # Predicted values of Total score BARTHEL INDEX
#> 
#> c12hour | Predicted |         95% CI
#> ------------------------------------
#>       4 |     67.89 | [65.81, 69.96]
#>      12 |     67.07 | [65.10, 69.05]
#>      22 |     66.06 | [64.19, 67.94]
#>      36 |     64.64 | [62.84, 66.45]
#>      49 |     63.32 | [61.51, 65.14]
#>      70 |     61.20 | [59.22, 63.17]
#>     100 |     58.15 | [55.71, 60.60]
#>     168 |     51.26 | [47.27, 55.25]
#> 
#> Adjusted for:
#> * neg_c_7 = 11.83
#> * c161sex = 1.76
#> * e42dep = 2.93
```

A possible call to ggplot could look like this:

``` r
library(ggplot2)
mydf <- ggpredict(fit, terms = "c12hour")
ggplot(mydf, aes(x, predicted)) +
  geom_line() +
  geom_ribbon(aes(ymin = conf.low, ymax = conf.high), alpha = .1)
```

![](man/figures/unnamed-chunk-3-1.png)

However, there is also a `plot()` method, which uses convenient defaults to easily create the most suitable plot for the predictions:

``` r
mydf <- ggpredict(fit, terms = "c12hour")
plot(mydf)
```

![](man/figures/unnamed-chunk-4-1.png)

With three variables, predictions can be grouped and faceted.
``` r
ggpredict(fit, terms = c("neg_c_7", "c161sex", "e42dep"))
#> # Predicted values of Total score BARTHEL INDEX
#> 
#> # c161sex = Male
#> # e42dep = [1] independent
#> 
#> neg_c_7 | Predicted |          95% CI
#> -------------------------------------
#>       7 |    102.74 | [95.97, 109.51]
#>      12 |    102.27 | [97.10, 107.44]
#>      17 |     93.79 | [86.96, 100.63]
#>      28 |    164.57 | [95.98, 233.17]
#> 
#> # c161sex = Female
#> # e42dep = [1] independent
#> 
#> neg_c_7 | Predicted |           95% CI
#> --------------------------------------
#>       7 |    109.54 | [105.20, 113.87]
#>      12 |     99.81 | [ 95.94, 103.68]
#>      17 |     94.90 | [ 90.21,  99.60]
#>      28 |     90.26 | [ 71.79, 108.74]
#> 
#> # c161sex = Male
#> # e42dep = [2] slightly dependent
#> 
#> neg_c_7 | Predicted |          95% CI
#> -------------------------------------
#>       7 |     83.73 | [77.32,  90.14]
#>      12 |     83.26 | [78.95,  87.58]
#>      17 |     74.79 | [68.68,  80.89]
#>      28 |    145.57 | [77.00, 214.14]
#> 
#> # c161sex = Female
#> # e42dep = [2] slightly dependent
#> 
#> neg_c_7 | Predicted |         95% CI
#> ------------------------------------
#>       7 |     90.53 | [86.71, 94.35]
#>      12 |     80.80 | [78.17, 83.44]
#>      17 |     75.90 | [72.29, 79.51]
#>      28 |     71.26 | [53.07, 89.45]
#> 
#> # c161sex = Male
#> # e42dep = [3] moderately dependent
#> 
#> neg_c_7 | Predicted |          95% CI
#> -------------------------------------
#>       7 |     64.72 | [58.28,  71.16]
#>      12 |     64.26 | [60.30,  68.21]
#>      17 |     55.78 | [50.04,  61.52]
#>      28 |    126.56 | [57.98, 195.14]
#> 
#> # c161sex = Female
#> # e42dep = [3] moderately dependent
#> 
#> neg_c_7 | Predicted |         95% CI
#> ------------------------------------
#>       7 |     71.52 | [67.59, 75.45]
#>      12 |     61.79 | [59.79, 63.80]
#>      17 |     56.89 | [53.86, 59.91]
#>      28 |     52.25 | [34.21, 70.29]
#> 
#> # c161sex = Male
#> # e42dep = [4] severely dependent
#> 
#> neg_c_7 | Predicted |          95% CI
#> -------------------------------------
#>       7 |     45.72 | [38.86,  52.57]
#>      12 |     45.25 | [41.03,  49.47]
#>      17 |     36.77 | [30.97,  42.58]
#>      28 |    107.55 | [38.93, 176.18]
#> 
#> # c161sex = Female
#> # e42dep = [4] severely dependent
#> 
#> neg_c_7 | Predicted |         95% CI
#> ------------------------------------
#>       7 |     52.51 | [47.88, 57.15]
#>      12 |     42.79 | [40.29, 45.28]
#>      17 |     37.88 | [34.66, 41.10]
#>      28 |     33.24 | [15.21, 51.28]
#> 
#> Adjusted for:
#> * c12hour = 42.10
```

``` r
mydf <- ggpredict(fit, terms = c("neg_c_7", "c161sex", "e42dep"))
ggplot(mydf, aes(x = x, y = predicted, colour = group)) +
  geom_line() +
  facet_wrap(~facet)
```

![](man/figures/unnamed-chunk-5-1.png)

`plot()` works for this case, as well:

``` r
plot(mydf)
```

![](man/figures/unnamed-chunk-6-1.png)

More features are explained in detail in the [package-vignettes](https://strengejacke.github.io/ggeffects/).

## Installation

### Latest development build

To install the latest development snapshot (see latest changes below), type the following commands into the R console:

``` r
library(devtools)
devtools::install_github("strengejacke/ggeffects")
```

### Official, stable release

To install the latest stable release from CRAN, type the following command into the R console:

``` r
install.packages("ggeffects")
```

## Citation

In case you want or have to cite the package, please use `citation('ggeffects')` for citation information:

Lüdecke D (2018). *ggeffects: Tidy Data Frames of Marginal Effects from Regression Models.* Journal of Open Source Software, 3(26), 772. doi: [10.21105/joss.00772](https://doi.org/10.21105/joss.00772)

## References
Lüdecke, Daniel. 2018. “Sjlabelled: Labelled Data Utility Functions,” May.
yD}踒%;BP">]I:} Qn:ƈZ2ZuAJ Pɨ->t\ jۇ+I@mRAЖ69'6VT6^5F+ѨfސPhBrWڀvW/ܕ8fmt_ *uFtAE}ОnfQDPhPFr:@%>]6J% z-DEXJT DEWJTNDEVJT⦹D5Eh w&vj(A:T@mh7&mQ@T%}pC=zJ% d:DuEPJT7UD ^JT*}Gh w]@ˈ4hEB(QM @} X`JTiD"!jDb5 Quڎ(AJ P8Dͥw`" DrWrFn>D% Qy (AJPKBE0D+Q@-r`"842f4?(AJPCDPm5BcQB(1`jwQB(1`rQǕ(2QB( Pu6P@D DP=Ԯv*3Pre@[e(ܕ ִ#$^ 24m֕-Ԯv%N*3 Ik86@g3/Rʃq0Wtk64Ds{:t!"z7 vw}L%k@Sv!4Xܹ2 Ѿ̕`&aɍey08@Cz8 j(@u$ 6SJ+jCTub!@tAˮVH}x=ʀp"=uJ!_9gVDtZaPZPWV@e:%fԝO Q8@J+Պ /;HMT<0̀v )%J+&`UJ+%`UW @fUY4պ72e@kd( 2!V6(MAzMӖбԀ5!0(63~]PC3(Lu=0Ew&7eԮ|ъ 0Fht+_7*V7p+_7*V6|{?:xS-Y' \Ej/U:->7(t8ap릖(6U]g^8W[ʃU+L5?I:$sAlQ+V2m6SAź%`eyJ@ S]@OLYyТQ2נ$ .uZ0@+&(63]QhI|$)Q(r>z,.%}D u QEff fʀ%`5rO$7_$%`ur?MnˑJ*Q(0mtóh6UT]Z?Piܛ*fcz[jU4S+O|rצ-IG+]P!Zy$5mOT FD2.M}n h*|3e@AR-Oug (@PAcqL]AP.V+-^NG܃00(U?]Ԛюi&,>)4J>ʃ5dEj `y!)4J˃ue/B^BԽi9 NʃA&K"Խi1 Nʃb@ݛv<5PF48)ƀv  Nʃyc1rO`܃v j_c@q  Nʃ$hpR/ggkP}\{п<ӢtPsk5R`@ݛsզ?KΫ$ 2vش& &,97c:ۊ0M{`ƒ`: h-Az61Mm\[hpR/p" `: *L-]hy8hu0AљZz`: xt4Q)k U>$xt0`ЀTI_$3r-]' :Nx )kB^ts>b@}BhpQi!: `@љipT Q0L],905GT' (:Sp9KJLMGzEЙ»XV:  שk@IԍzGz0LtڷIԙ+ AT'}?IU@~^jFDȕP+`)Uޠۋ$EgiԀ: `@љvej@Q0LzTQ0L=zQPPt^\څN (:SO^mGT'T/W?!*ԟ@ EgM# EgnT'TgW_ *Ի/@+ʀ3 :2LCzN (:0!*47@3DuBe@љsBT'Ti@WJDuBe@љuhN (:Թ+P]r%jWU\ ϕm!WBte@EȕP]&r%+Z/BB5qe@kEȕPf h ԕ+P]В •-+P\PBte@I ڕ%J(T{W+P!\7LډNLnP]QhC`#{;]ƀ77';;j!+@}2وoGo\ȋWtzE,H: E*hԃ6+P]砾MiT_?⽚Rq G7=O;$T?5~t&d,n@xE30G+BXc@ĝLl7;xE0G+K^;TxR;sPG=rZ90_g%/v+Y=l}bc&x ΃O,B-Z ( Pj1,b@Yŀw;_aÎN?4b@{Onmش1ΖN7荄SP:ZǫgϻWWFGoCw|r7\K5/}H-twycvLwP᫿P>y3ẵOMgʟ 3c0wƌl=n磕[Ϣb5^:`kIDAT!%/b゛P0AKath]YQ,KC;㛻B~5^˶)C ){e&:eC˜tOnҖׂ\0SX\ 37faVvTYb*!ĮgiLdģ;p82C_܉쎥V:) kC{UIb?餶ιgx[qva!|_]qyÇͶY(iܺr79? rjd]Zm hM7! 
4r|u!瞳?lV΂ lY'ˠٕ9t| M˻ɿ&GtFs'S' ]~-wf5`@E"Zhk,37X7HJVRĺ+%@P"I./ڲ"IKC eINw6ɺ6>p*N܋qQ|ftP8qw=c-IyQN/kwvP!& 5a<)sHXijFr@.P1jʹBBb@Ib (H ( Pj1,b@YŀPe7) =lIENDB`ggeffects/man/figures/unnamed-chunk-6-1.png0000644000176200001440000002636213725702655020217 0ustar liggesusersPNG  IHDRMR/vPLTE:f:::f:f6s7~::::::::f:ff:f:f:::MMMMMiMMnMMMMMiMiMnMnMqMMMMff:f:ffffffffiMMiMiiMiiiiiinMMnMnnMnnnnnnnnփMMMiMiMiiiMiMMMnMnMnnnnȎ:ff:ff:ې۶iMiiiMiζΝnMnnnMnȫȫff:Mi:۶Νې۶۶ȎMȎnȎȫnȎ)3Νi֫ې:ېf۶f۶۶n䫎䫫嶃ΝfȎ֫ې۶D* pHYsod IDATx흏]Ǐ1Q! b" TlVk_Hh mmMZZ&`Z5HHRZ+1i͏OΤ=.93;;ggyv|?/wfg;?v٩D XS@" J Z()hEJ Z()hcSBWoJ}Wȅ_xLe>F_gycHOK:͓nSO]dxcuE)5+:6u]%Og/K+hh4@佩 ~7T&9t.F@~R:aG4NO{[R(0wZMM)]PDo~%-ºLgǯKU4R%Y Py4`4kfZd[uSW~=꼚J9ٟ/HWz<盿3~'y S#fg^&ϗ^rΪo]7/ IS@[K+Bºh[ֱOuu;Uz.=,R ]XYe-me@LE\vhf^w.sEY%Kq}d#E5߸WK+-jǧC?z15}<}}xJ"ƪ,@қ)pgK"u׾"ȌS䴋+ӋK-}ΠzN$XEr]`(@:̏󞬒QchG$(ë1oNSJZٙD $Wn -BxsVN/ ϗ_jU+}EW@H|huOf\^})'(>mEVPTq,mN+9E[3ۏ)Of/S$I|8k^)]c/ڙ@)) J4b4E#?.@YݟQϚ| .'TRֽ=mz;se9M+mįD?=W~RA<ʕ>ոV=|fA{YF9v`i\fu7}sTiD-'}28TyN"a:u]`yRioYgIs3I fA7kk|?Nkl&XO-b:Q/߸?)OW1G~z~g5.R9 'MOr|٥yW*tlךcz!~55Q~,E0JHA%-PR"@IA%-PR"@IA%fM.ƥ(F(Ph"@mD&F(Ph"@mD)@O;Z۾(~S םD?4@oKG,ӎ uOVOToBe{OkvhI#ߜ;?A̶[?`3,ʿ$qjl#2TO\t #wץ_)uAKܴӞzS`~bg^ɑlhʚ3w0 c5<;QW64<2ƝTV=\rp̃c u;Zx X(W$iQĉn:QvLEUzUCţ&F(Phj跱[oT -1e,8[DYpJŷuoAf)߂&͂S*MT| P7 N4nR-h"@,8[D,zp-h"@M4ˀ B }:{V#j46̚"(lP C(l!@ Zh "@A PHg PDB"@ F(P@h8"@D EVE$AԀZQhEhHʮ$^2ۣ̀^dM^}"Hǯe=q<2{KGO?ÀoeQt哯+?/X>K9UoD 3%'Ȕ?tuۣz鑇P UK2Ybc$>@S>E3/FE0 e٨A t: n@(Ѐ%@CDEhA䀺Qh& 2P'=M5 @ А-GB 2PG=vX: Ig~]`˪QAj.Q(gl:Q(Gh8"@>Eh=*z@!>7fĹҖ,hOq, ć,Z"@ hum<*r@a>ٍG%@PkCd1ڷQ!0YL|I͘#=ŝEdrL9ESvX6ڏddzd,^~V@hq; >cl=&P޶_~hvX6ݨ@|N$|t끎܎jPU\$C iˣbT^$QcnTZ.-aX ħ"2ԸHfFFM.+֫<*2@B|vP"U\Q5ApYQQ aX t`eG(!g->(䳻A|Ci@t<xT4[[;oǣL܊z&.q>;A5zN-P- џᝥE(O|`G-yViTвwjEþ2  > PüQЪwg a4fP;Nz)!aX(HdI8M,Tbqu&jsE2~V4ZhyLqY #z7"M4< a"pj|aSH,$$xB5 \z~";jo)`tz^+BPs+-Z~F;,}x=t^wKm*hRŹ X1on6]g<%pE2EYZ3d9NX$[<{Nf$oA-͂S,(5VR[8@sb}dc)us~Km2.J.5ˤ:`[jtubЄ]1@mD-Փ%l@ 33|#w N,j=25,BoAS퓤ϊ=Ǻ[pJŷPִ&PV[agZ@Wr}8:s_hG}2CZNIBXG"DP@SA.8"@I]r2(}'dn}bhNwwHKU 36}Nܖd~ˊY %Do6?娲 j/ƥ?~[($TIFk<Մm̃S O-m7Z 
NT7sNaIhf Pi[D]eq({9[s|i^~gW$Kܠb3Xd?ӿ:}{]-|NgL=ٙN'A?8(jzgN倊D^1h=.i-YU:wIּLT#%#nRoOO`~6VwvH|%Ϥ>u nȓPČ  3_cXxoBM|ۀ_]?~r;G J`-Ulkq̽/~uE3۶TI+'7Q~ڴPzE]!&B-] NwQ2+* (XVu\xsOO-n777o# @ه7.߲%uP~wj"=F۴]ֱrЇD7G"Q7$ˋ[xwzo$(V$מ*jl%?E:!(j+fnZT.߲%uPB>h6_؝ Bߓ6i,;[jO5>sQL&Oț݈xд]q9eKB;I7*k]!~7pP9ZӸrL!ήORE$CCfm?*^=doytE5 @MܶGY8]!א P:?^U]|_C&@mDA7$ƫߗI!7 PM\z_<$ؼ3ۥ_3T|&+@ks7`N~~Kŷh$_5Vm5d{;9OP[4َS#h!nR-О$&R-u. J[*ES P[4a*a%oMXڿO[*Eη:>H"[*MA,~KŷInR-hjkq):|kl5We̓oA>HOzD$`uoA5nR-hAT| h͂S*MT| 4@6_I"@=oA9&}i H4f)߂&͂S*M4f)߂!_[533$yTR-h2O3]dM^}"H~Kŷ yK_:|!mR-h[`Qt哯+?qZ08TyP6N*C 2iT| JL|>fClFY+mׂa[r@Wv1|B}!¿]ݻeigƧh(!ţoA]m:5KC=qj-o4&hŹ\ P B2 @ja-4 5{_<z-h Zh "@A O:Kҳս;m^@| {g c/nWghWVsSyE=e4ݎᨲ7Ӝ͋ PbG4SE4 7 E5 @CR|,>o4 @ JPK=k@(Au ExM 2Pk=xUh؀Ib0{F n@aL9XZ?yA/?+6auۣ6i{EْSV!hCCFT-+Q[qGP=P# " "omoNN(|to>P7=j(FuJlPG=,s "o"@*h8"@o"@>#o"@> pDFh= &~{J![ >cPUVyr*2Sq86̚+ PIqDf-GE('Q8]PHh}X g,~{T̀5P4P[Ё~$M<:e`p} @ }wy4K~; > p);Y IDATݨ@@|X`@7*P,, 1*v[tjgCϘZ$Q;,˭;ò!g^$QCMwXuiw; Ͼv OE2UˍA;2 PL !6w:d<*@! xvPŻj$@- F> X(ijo(wű|[: ]9mGE(g> %)kTвw :  > PüU >;(R['krɰe~y2ɆP 9` -^duCiG/0 jRY#nE252k^qR@cʃHmG qدfJ<$0-K d8 ck(YC0,ߣ/ok(NvwÄg + p޵DP61?\b@\Mz)F]OS'[52GP0ߧ2:dt]:[vy 8k Y$QԔR@#7gWGm̫؃mhM@M-ψZJ_ 7>ƲPXL͐,zf=)|jKnMOw a'\%T_'tY* (>=n}S3'- d$,i pڠ{2jgLAEsE-S[ {bY1Xu Pִ&K#I9QVʡRYOkZ@WDʰU(lM->Yn$G\$Q(lhRQ'fiHƷP7RkFVHƷP7RmjuRTa @SM,h@n Ncf)umumxƾqe6(YpJmZ;JoՂSjCR=XnXpJd!@'߂S? :R-h"@,8[DYpJŷuoAf)߂&͂S*MT| P7 N45427J"yJ Z()hEJ Z()h& y-sqr+Gq*^݊Z<vyߍrdr=@l}E5vytߝkr8cXnIm7 ȷ/ڋZO\b E 100), \code{n} could be something between 1 to 5. If \code{x} has a rather small amount of unique values, \code{n} could be something between 10 to 20. If \code{n = NULL}, \code{pretty_range()} automatically tries to find a pretty sequence.} \item{length}{Integer value, as alternative to \code{n}, defines the number of intervals to be returned.} } \value{ A numeric vector with a range corresponding to the minimum and maximum values of \code{x}. If \code{x} is missing, a function, pre-programmed with \code{n} and \code{length} is returned. 
See examples.
}
\description{
Creates an evenly spaced, pretty sequence of numbers for a range of a
vector.
}
\examples{
data(iris)
# pretty range for vectors with decimal points
pretty_range(iris$Petal.Length)

# pretty range for large range, increasing by 50
pretty_range(1:1000)

# increasing by 20
pretty_range(1:1000, n = 7)

# return 10 intervals
pretty_range(1:1000, length = 10)

# same result
pretty_range(1:1000, n = 2.5)

# function factory
range_n_5 <- pretty_range(n = 5)
range_n_5(1:1000)
}
ggeffects/man/plot.Rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/plot.R, R/themes.R
\name{plot}
\alias{plot}
\alias{plot.ggeffects}
\alias{theme_ggeffects}
\alias{show_pals}
\title{Plot ggeffects-objects}
\usage{
\method{plot}{ggeffects}(
  x,
  ci = TRUE,
  ci.style = c("ribbon", "errorbar", "dash", "dot"),
  facets,
  add.data = FALSE,
  limit.range = FALSE,
  residuals = FALSE,
  residuals.line = FALSE,
  collapse.group = FALSE,
  colors = "Set1",
  alpha = 0.15,
  dodge = 0.25,
  use.theme = TRUE,
  dot.alpha = 0.35,
  jitter = 0.2,
  log.y = FALSE,
  case = NULL,
  show.legend = TRUE,
  show.title = TRUE,
  show.x.title = TRUE,
  show.y.title = TRUE,
  dot.size = NULL,
  line.size = NULL,
  connect.lines = FALSE,
  grid,
  one.plot = TRUE,
  rawdata,
  residuals.type,
  ...
)

theme_ggeffects(base_size = 11, base_family = "")

show_pals()
}
\arguments{
\item{x}{An object of class \code{ggeffects}, as returned by the functions
from this package.}

\item{ci}{Logical, if \code{TRUE}, confidence bands (for continuous
variables at x-axis) resp. error bars (for factors at x-axis) are plotted.}

\item{ci.style}{Character vector, indicating the style of the confidence
bands.
May be either \code{"ribbon"}, \code{"errorbar"}, \code{"dash"} or \code{"dot"}, to plot a ribbon, error bars, or dashed or dotted lines as confidence bands.} \item{facets, grid}{Logical, defaults to \code{TRUE}, if \code{x} has a column named \code{facet}, and defaults to \code{FALSE}, if \code{x} has no such column. Set \code{facets = TRUE} to wrap the plot into facets even for grouping variables (see 'Examples'). \code{grid} is an alias for \code{facets}.} \item{add.data, rawdata}{Logical, if \code{TRUE}, a layer with raw data from response by predictor on the x-axis, plotted as point-geoms, is added to the plot.} \item{limit.range}{Logical, if \code{TRUE}, limits the range of the prediction bands to the range of the data.} \item{residuals}{Logical, if \code{TRUE}, a layer with partial residuals is added to the plot. See vignette \href{https://cran.r-project.org/package=effects}{"Effect Displays with Partial Residuals"} from \pkg{effects} for more details on partial residual plots.} \item{residuals.line}{Logical, if \code{TRUE}, a loess-fit line is added to the partial residuals plot. Only applies if \code{residuals} is \code{TRUE}.} \item{collapse.group}{For mixed effects models, name of the grouping variable of random effects. If \code{collapse.group = TRUE}, data points "collapsed" by the first random effect groups are added to the plot. Else, if \code{collapse.group} is a name of a group factor, data is collapsed by that specific random effect. See \code{\link{collapse_by_group}} for further details.} \item{colors}{Character vector with color values in hex-format, valid color value names (see \code{demo("colors")}) or a name of a ggeffects-color-palette. Following options are valid for \code{colors}: \itemize{ \item If not specified, the color brewer palette "Set1" will be used. \item If \code{"gs"}, a greyscale will be used. \item If \code{"bw"}, the plot is black/white and uses different line types to distinguish groups. 
\item There are some pre-defined color-palettes in this package that can be
used, e.g. \code{colors = "metro"}. See
\code{\link[=show_pals]{show_pals()}} to show all available palettes.
\item Else specify own color values or names as vector
(e.g. \code{colors = c("#f00000", "#00ff00")}).
}}

\item{alpha}{Alpha value for the confidence bands.}

\item{dodge}{Value for offsetting or shifting error bars, to avoid
overlapping. Only applies if a factor is plotted at the x-axis (in such
cases, the confidence bands are replaced by error bars automatically), or
if \code{ci.style = "errorbar"}.}

\item{use.theme}{Logical, if \code{TRUE}, a slightly tweaked version of
ggplot's minimal-theme, \code{theme_ggeffects()}, is applied to the plot.
If \code{FALSE}, no theme-modifications are applied.}

\item{dot.alpha}{Alpha value for data points, when \code{add.data = TRUE}.}

\item{jitter}{Numeric, between 0 and 1. If not \code{NULL} and
\code{add.data = TRUE}, adds a small amount of random variation to the
location of data points, to avoid overplotting. Hence the points don't
reflect exact values in the data. May also be a numeric vector of length
two, to add different horizontal and vertical jittering. For binary
outcomes, raw data is not jittered by default, so that data points do not
exceed the axis limits.}

\item{log.y}{Logical, if \code{TRUE}, the y-axis scale is log-transformed.
This might be useful for binomial models with predicted probabilities on
the y-axis.}

\item{case}{Desired target case. Labels will automatically be converted
into the specified character case.
See \code{?sjlabelled::convert_case} for more details on this argument.}

\item{show.legend}{Logical, shows or hides the plot legend.}

\item{show.title}{Logical, shows or hides the plot title.}

\item{show.x.title}{Logical, shows or hides the plot title for the x-axis.}

\item{show.y.title}{Logical, shows or hides the plot title for the y-axis.}

\item{dot.size}{Numeric, size of the point geoms.}

\item{line.size}{Numeric, size of the line geoms.}

\item{connect.lines}{Logical, if \code{TRUE} and plot has point-geoms with
error bars (this is usually the case when the x-axis is discrete), points
of the same group will be connected with a line.}

\item{one.plot}{Logical, if \code{TRUE} and \code{x} has a \code{panel}
column (i.e. when four \code{terms} were used), a single, integrated plot
is produced.}

\item{residuals.type}{Deprecated. Formerly specified the residual type,
which is now always \code{"working"}.}

\item{...}{Further arguments passed down to \code{ggplot2::scale_y*()}, to
control the appearance of the y-axis.}

\item{base_size}{Base font size.}

\item{base_family}{Base font family.}
}
\value{
A ggplot2-object.
}
\description{
A generic plot-method for \code{ggeffects}-objects.
}
\details{
For proportional odds logistic regression (see \code{?MASS::polr}) or
cumulative link models in general, plots are automatically facetted by
\code{response.level}, which indicates the grouping of predictions based on
the level of the model's response.
}
\note{
Load \code{library(ggplot2)} and use \code{theme_set(theme_ggeffects())} to
set the \pkg{ggeffects}-theme as default plotting theme. You can then use
further plot-modifiers from \pkg{sjPlot}, like \code{legend_style()} or
\code{font_size()} without losing the theme-modifications.
\cr \cr
There are pre-defined colour palettes in this package. Use
\code{show_pals()} to show all available colour palettes.
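The theme workflow described in the note above can be sketched in a few lines of R. This is a minimal, illustrative example only (it assumes ggplot2 and ggeffects are installed; the model and data are hypothetical and not taken from this documentation):

```r
# Minimal sketch of the theme workflow described above; the model and data
# are illustrative only (assumes ggplot2 and ggeffects are installed).
library(ggplot2)
library(ggeffects)

theme_set(theme_ggeffects())  # make the ggeffects-theme the default

fit <- lm(mpg ~ hp + factor(cyl), data = mtcars)
dat <- ggpredict(fit, terms = "hp")
plot(dat)  # subsequent plots now use theme_ggeffects() automatically
```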
}
\section{Partial Residuals}{
For \strong{generalized linear models} (glms), residualized scores are
computed as \code{inv.link(link(Y) + r)} where \code{Y} are the predicted
values on the response scale, and \code{r} are the \emph{working}
residuals.
\cr\cr
For (generalized) linear \strong{mixed models}, the random effects are also
partialled out.
}
\examples{
library(sjlabelled)
data(efc)
efc$c172code <- as_label(efc$c172code)
fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

dat <- ggpredict(fit, terms = "c12hour")
plot(dat)

\donttest{
# facet by group, use pre-defined color palette
dat <- ggpredict(fit, terms = c("c12hour", "c172code"))
plot(dat, facet = TRUE, colors = "hero")

# don't use facets, b/w figure, w/o confidence bands
dat <- ggpredict(fit, terms = c("c12hour", "c172code"))
plot(dat, colors = "bw", ci = FALSE)

# factor at x axis, plot exact data points and error bars
dat <- ggpredict(fit, terms = c("c172code", "c161sex"))
plot(dat)

# for three variables, automatic facetting
dat <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex"))
plot(dat)}

# show all color palettes
show_pals()
}
ggeffects/DESCRIPTION
Package: ggeffects
Type: Package
Encoding: UTF-8
Title: Create Tidy Data Frames of Marginal Effects for 'ggplot' from
        Model Outputs
Version: 1.1.1
Authors@R: c(
    person("Daniel", "Lüdecke", role = c("aut", "cre"), email = "d.luedecke@uke.de", comment = c(ORCID = "0000-0002-8895-3206")),
    person("Frederik", "Aust", role = "ctb", comment = c(ORCID = "0000-0003-4900-788X")),
    person("Sam", "Crawley", role = "ctb", email = "sam@crawley.nz", comment = c(ORCID = "0000-0002-7847-0411")),
    person("Mattan S.", "Ben-Shachar", role = c("ctb"), email = "matanshm@post.bgu.ac.il", comment = c(ORCID = "0000-0002-4287-4801"))
    )
Maintainer: Daniel Lüdecke
Description: Compute marginal effects and adjusted predictions
    from statistical models and return the result as tidy data
frames. These data frames are ready to use with the 'ggplot2'-package.
    Effects and predictions can be calculated for many different models.
    Interaction terms, splines and polynomial terms are also supported. The
    main functions are ggpredict(), ggemmeans() and ggeffect(). There is a
    generic plot()-method to plot the results using 'ggplot2'.
License: GPL-3
Depends: R (>= 3.4)
Imports: graphics, insight (>= 0.13.0), MASS, sjlabelled (>= 1.1.2), stats
Suggests: AER, aod, betareg, brms, clubSandwich, effects (>= 4.1-2),
        emmeans (>= 1.4.1), gam, gamlss, gamm4, gee, geepack, ggplot2,
        GLMMadaptive, glmmTMB (>= 1.0.0), gridExtra, haven, httr, knitr,
        lme4, logistf, magrittr, margins, Matrix, mice, MCMCglmm, mgcv,
        nlme, ordinal, parameters, prediction, pscl, quantreg, rmarkdown,
        rms, robustbase, rstanarm, rstantools, sandwich, see, sjstats,
        sjmisc (>= 2.8.2), survey, survival, testthat, VGAM
URL: https://strengejacke.github.io/ggeffects/
BugReports: https://github.com/strengejacke/ggeffects/issues/
RoxygenNote: 7.1.1
VignetteBuilder: knitr
Config/testthat/edition: 3
NeedsCompilation: no
Packaged: 2021-07-29 12:08:51 UTC; mail
Author: Daniel Lüdecke [aut, cre] (),
    Frederik Aust [ctb] (),
    Sam Crawley [ctb] (),
    Mattan S.
Ben-Shachar [ctb] ()
Repository: CRAN
Date/Publication: 2021-07-29 13:40:02 UTC
ggeffects/build/vignette.rds  [binary R data omitted]
ggeffects/build/partial.rdb  [binary R data omitted]
^x7ufro!驖[(=ΗspB޷F|'+"$i3pxVM(6^pU@>~,sM+U*96)V.y}޷|17Vl&}iim;CKiO2;(=>%\QQC>sò>5OoOQr^p}޸3יћyC,ne %"x-@kPR-´7+&K+(K27 +B!}/z/ 'v'YyV o 1ZGѾlt Δ.Q A\ qII0}<7_L3%x8b[T6Qy>'mH06ng@w΁ʹ%ܿ `#"{)p>5'2Jfrya3ʏRN;p?0MDABp ,ٲEmӌrͨ0FnlC5o-S$*P+6G9;ffU1 nnC-P~MS'Znd7qn]Vdx9cE$; ?=vvLr-QOx\[WwY333,5g%1 +:kUdd4ȏdf d0C&r|  U'kRP Y”\;?4 -&e}dZ΢GKRv|zymHaÍo(.#JJ0Cpnneѓ`\)Dv8>m4}l2MRnXF{̄a/6O-UeqS%c4srdFh>)M珥70v~C mBsu=HRҥY'^n5K ;}&w`ԩf)Vqp/agKʺ;_ Έ1KcW"tf~| ޢ:^Ug_#x[@<MY+Pm1t+|oVkU\A$y]JSirU%  I< ׭3>Ip4y?$=UDTQnB2(=V޷!O$Uf+>&1w:< ^WZ=lJ45mnfצ?!Iu4qVjߕ.E<-oñc>$NF&5(߈!MW򥱺j78J#b^9UpEZ=/EOu8 >xG?H? {,ʍNx5UB`C&bPN`xAרkߡm櫗~NI@ ;C$n4|ˬ8 HmI+RgdsWGv-S.T&ή,S=5w_D}?r Zz΂W3S:v2$*JZV~SHx='6VP;,8ДO+]J-{Vx-A:Ia&\' AmC)-6Cs=WYl 5'Ipsִ jEe#j5W锞pF;⑗kݺ?ɘ!ͷ"6qйd3 h.l}6Ub)* a,Dsӛ"&oN̊_Ez:;:rU % 7N.DF0djV9mU6@T{fW;v_.eD5ˤ^׏Xi*| X[]88֛~!LQopltF+\2pmM){cH1I?jkD$& 5fzkD)Y%YD?<^4sUW4ٝaBd,<v^1A |b`zT+u-Jn;p \LiμDq L~`!XԱJW"uyb^3Ϳk׊lI3ϝ-ä \iG\:ծέHNeޕ$W^2)e&0JϼJQ⑗a&>!ͷU_"5d`EGpF{$fx\i*գ~Lx 2{6rs§Cj?5}O-6I>WZ9?"|HqYַ(76fT]h'=co7=*{In>pszUċM5f^z/ФhJx:}ԲdLͧ9mFrJReRkW[2|1w(_I&JNOh[k+]qM8"%WQTXVc:G|ib]'(ї^khs(D|uȕ_JxfHAQpsd%4*=jI?5P9 $#CI v]1 ZI\S ';9NZ>I}k ͏9eZO[HՖDH <LJXq@Yo"%7oj+(]i!z C˧Ō_W¿oM5ֹo=  J`EW.嫠bxcKO6~5vw+-ٜl%!];_z)NfL͡% z=Z rx oRrG rc XY('M<,B1`a5CCJR)9nnmM /_Z'/+iiM37b;J'*mrxMDt hPrɜ{lO=H6`7xw݃kWk+>x\mf=JT€1A`SK[h/mOq46<5X~k9Ui/KI46%ZյB4vGiyae{e6 6qHVhIAi]風T^ϖ,O>1);ބ3V\"+p 8jY‹Ynrf (GNXd ;40~8`g1#/TȑmI| hn&q!2Y~qWh^L/`i︤HBc/~lMx\0`4wL*}CzzLLIsxܾz(m+ZUBc@sFkis#, W[1ѷQg -^\,O_XzEYx?6|~J ,.K+W)>_v+Kvwf)1l o [~X۞YO%Dgp7s)[ԳK2g)(5V'=}lMGRI&-{<3]'4Yw'y)C2]ͺ >)N6K*l^ŕp |J;/]YtV$zK-[e("nrPb?j?C<͖W"lLT Qn߮\R|J܅ҙzaLN턵GQANpڡ=zGT{et\) #pScGSƶR(wVJ, %،= 3dc psw׺ORw%DIx\c:w8~א8a+6ZE`_4tRYJ x \ p~e?ghۣ@ʘ뗩%p8>-C7br(RVodh;~~2A6?]{/B@e{#w&ƒj3`5;N>-vwCMcg_$(CsjD ߃!c3A1hK/Oe 3AZ(ԋPO&Ĕa'x0.h57R\) 57rx \i!WPrJݯ7Փç:9s N(6f<'laÜȊ(CCDCUeKYZE%'_()pKO+>Hi4k|D~"-gފK/_7  [I85m-v&:zک\3eq\_5O?[4~ )He7p| ѶZˀl+M>u7#gYz}mJ\ 4[A{2^b^R&D_p{BwOơ*U}OX~1WN,歜U w xZ!5뉦A0Bs>6F+d}tS5NFq#WC[̲c-U]{J@f>patEj dH~!ps 
'9LKevE!83Y<GAX}?n3x^Fɝil_U[_!N;ԽZݪrUȒ>$wjvxvPCPaT#===/N gFRTs x\N:ԋ߱ v k`($"ڮh{eWh]]dKcsHvzk?"1pk\ira'EQvJk.nx[%H֣~~Dv&WۍlW:)5:{hFe|K + LoIQP6BiWk(vF[Iq7 Τ&e:FSc2>@ RRޏ&DJؕ ^%h~"&J4?nD šʾe ?QZ=e\v/#*Νעl|M^%w8 ?/]d)1ģ)U}qs0{X*7ge]m?xWڀ==6Gz0`B:Gc^W+5P 4[?-ʞhc ۶6W_vΠblZ,E 4QkIm]+TI'ˆ3xOԭ%IA1&죥v{{ˤlHO+Ɵ"KEe %"x ~\$H+[bGTוS 1k!keb Wi|չ e/s TU_n`q^Hnb = )i|%I֟~Z8%av0Oi7[qYU?Uv0ORs ӭ>.t3cgJ7o6mUlx)[x% k֪P\ڼxa*3'bCqqcFR%]uImpIpSShy(r%B > mwL㻥 n7OF/ϺՀ?'\ckRME{MBƮ7IêQ,GfP~ַ \]>>?Ȥ&6y7}F _K8zs_T**ش8 >m7+7?YVhb?w] *V0MGĎc Y )2&u"Qr? Ӎ(-ibg괟^˱uZmMee*o%TO%]n1;%Uh+5rQrG/Ϻ/})%gQ/}!PYtwbG9.+/{G/Ϻ_@[,_@q#<t !<~Ęl:iPrGf;J7`VIG!mc5>\b.T-$X&)͓+_A6ɕWPqBEv*ݤO6`x;[Ph+_yny凞Ī@_Z*ܒu_C;J@y'IgmHWa/WSr<# AAcQ_&=gWB_[&ж$ 0jl&-W MU kŎicij3vw Wv+$?C|14jSH_8$%C~I7$ {۬t'gsIw .@\'ڴSӑr+E:=|X[vlaNEroo+~.Dh؉w6>~[ѫ~&m]y% <VKW$2}G{{Lvls8 >0~W5' at4Z6 |96| Xz΂j{1di~9S, q |̘ݶbSdzD5R54SIɷbگ+wb-Bm{$3zI&!8 o߮HFO h c,pK&JkV>&)[0X ?>4KT/Z+?r{Fc6VelTS9u)w 8%& m>n5J9S++d=6`7ƌ ) lnNJ"p:al7؟ҿ4S~)t)=JhnJOiF@!A 8ބ)=J 4^ȯ&N2aǠR;|E؜+.u^ῆ͆/],y]jG|v>:uMUCLu˜_tʿ% %CRM4B>J80u!RF/]6iY#A׀w%.*ߑ{Eg=~\i&ogTyfdOӛ+˱!ڕu8hZ | PzDW; lv#W0 IUvRÆ?4 ބÐY-pÐ5+%;xtQk礫 x_: O$?)$<$DsSD\^'l|?"C)"<EY(V!pX%(vWɞ.diU.Z$01cuMIJ(?pR|ope>/bO{1lO. ZP; Dƀ:@uLV'<%72lӪl&w3jOy%Cań@t]PttpןWogMؐu8[@ɵy$H|I$H[b@wC:gƬXK)p[ƀw H2 k_*T>fFnFr޳3X憴&xH7365l1_!pEG?ZY$0q?Ӝ?1ڎ| 9xϋ΀+2֙>mfJQ h8XoKrdfJGex\m t LFW뇒K''-YRDA:wP h'Z(za b&3(@s $}(Rtxzsl3 Զ̴͖%[I p #ЊFb< aEZQi>+9%@Whn[{VRM_}(9* ?ق@z{Mˍ d *^C?vr$#\j.u)Ry x\'CQN`g;)VsCKFS" NjǖAdlT0KO\C_3Wl=4{|Gɷpk5_oG3ab2nѧDZo?*uyu+E79Y54ɘ!MC[-E+h@icN NA8>nRaLo\LQrS+[òQ̅h IƦEPmu3EjD=:)NE8uElqRKlG)7$L r_`R"CWQq^9KKˣS 'wMh%ra5 ]<Δ hAO+mRA:,lIFf?!h[i9s{idvHͳGcJIi' Ҵ\UEɵjNaMR$.'Rlx\0[v%#{Q '#d̘Ý9\(Hc\W*r.G>ןA/Kaw;ټe hYQgS;Oյ܌PUikF՚8NN`/xokSǁ'GȈ$^' 6:?DGUy ީdCHeWR?oв+m;_]1<[,g!Xi,/?4^TZ&boCh9bD_nx_/mK>{/^"R?@_6FG)𔶍6WcE0XGExoee{\\'M/ʓkn`[vL=~Kt%(Y.e2] \e<:-e,/2 GYwYܲcӥv'2#UU<yxrY B (%@F^FK ;$8aKL(QX;QbV6Ml{.T.ҕnL&ؼdk鏦þ1nV~|P6Cq7AaǷJ] Do/lJyF3i \'vx]x~W!rj(%cJ? 
>53~12&m!D y_#clW[|*(^we.fYoKyrCAP:zO!v㖣Q+!'xn_3|jp2Z;?`'m඀J g1o`X-ĿT3Vs 5]u@,0/8a-FBBS \ǎJlnǘmcǘ\FCㇴlC^;iVtŮl· 5إRuO3z޴a;:!ٽjtB~lEҺ'W 5VIMn)S~ wSSul*>%\F\FC~@~-upHqs ;xK*zp8a*'u}j+Bru}ixMpBMnY\|B_U{ȡU-!\|H ;}CsJx\pŨ>OOF'1J,.ɍ:i<'ѓsXbmL8dGxѨHAM'w 8&پ)1hi i].Y,LhOf]1y]E-tC#i?ټ_8N+Ȥ'""]fK]w6NDY{).FlyF͍ߏ1a;bSjNb҅~dcʅګt2e5?L>~:ɔ\/'l 6Eaobh+p{+KJKJ~s nuJqQ̱wfIB{#)i>4m {A"ROm,×Weo?RPB3.[AFe:ɞNOLɝ^ѭIZd^5)'#rv؋Zuq)' fTHl''viZA=#~/>b&,8a?׷);/#vCGĈ,XA\,DoI؏ E44ދ`X:gJtCA$pZp@e= \ DɝYwg7F硸0;䆞pif/Qd٨*W;臍RSVVVh&!VbXBp0$1#A?r/vUV}@~u v\ZW<h=^`k;I`CLj%^=0:?{*OJOJ~s ?g{žmQ_ r*&lzVϾf:gY<2FOmq8k5׻OuL|7N`}Q@5c% 8&ڞ.yHע$|G+I-'aՓp qc55⊶ v\eMɵ N[doG\f,XLjs2j9 xr) G:5ج&2l&YC; 𼓷JwH:"N؍"m̓#ѩߑSy>TO% xVpū"&|P֗H97վӞ3V` v NO(V 5BOaEZv 6;Q{U&='7z) 4;Rl:/ӖoKϷ!U l 9'{vrU]f>1]T q*s/h\u d&Y" 7S=ͪnAK4}dsNF8 '61 RX=o z6$\;pQ&78 jV N8 P11) aC~ )>Q $+.+zGz^$g7\u1 1|kü]QzZU!E,f2,x39(A s%Gn9>@!Vڵ;:4d.晝y!WgERA?KInJ;fR3h;3n;f(e4cFQ8fPH vVݍrPDw21ԐHK /|xՋ9K,;\偽^eRxPp{R+IExNdYu0 Ǩ+k>9 >z,x.]MVS~Z$ŲA3X(tuJ:) WZ7pJ/ҰxPp_1 90%wxZpۦ;KaR!P7I;#iV П^ԮaU?C ;eȽ\P.=3=ٳU)f9jU5S]{VҊq%Vi-um澥7ۧ[4fŰiZݬD:yH׌$Lf3^0bFHQ' Xs4;)fP]﹯4yJrOxi-;= E)>_Ij* ]6jQH]Z(+mi >=σM9;05; ̔-D/E,/W'/o>=;3ysQ 7VcjM%:YE.EeJKMSSZdD]ToruCPKV{]60T6So+ye><-RzlõXΨdwwQ&]tEH} <#/.!݀n"y9Xe@i7\FQߐG~!?Av1v.CrVUs`V(d|vVo`F Mn`F\q $  (c>? O$ ܇%㥊Y{N)݋kb^{zʖvݜt=K+wzsqe{~1J< FOyfeo<0G$ GIƬt-.{6!+ d喯df*Ah.2x|Tu ϱ\yFT\mӻ\)v n\-u$jsk 5ZmCT_BhE브BɷqݖQPpo|KS8/鿢.-BfNLCJ%o?*8iFfo88> mxWphl+yYXrQD |L!S Sm lp:ǭH|c|[m8ظ%b6{:sC!;]; x6uL21zKD/)e&CH%i !l\c9ÝjZe LT$xTpW.Lkj{6L xn:|4J\1L-lZ*zl;ִ9jm2,ڙq6`sQK*W;{%o8.Q3efN~T- 7sW̧el!pBO2%H9wzi /*S@%G_|d|<£'\$j;͸[2^{k ULvxNp!:r#c]\m&ZX灷W[l'!()̈́Yk?v D,Zɖ.ki/V]ҡ12pQ~?LXNA˹=% <$ZX>&c3qw3OuhF#G#Ň7p18b#8(™Ԣ OHN,\Ds6FG.HO>>4ZP1V(. 
]3]~#|O`SKcxl$'^'cuyؙXJ2$'K{^J&: b7ൄTak-Q)YrNqJmxNp~g=@ q&)txOpYsV}K=1IYސPX3$.} aU$57c7 %wxKpi6͒u,qPp=%ځ)} av`* =[M܁N La&dTqCMYo"{7_+濛s83du8iBɵ{&M<朰W+`>!.%<*Zm}m$XUJNN4i+YOI UpgֆknP>EwNh$ Rx@pfXd'9?">0+г٢=+]N~1mTFUwt\b'; "3+#"$l1'=ϠZ j:TO6tG)~l Q]-W,(V> 5m3 |8`j/ޚS{ؚ9b@K5Vh~}7M2ҝ 0\px"t,|,V$JM[,u,bN6/Yq|KAe]ݺ#h˻9kI- 舟7kW d8;R Kf\Eɵw nzK9;Xr3Bf۰R(v׵+~Eg7>[=XJýw\:Kʐ~g{N}Y"ӂ𩍌baKJÛ/ox)Iy=| 5) tЂIwUIvVX̅jf*#4mE2Jɷq&G EAkNlyPHA`jzv#=v(Ebz"2YŢ%5PÝ$Mh~W͙AAUmծlj%?mLUS;C˘gB:sA}C+{9k%lD,w,DJ([dvK&xp2BsO-,aI|:4khmC+Ih=i@ G݆|*aqTggnX[Rb+/SuR=PP?.ډ&46;SHLX %VFL^v j-"^:* G˫uH)`hYg޳~pDogvf@Z2,F_L[텴D甦]d\KO[]Yzo(&r&R~ViG*@߃h}Tʿ!}Q^m#uKRZw@^pՎ?"s*YY }jFiꧡF}u_qGbЦ^A:RՔ /@uCǗps:xvv&{Ui)SҲ !{X[WTQN OCҁKM{t|Q{_cE6\:C_r})[b F/#W\:x±uGFRϲ~:E?tT\-_qdpP2TI8٣.1V5PJ}Jn)KƮI6NafkBmf |::Ez_Nˆ=ɠ I>s7z}B1uf u#٠_n6.S]W'Zn.: e 3<–|ud3F!j{ ؽ7$DXcWVoN2YN ۇVM6߈Jh3 [ Փg!p'dV2 4VMKh+^_?N2_&^10sIhnE=. }n@ϻ~z}+97n~!֜a^F,.la&PR#^ǍՒd| d 5cxv9GX.[KM悕I즕N8coWNכ`BNCjBaB^a az/B?F*x\7XװSOn+KLxc0ϖ=iΓ,_^K{ŷA$aD4~vvC/`n`t1T qNV_b$_yOb|#ɱw^o^~ Cx?׼;;T izWzP]Ghn5˪PC5j䇋Zy8A:P?6j  mZۨ$\tCY{gReu(%<ǵ1&{ J4J72}8Vse4Uǯ"[v}rq;=ݢ "F"}8G睼ȉ\!gJ,MdpY«>!4?J9^Z}qIZC8QU7) ;δ~}@Y J ծY/u!LQ.IF5>xG>?\^G.k碛a>߆.nn~C4b3*tybźA^G`e2b+ Kt}w›Ȉ~ċaIKf;H= 6?*Wlіgc׏V#,fԈf45fJ[d[2HR5e}(&47K״N^ֹ)%;ߏAˏM[Á_/vg[ϩK2QǖG䫪Gul@C p\V=VQcaz<⟞#o p"F_OA+Ih=jy FӬ4j9P%?<^Ej^iRzl8 aqFb7(vvp:_gBsS_;Xq/Y0熓5?<>DjҪNbf /A%#m̺'B1{U'7UY 4<ƒwЌ %}}hT_ઠPV?@Bs *Jq@PXS|+ BIx 2iSnNiMVR/AHSIfs THb)E{BuR=MxlԗEG7knWF{㟿 A[1nY .Ғ$Vؒa1̮Xώf Q 귊uϑBs{足(O^Կ(]Jl!w mZujeLy!!-Ң-(Vm.p_A&!Lf ꜋eHJW: D{UWo$DTd`(٢+7tV&$-%^28^ɜع:ɷ;70]vs>d=o$a%;c/|X"Ul>/I_MQ{:sݰV>mI /y'+l2cKM>f{`t<# q 'o[q^?=s <<:"Zʹ'˂'n2I? 
[binary R data omitted]

ggeffects/tests/testthat/test-gamlss.R
.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest &&
    require("testthat") &&
    require("ggeffects") &&
    require("gamlss") &&
    packageVersion("insight") > "0.13.2") {

  data(iris)
  m1 <- gamlss(Sepal.Length ~ Sepal.Width + random(Species),
               sigma.formula = ~ Sepal.Width,
               data = iris)

  test_that("ggpredict", {
    p <- ggpredict(m1, "Sepal.Width")
    expect_equal(
      p$predicted,
      c(3.84881, 4.00918, 4.16956, 4.32994, 4.49031, 4.65069, 4.81107,
        4.97144, 5.13182, 5.2922, 5.45257, 5.61295, 5.77333),
      tolerance = 1e-2
    )
    expect_equal(colnames(p), c("x", "predicted", "std.error", "conf.low", "conf.high", "group"))
    expect_equal(
      p$conf.low,
      c(3.66712, 3.85232, 4.03678, 4.22004, 4.40113, 4.5782, 4.74796,
        4.90714, 5.05622, 5.19881, 5.33789, 5.47507, 5.61117),
      tolerance = 1e-2
    )
  })
}

ggeffects/tests/testthat/test-clmm.R
.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest &&
    require("testthat") &&
    require("ggeffects") &&
    require("ordinal") &&
    require("MASS") &&
    getRversion() >= "3.6.0") {

  data(wine, package = "ordinal")
  m1 <- clmm(rating ~ temp + contact + (1 | judge), data = wine)

  test_that("ggpredict", {
    p <-
ggpredict(m1, "temp") expect_equal(p$predicted[1], 0.09760731, tolerance = 1e-3) ggpredict(m1, c("temp", "contact")) }) test_that("ggeffect", { p <- ggeffect(m1, "temp") expect_equal(p$predicted[1], 0.0730260420584538, tolerance = 1e-3) ggeffect(m1, c("temp", "contact")) }) test_that("ggemmeans", { p <- ggemmeans(m1, "contact") expect_equal(p$predicted[1], 0.08691649, tolerance = 1e-5) ggemmeans(m1, c("temp", "contact")) }) } ggeffects/tests/testthat/test-Gam2.R0000644000176200001440000000151414036250437017104 0ustar liggesusersif (require("testthat") && require("ggeffects") && require("gam") && getRversion() >= "3.6.0") { data(kyphosis) m1 <- gam::gam( Kyphosis ~ s(Age, 4) + Number, family = binomial, data = kyphosis, trace = FALSE ) test_that("ggpredict", { p <- ggpredict(m1, "Age") expect_equal(p$predicted[1], 0.02043849, tolerance = 1e-3) expect_s3_class(ggpredict(m1, c("Age", "Number")), "data.frame") }) test_that("ggeffect", { p <- ggeffect(m1, "Age") expect_equal(p$predicted[1], 0.106151, tolerance = 1e-3) expect_s3_class(ggeffect(m1, c("Age", "Number")), "data.frame") }) test_that("ggemmeans", { p <- ggemmeans(m1, "Age") expect_equal(p$predicted[1], 0.02043849, tolerance = 1e-3) expect_s3_class(ggemmeans(m1, c("Age", "Number")), "data.frame") }) } ggeffects/tests/testthat/test-geeglm.R0000644000176200001440000000173414046746430017566 0ustar liggesusersif (require("testthat") && require("ggeffects") && require("geepack")) { data(dietox) m1 <- suppressWarnings(geeglm( Weight ~ Cu * Time + I(Time ^ 2) + I(Time ^ 3), data = dietox, id = Pig, family = poisson("identity"), corstr = "ar1" )) m2 <- suppressWarnings(geeglm( Weight ~ Cu * Time + I(Time ^ 2) + I(Time ^ 3), data = dietox, id = Pig, family = poisson() )) test_that("ggpredict", { p <- ggpredict(m1, c("Cu", "Time")) expect_equal(p$predicted[1], 35.47711, tolerance = 1e-2) }) test_that("ggemmeans", { p <- ggemmeans(m1, c("Cu", "Time")) expect_equal(p$predicted[1], 35.47711, tolerance = 1e-2) }) 
  test_that("ggpredict", {
    p <- ggpredict(m2, c("Cu", "Time"))
    expect_equal(p$predicted[1], 35.63929, tolerance = 1e-2)
  })

  test_that("ggemmeans", {
    p <- ggemmeans(m2, c("Cu", "Time"))
    expect_equal(p$predicted[1], 35.63929, tolerance = 1e-2)
  })
}

# ---- ggeffects/tests/testthat/test-lmer.R ----

.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest) {
  if (suppressWarnings(
    require("testthat") &&
      require("ggeffects") &&
      require("sjlabelled") &&
      require("lme4") &&
      require("sjmisc")
  )) {
    # lmer ----
    data(efc)
    efc$grp <- to_label(efc$e15relat)
    fit <- lmer(neg_c_7 ~ c12hour + e42dep + c161sex + c172code + (1 | grp), data = efc)

    test_that("ggpredict, lmer", {
      expect_s3_class(ggpredict(fit, "c12hour"), "data.frame")
      expect_s3_class(ggpredict(fit, c("c12hour", "c161sex")), "data.frame")
      expect_s3_class(ggpredict(fit, c("c12hour", "c161sex", "c172code")), "data.frame")
      expect_s3_class(ggpredict(fit, "c12hour", type = "re"), "data.frame")
      expect_s3_class(ggpredict(fit, c("c12hour", "c161sex"), type = "re"), "data.frame")
      expect_s3_class(ggpredict(fit, c("c12hour", "c161sex", "c172code"), type = "re"), "data.frame")
    })

    test_that("ggpredict, lmer", {
      pr <- ggpredict(fit, "c12hour")
      expect_equal(pr$std.error[1:5], c(0.2911, 0.2852, 0.2799, 0.2752, 0.2713), tolerance = 1e-3)
      pr <- ggpredict(fit, c("c12hour", "c161sex", "c172code"), type = "re")
      expect_equal(pr$std.error[1:5], c(3.5882, 3.58185, 3.58652, 3.58162, 3.57608), tolerance = 1e-3)
    })

    test_that("ggpredict, lmer-simulate", {
      expect_s3_class(ggpredict(fit, "c12hour", type = "sim"), "data.frame")
      expect_s3_class(ggpredict(fit, c("c12hour", "c161sex"), type = "sim"), "data.frame")
      expect_s3_class(ggpredict(fit, c("c12hour", "c161sex", "c172code"), type = "sim"), "data.frame")
    })

    test_that("ggeffect, lmer", {
      expect_s3_class(ggeffect(fit, "c12hour"), "data.frame")
      expect_s3_class(ggeffect(fit, c("c12hour", "c161sex")), "data.frame")
      expect_s3_class(ggeffect(fit, c("c12hour", "c161sex", "c172code")), "data.frame")
    })

    data(efc)
    efc$cluster <- as.factor(efc$e15relat)
    efc <- std(efc, c160age, e42dep)
    m <- lmer(
      neg_c_7 ~ c160age_z * e42dep_z + c161sex + (1 | cluster),
      data = efc
    )

    test_that("ggeffect, lmer", {
      p1 <- ggpredict(m, terms = c("c160age_z", "e42dep_z [-1.17,2.03]"))
      p2 <- ggemmeans(m, terms = c("c160age_z", "e42dep_z [-1.17,2.03]"))
      expect_equal(p1$predicted[1], p2$predicted[1], tolerance = 1e-3)
    })

    data(efc)
    efc$cluster <- as.factor(efc$e15relat)
    efc <- as_label(efc, e42dep, c172code, c161sex)
    efc$c172code[efc$c172code == "intermediate level of education"] <- NA
    m <- lmer(
      neg_c_7 ~ c172code + e42dep + c161sex + (1 | cluster),
      data = efc
    )

    test_that("ggeffect, lmer", {
      expect_s3_class(ggpredict(m, terms = "e42dep"), "data.frame")
      expect_s3_class(ggemmeans(m, terms = "e42dep"), "data.frame")
    })

    test_that("ggeffect, lmer", {
      p1 <- ggpredict(m, terms = "e42dep")
      p2 <- ggemmeans(m, terms = "e42dep")
      p3 <- ggemmeans(m, terms = "e42dep", condition = c(c161sex = "Male", c172code = "low level of education"))
      expect_equal(p1$predicted[1], 8.902934, tolerance = 1e-3)
      expect_equal(p2$predicted[1], 9.742945, tolerance = 1e-3)
      expect_equal(p1$predicted[1], p3$predicted[1], tolerance = 1e-3)
    })

    m <- suppressWarnings(lmer(
      log(Reaction) ~ Days + I(Days^2) + (1 + Days + exp(Days) | Subject),
      data = sleepstudy
    ))

    test_that("ggeffect, lmer", {
      p1 <- ggpredict(m, terms = "Days")
      p2 <- ggemmeans(m, terms = "Days")
      p3 <- ggeffect(m, terms = "Days")
      expect_equal(p1$predicted[1], 253.5178, tolerance = 1e-3)
      expect_equal(p2$predicted[1], 253.5178, tolerance = 1e-3)
      expect_equal(p3$predicted[1], 5.535434, tolerance = 1e-3)
    })

    test_that("ggeffect, lmer", {
      expect_s3_class(ggpredict(m, terms = c("Days", "Subject [sample=5]"), type = "re"), "data.frame")
    })
  }
}

# ---- ggeffects/tests/testthat/test-MixMod.R ----

.runThisTest <-
Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest) {
  if (require("testthat") && require("ggeffects") && require("GLMMadaptive") && require("lme4")) {
    # fish <- read.csv("https://stats.idre.ucla.edu/stat/data/fish.csv")
    # fish$nofish <- as.factor(fish$nofish)
    # fish$livebait <- as.factor(fish$livebait)
    # fish$camper <- as.factor(fish$camper)

    data(fish)

    set.seed(123)
    m1 <- GLMMadaptive::mixed_model(
      count ~ child + camper,
      random = ~ 1 | persons,
      zi_fixed = ~ child + livebait,
      zi_random = ~ 1 | persons,
      data = fish,
      family = GLMMadaptive::zi.poisson()
    )

    m2 <- GLMMadaptive::mixed_model(
      nofish ~ xb + zg,
      random = ~ 1 | persons,
      data = fish,
      family = binomial
    )

    test_that("ggpredict", {
      # this test fails on osx, but not on windows
      skip_on_cran()
      set.seed(123)
      expect_warning(p <- ggpredict(m1, c("child", "camper"), type = "fe.zi"))
      expect_equal(p$predicted[1], 2.045537, tolerance = 1e-2)
      set.seed(123)
      p <- ggpredict(m1, c("child", "camper"), type = "re.zi", condition = c(count = 3.296))
      expect_equal(p$predicted[1], 4.982773, tolerance = 1e-2)
      set.seed(123)
      p <- ggpredict(m1, c("child", "camper"), type = "re.zi", condition = c(count = 0))
      expect_equal(p$predicted[1], 0.5115884, tolerance = 1e-2)
    })

    test_that("ggemmeans", {
      set.seed(123)
      p <- ggemmeans(m1, c("child", "camper"), type = "fe.zi")
      expect_equal(p$predicted[1], 1.816723, tolerance = 1e-2)
      set.seed(123)
      p <- ggemmeans(m1, c("child", "camper"), type = "re.zi")
      expect_equal(p$predicted[1], 3.457011, tolerance = 1e-2)
    })

    test_that("ggpredict", {
      expect_warning(expect_message(ggpredict(m1, c("child", "camper"), type = "fe")))
      expect_warning(expect_message(ggpredict(m2, "zg", type = "fe.zi")))
    })
  }
}

# ---- ggeffects/tests/testthat/test-contrasts2.R ----

if (suppressWarnings(
  require("testthat") &&
    require("ggeffects") &&
    require("lme4") &&
    require("sjlabelled")
)) {
  data(efc)
  efc$e15relat <- as_label(efc$e15relat)
  efc$e42dep <- as_label(efc$e42dep)
  efc$c172code <- as.factor(efc$c172code)

  m <- lmer(neg_c_7 ~ e42dep + c172code + (1 | e15relat), data = efc)

  test_that("ggpredict, contrasts-1", {
    options(contrasts = rep("contr.sum", 2))
    pr <- ggpredict(m, c("c172code", "e42dep"))
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-2", {
    options(contrasts = rep("contr.sum", 2))
    pr <- ggpredict(m, "c172code")
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-3", {
    options(contrasts = rep("contr.sum", 2))
    pr <- ggpredict(m, "e42dep")
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-4", {
    options(contrasts = rep("contr.treatment", 2))
    pr <- ggpredict(m, c("c172code", "e42dep"))
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-5", {
    options(contrasts = rep("contr.treatment", 2))
    pr <- ggpredict(m, "c172code")
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-6", {
    options(contrasts = rep("contr.treatment", 2))
    pr <- ggpredict(m, "e42dep")
    expect_false(anyNA(pr$std.error))
  })
}

# ---- ggeffects/tests/testthat/test-brms-ppd.R ----

.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest) {
  if (suppressWarnings(
    require("testthat") &&
      require("brms") &&
      require("ggeffects")
  )) {
    ## TODO enable once rstan works w/o problems again...
    if (FALSE) {
      x <- rnorm(10, 0)
      b <- runif(2)
      s <- ifelse(diag(2) == 0, 0.23, 1)
      er <- cbind(rnorm(10, 0, s), rnorm(10, 0, s))
      y <- apply(t(b), 2, `*`, x) + er
      d <- data.frame(y1 = y[, 1], y2 = y[, 2], x)

      m1 <- suppressWarnings(brm(bf(mvbind(y1, y2) ~ 1 + x) + set_rescor(TRUE), data = d, chains = 2, iter = 500, refresh = 0))
      m2 <- suppressWarnings(brm(y1 ~ x, data = d, chains = 2, iter = 500, refresh = 0))

      test_that("ggpredict, brms-ppd", {
        expect_type(ggpredict(m1, ppd = TRUE), "ggalleffects")
        expect_s3_class(ggpredict(m1, "x", ppd = TRUE), "data.frame")
        expect_type(ggpredict(m2, ppd = TRUE), "ggalleffects")
        expect_s3_class(ggpredict(m2, "x", ppd = TRUE), "data.frame")
      })

      test_that("ggpredict, brms-ppd", {
        expect_type(ggpredict(m1, ppd = FALSE), "ggalleffects")
        expect_s3_class(ggpredict(m1, "x", ppd = FALSE), "data.frame")
        expect_type(ggpredict(m2, ppd = FALSE), "ggalleffects")
        expect_s3_class(ggpredict(m2, "x", ppd = FALSE), "data.frame")
      })
    }
  }
}

# ---- ggeffects/tests/testthat/test-brms-categ-cum.R ----

.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest && suppressWarnings(
  require("testthat") &&
    require("brms") &&
    require("ggeffects") &&
    require("insight")
)) {
  m1 <- insight::download_model("brms_ordinal_1")
  m2 <- insight::download_model("brms_ordinal_1_wt")
  m3 <- insight::download_model("brms_categorical_1_num")
  m4 <- insight::download_model("brms_categorical_1_fct")
  m5 <- insight::download_model("brms_categorical_1_wt")

  test_that("ggpredict, brms-categ-cum", {
    p1 <- ggpredict(m1, c("mpg"))
    p2 <- ggpredict(m2, c("mpg"))
    p3 <- ggpredict(m3, c("mpg"))
    p4 <- ggpredict(m4, c("mpg"))
    p5 <- ggpredict(m5, c("mpg"))

    # m3/m4 are the same, except response is numeric/factor, so predictions should be the same
    p4$response.level <- as.numeric(p4$response.level)
    for (resp.level in c(3:5)) {
      expect_equal(
        p3[p3$response.level == resp.level, ],
        p4[p4$response.level == resp.level, ],
        ignore_attr =
          TRUE,
        tolerance = 0.05
      )
    }
  })
}

# ---- ggeffects/tests/testthat/test-lrm.R ----

if (suppressWarnings(
  require("testthat") &&
    require("ggeffects") &&
    require("sjmisc") &&
    require("rms")
)) {
  data(efc)
  efc$neg_c_7d <- dicho(efc$neg_c_7)
  m1 <- lrm(neg_c_7d ~ c12hour + e42dep + c161sex + c172code, data = efc)

  test_that("ggpredict, lrm", {
    pr <- ggpredict(m1, "c12hour")
    expect_equal(pr$predicted[1], 0.4008948, tolerance = 1e-2)
  })

  test_that("ggeffect, lrm", {
    expect_null(ggeffect(m1, "c12hour"))
  })

  test_that("ggemmeans, lrm", {
    pr <- ggemmeans(m1, "c12hour")
    expect_equal(pr$predicted[1], 0.4008948, tolerance = 1e-2)
  })
}

# ---- ggeffects/tests/testthat/test-tobit.R ----

if (suppressWarnings(
  require("testthat") &&
    require("ggeffects") &&
    require("AER")
)) {
  unloadNamespace("VGAM")
  data("Affairs")
  m1 <- tobit(affairs ~ age + yearsmarried + religiousness + occupation + rating, data = Affairs)

  test_that("ggpredict, tobit", {
    pr <- ggpredict(m1, "yearsmarried")
    expect_equal(pr$predicted[1], -10.15089, tolerance = 1e-4)
  })

  test_that("ggeffect, tobit", {
    expect_null(ggeffect(m1, "yearsmarried"))
  })

  test_that("ggemmeans, tobit", {
    pr <- ggemmeans(m1, "yearsmarried")
    expect_equal(pr$predicted[1], -10.15089, tolerance = 1e-4)
  })
}

# ---- ggeffects/tests/testthat/test-glmrob_base.R ----

if (suppressWarnings(
  require("testthat") &&
    require("ggeffects") &&
    require("sjmisc") &&
    require("robustbase")
)) {
  data(efc)
  efc$neg_c_7d <- dicho(efc$neg_c_7)
  m1 <- glmrob(neg_c_7d ~ c12hour + e42dep + c161sex + c172code, data = efc, family = binomial)

  test_that("ggpredict, lrm", {
    pr <- ggpredict(m1, "c12hour")
    expect_equal(pr$predicted[1], 0.4035267, tolerance = 1e-4)
  })

  test_that("ggemmeans, lrm", {
    expect_null(ggemmeans(m1, "c12hour"))
  })
}
# ---- ggeffects/tests/testthat/test-get_titles.R ----

if (suppressWarnings(
  require("testthat") &&
    require("ggeffects") &&
    require("haven") &&
    require("sjlabelled") &&
    require("sjmisc")
)) {
  data(efc)
  efc$c172code <- sjmisc::to_factor(efc$c172code)
  fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)
  mydf <- ggpredict(fit, terms = c("c12hour", "c161sex", "c172code"))

  test_that("ggpredict, get_x_title", {
    expect_equal(get_x_title(mydf), "average number of hours of care per week")
  })

  test_that("ggpredict, get_y_title", {
    expect_equal(get_y_title(mydf), "Total score BARTHEL INDEX")
  })

  test_that("ggpredict, get_legend_labels", {
    expect_equal(get_legend_labels(mydf), c("Male", "Female"))
  })

  test_that("ggpredict, get_legend_title", {
    expect_equal(get_legend_title(mydf), "carer's gender")
  })

  mydf <- ggpredict(fit, terms = "c172code")

  test_that("ggpredict, get_x_labels", {
    expect_equal(get_x_labels(mydf), c("low level of education", "intermediate level of education", "high level of education"))
  })
}

# ---- ggeffects/tests/testthat/test-rstanarm-ppd.R ----

.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest) {
  if (suppressWarnings(
    require("testthat") &&
      require("rstanarm") &&
      require("ggeffects")
  )) {
    x <- rnorm(30, 0)
    b <- runif(2)
    s <- ifelse(diag(2) == 0, 0.23, 1)
    er <- cbind(rnorm(30, 0, s), rnorm(30, 0, s))
    y <- apply(t(b), 2, `*`, x) + er
    d <- data.frame(y1 = y[, 1], y2 = y[, 2], x)
    d$group <- sample(c("a", "b", "c"), size = nrow(d), replace = TRUE)

    m1 <- suppressWarnings(rstanarm::stan_mvmer(
      list(
        y1 ~ x + (1 | group),
        y2 ~ x + (1 | group)
      ),
      data = d, chains = 2, iter = 500, refresh = 0
    ))

    m2 <- suppressWarnings(rstanarm::stan_glm(y1 ~ x, data = d, chains = 2, iter = 500, refresh = 0))

    test_that("ggpredict, rstanarm-ppd", {
      expect_s3_class(ggpredict(m1, ppd = TRUE), "ggalleffects")
      expect_s3_class(ggpredict(m1,
"x", ppd = TRUE), "data.frame") expect_s3_class(ggpredict(m2, ppd = TRUE), "ggalleffects") expect_s3_class(ggpredict(m2, "x", ppd = TRUE), "data.frame") }) test_that("ggpredict, rstanarm-ppd", { expect_error(ggpredict(m1, ppd = FALSE)) expect_error(ggpredict(m1, "x", ppd = FALSE)) expect_s3_class(ggpredict(m2, ppd = FALSE), "ggalleffects") expect_s3_class(ggpredict(m2, "x", ppd = FALSE), "data.frame") }) } } ggeffects/tests/testthat/test-gee.R0000644000176200001440000000075214036250437017061 0ustar liggesusersif (require("testthat") && require("ggeffects") && require("emmeans") && require("gee")) { data(warpbreaks) m1 <- suppressMessages(gee(breaks ~ tension, id = wool, data = warpbreaks, silent = TRUE)) test_that("ggpredict", { p <- ggpredict(m1, "tension") expect_equal(p$predicted[1], 36.38889, tolerance = 1e-3) }) test_that("ggemmeans", { p <- ggemmeans(m1, "tension") expect_equal(p$predicted[1], 36.38889, tolerance = 1e-3) }) } ggeffects/tests/testthat/test-orm.R0000644000176200001440000000114614036250437017114 0ustar liggesusersif (suppressWarnings( require("testthat") && require("ggeffects") && require("rms") )) { #example data set.seed(123) d <- data.frame( y = ifelse(rnorm(100) > 0, 1, 0), x = rnorm(100) ) m <- orm(y ~ x, data = d) test_that("ggpredict, orm", { pr <- ggpredict(m, "x [-2:2 by=1]") expect_equal(pr$predicted, c(0.55423, 0.5362, 0.51807, 0.49989, 0.48171), tolerance = 1e-2) }) test_that("ggemmeans, orm", { pr <- ggemmeans(m, "x [-2:2 by=1]") expect_equal(pr$predicted, c(0.55423, 0.5362, 0.51807, 0.49989, 0.48171), tolerance = 1e-2) }) } ggeffects/tests/testthat/test-brms-trial.R0000644000176200001440000000077714046746430020410 0ustar liggesusers.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes" if (.runThisTest && suppressWarnings( require("testthat") && require("brms") && require("ggeffects") && require("insight") )) { m1 <- insight::download_model("brms_mixed_6") m2 <- insight::download_model("brms_mv_4") m3 <- 
    insight::download_model("brms_2")

  test_that("ggpredict, brms-trial", {
    ggpredict(m1, c("Base", "Trt"))
    ggpredict(m2, "Species")
    ggpredict(m3, c("treat", "c2"))
  })
}

# ---- ggeffects/tests/testthat/test-nlme.R ----

if (suppressWarnings(
  require("testthat") &&
    require("ggeffects") &&
    require("sjlabelled") &&
    require("nlme") &&
    require("lme4") &&
    require("sjmisc")
)) {
  # lme ----
  data(Orthodont)
  fit <- lme(distance ~ age + Sex, data = Orthodont, random = ~ 1 | Subject)

  test_that("ggpredict, lme", {
    expect_s3_class(ggpredict(fit, "age"), "data.frame")
    expect_s3_class(ggpredict(fit, c("age", "Sex")), "data.frame")
    expect_s3_class(ggpredict(fit, "age", type = "re"), "data.frame")
    expect_s3_class(ggpredict(fit, c("age", "Sex"), type = "re"), "data.frame")
  })

  test_that("ggeffect, lme", {
    expect_s3_class(ggeffect(fit, "age"), "data.frame")
    expect_s3_class(ggeffect(fit, c("age", "Sex")), "data.frame")
  })

  m5 <- lmer(distance ~ age * Sex + (age | Subject), data = Orthodont)
  m6 <- lme(distance ~ age * Sex, data = Orthodont, random = ~ age | Subject)

  test_that("ggpredict, lme", {
    expect_s3_class(ggpredict(m5, c("age", "Sex")), "data.frame")
    expect_s3_class(ggpredict(m6, c("age", "Sex")), "data.frame")
    expect_s3_class(ggpredict(m5, c("age", "Sex"), type = "re"), "data.frame")
    expect_s3_class(ggpredict(m6, c("age", "Sex"), type = "re"), "data.frame")
  })
}

# ---- ggeffects/tests/testthat/test-lmrob_base.R ----

if (suppressWarnings(
  require("testthat") &&
    require("ggeffects") &&
    require("sjmisc") &&
    require("robustbase")
)) {
  data(efc)
  m1 <- lmrob(neg_c_7 ~ c12hour + e42dep + c161sex + c172code, data = efc)

  test_that("ggpredict, lmrob", {
    pr <- ggpredict(m1, "c12hour")
    expect_equal(pr$predicted[1], 11.02581, tolerance = 1e-4)
  })

  test_that("ggeffect, lmrob", {
    pr <- ggeffect(m1, "c12hour")
    expect_equal(pr$predicted[1], 11.02581, tolerance = 1e-4)
  })

  test_that("ggemmeans,
            lmrob", {
    expect_null(ggemmeans(m1, "c12hour"))
  })
}

# ---- ggeffects/tests/testthat/test-vgam.R ----

unloadNamespace("gam")

if (require("testthat") && require("ggeffects") && require("VGAM")) {
  data("hunua")
  m1 <- vgam(agaaus ~ vitluc + s(altitude, df = 2), binomialff, data = hunua)

  test_that("ggpredict", {
    p <- ggpredict(m1, "vitluc")
    expect_equal(p$predicted[1], 0.2745789, tolerance = 1e-3)
  })
}

# ---- ggeffects/tests/testthat/test-rstanarm.R ----

.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest) {
  if (suppressWarnings(
    require("testthat") &&
      require("lme4") &&
      require("sjmisc") &&
      require("rstanarm") &&
      require("ggeffects")
  )) {
    # fit linear model
    data(sleepstudy)
    data(efc)
    sleepstudy$age <- round(runif(nrow(sleepstudy), min = 20, max = 60))
    sleepstudy$Rdicho <- dicho(sleepstudy$Reaction)
    efc <- to_label(efc, e42dep, c161sex, c172code)

    m <- suppressWarnings(rstanarm::stan_glmer(
      Reaction ~ Days + age + (1 | Subject),
      data = sleepstudy,
      QR = TRUE,
      # this next line is only to keep the example small in size!
      chains = 2, cores = 1, seed = 12345, iter = 500, refresh = 0
    ))

    m2 <- suppressWarnings(rstanarm::stan_glmer(
      Rdicho ~ Days + age + (1 | Subject),
      data = sleepstudy,
      QR = TRUE,
      family = binomial,
      chains = 2, iter = 500, refresh = 0
    ))

    m3 <- suppressWarnings(rstanarm::stan_glm(
      tot_sc_e ~ neg_c_7 + e42dep + barthtot + c172code + c161sex,
      data = efc,
      family = poisson("log"),
      chains = 2, iter = 500
    ))

    test_that("ggpredict, rstan", {
      expect_s3_class(ggpredict(m, "Days"), "data.frame")
      expect_s3_class(ggpredict(m, c("Days", "age")), "data.frame")
      expect_s3_class(ggpredict(m, "Days", type = "re"), "data.frame")
      expect_s3_class(ggpredict(m, c("Days", "age"), type = "re"), "data.frame")
      expect_s3_class(ggpredict(m, "Days", ppd = TRUE), "data.frame")
      expect_s3_class(ggpredict(m, c("Days", "age"), ppd = TRUE), "data.frame")
      expect_s3_class(ggpredict(m, "Days", type = "re", ppd = TRUE), "data.frame")
      expect_s3_class(ggpredict(m, c("Days", "age"), type = "re", ppd = TRUE), "data.frame")
    })

    test_that("ggpredict, rstan", {
      expect_s3_class(ggpredict(m2, "Days"), "data.frame")
      expect_s3_class(ggpredict(m2, c("Days", "age")), "data.frame")
      expect_s3_class(ggpredict(m2, "Days", type = "re"), "data.frame")
      expect_s3_class(ggpredict(m2, c("Days", "age"), type = "re"), "data.frame")
      expect_s3_class(ggpredict(m2, "Days", ppd = TRUE), "data.frame")
      expect_s3_class(ggpredict(m2, c("Days", "age"), ppd = TRUE), "data.frame")
      expect_s3_class(ggpredict(m2, "Days", type = "re", ppd = TRUE), "data.frame")
      expect_s3_class(ggpredict(m2, c("Days", "age"), type = "re", ppd = TRUE), "data.frame")
    })

    test_that("ggpredict, rstan", {
      expect_s3_class(ggpredict(m3, "neg_c_7"), "data.frame")
      expect_s3_class(ggpredict(m3, c("neg_c_7", "e42dep")), "data.frame")
      expect_s3_class(ggpredict(m3, "neg_c_7", ppd = TRUE), "data.frame")
      expect_s3_class(ggpredict(m3, c("neg_c_7", "e42dep"), ppd = TRUE), "data.frame")
    })
  }
}
ggeffects/tests/testthat/test-print.R0000644000176200001440000002324314046746430017461 0ustar 
liggesusers.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes" if (.runThisTest && suppressWarnings( require("testthat") && require("ggeffects") && require("haven") && require("sjlabelled") && require("sjmisc") )) { # lm, linear regression ---- data(efc) efc$c172code <- to_label(efc$c172code) efc$e42dep <- to_label(efc$e42dep) efc$c82cop1 <- as.numeric(efc$c82cop1) fit <- lm(barthtot ~ c12hour + neg_c_7 + c82cop1 + e42dep + c161sex + c172code, data = efc) test_that("ggpredict, print", { ggpredict(fit, terms = "c12hour") ggpredict(fit, terms = "c172code") ggpredict(fit, terms = "c161sex") ggpredict(fit, terms = c("c12hour", "c172code")) ggpredict(fit, terms = c("c12hour", "c161sex")) ggpredict(fit, terms = c("e42dep", "c161sex")) ggpredict(fit, terms = c("e42dep", "c172code")) ggpredict(fit, terms = c("c12hour", "c172code", "c161sex")) ggpredict(fit, terms = c("e42dep", "c172code", "c161sex")) ggpredict(fit, terms = c("c12hour", "c172code", "e42dep")) ggpredict(fit, terms = c("c161sex", "c172code", "e42dep")) ggpredict(fit, terms = c("c12hour", "neg_c_7")) ggpredict(fit, terms = c("c12hour", "neg_c_7 [all]")) ggpredict(fit, terms = c("c12hour", "neg_c_7 [quart2]")) ggpredict(fit, terms = c("c12hour", "neg_c_7 [quart2]", "c161sex")) ggpredict(fit, terms = c("c12hour", "neg_c_7", "c161sex")) out <- utils::capture.output(ggpredict(fit, terms = c("c12hour", "neg_c_7 [quart2]", "c82cop1"))) expect_equal( out, c("# Predicted values of Total score BARTHEL INDEX", "", "# neg_c_7 = 9", "# c82cop1 = 1", "", "c12hour | Predicted | 95% CI", "-------------------------------------", " 0 | 95.03 | [87.82, 102.24]", " 45 | 91.98 | [84.68, 99.29]", " 85 | 89.28 | [81.72, 96.83]", " 170 | 83.52 | [74.97, 92.07]", "", "# neg_c_7 = 11", "# c82cop1 = 1", "", "c12hour | Predicted | 95% CI", "-------------------------------------", " 0 | 93.03 | [85.96, 100.10]", " 45 | 89.98 | [82.83, 97.13]", " 85 | 87.27 | [79.88, 94.67]", " 170 | 81.52 | [73.13, 89.90]", "", "# neg_c_7 = 14", 
"# c82cop1 = 1", "", "c12hour | Predicted | 95% CI", "------------------------------------", " 0 | 90.02 | [83.05, 97.00]", " 45 | 86.98 | [79.93, 94.02]", " 85 | 84.27 | [76.99, 91.54]", " 170 | 78.51 | [70.25, 86.77]", "", "# neg_c_7 = 9", "# c82cop1 = 2", "", "c12hour | Predicted | 95% CI", "-------------------------------------", " 0 | 94.45 | [88.66, 100.24]", " 45 | 91.40 | [85.53, 97.27]", " 85 | 88.69 | [82.53, 94.85]", " 170 | 82.93 | [75.64, 90.23]", "", "# neg_c_7 = 11", "# c82cop1 = 2", "", "c12hour | Predicted | 95% CI", "------------------------------------", " 0 | 92.44 | [86.74, 98.14]", " 45 | 89.40 | [83.63, 95.17]", " 85 | 86.69 | [80.64, 92.74]", " 170 | 80.93 | [73.75, 88.12]", "", "# neg_c_7 = 14", "# c82cop1 = 2", "", "c12hour | Predicted | 95% CI", "------------------------------------", " 0 | 89.44 | [83.71, 95.17]", " 45 | 86.39 | [80.61, 92.17]", " 85 | 83.68 | [77.65, 89.72]", " 170 | 77.93 | [70.78, 85.07]", "", "# neg_c_7 = 9", "# c82cop1 = 3", "", "c12hour | Predicted | 95% CI", "------------------------------------", " 0 | 93.86 | [88.88, 98.84]", " 45 | 90.82 | [85.78, 95.86]", " 85 | 88.11 | [82.77, 93.45]", " 170 | 82.35 | [75.77, 88.93]", "", "# neg_c_7 = 11", "# c82cop1 = 3", "", "c12hour | Predicted | 95% CI", "------------------------------------", " 0 | 91.86 | [86.88, 96.84]", " 45 | 88.81 | [83.78, 93.84]", " 85 | 86.10 | [80.79, 91.42]", " 170 | 80.35 | [73.81, 86.88]", "", "# neg_c_7 = 14", "# c82cop1 = 3", "", "c12hour | Predicted | 95% CI", "------------------------------------", " 0 | 88.86 | [83.68, 94.03]", " 45 | 85.81 | [80.61, 91.00]", " 85 | 83.10 | [77.65, 88.55]", " 170 | 77.34 | [70.73, 83.95]", "", "# neg_c_7 = 9", "# c82cop1 = 4", "", "c12hour | Predicted | 95% CI", "------------------------------------", " 0 | 93.28 | [88.19, 98.36]", " 45 | 90.23 | [85.12, 95.34]", " 85 | 87.52 | [82.14, 92.90]", " 170 | 81.77 | [75.21, 88.33]", "", "# neg_c_7 = 11", "# c82cop1 = 4", "", "c12hour | Predicted | 95% CI", 
"------------------------------------", " 0 | 91.28 | [86.08, 96.47]", " 45 | 88.23 | [83.02, 93.43]", " 85 | 85.52 | [80.06, 90.98]", " 170 | 79.76 | [73.17, 86.36]", "", "# neg_c_7 = 14", "# c82cop1 = 4", "", "c12hour | Predicted | 95% CI", "------------------------------------", " 0 | 88.27 | [82.75, 93.79]", " 45 | 85.22 | [79.71, 90.73]", " 85 | 82.52 | [76.79, 88.24]", " 170 | 76.76 | [69.97, 83.55]", "", "Adjusted for:", "* e42dep = independent", "* c161sex = 1.76", "* c172code = low level of education"), ignore_attr = TRUE) out <- utils::capture.output(ggpredict(fit, terms = c("c12hour", "neg_c_7", "c82cop1"))) expect_equal( out, c("# Predicted values of Total score BARTHEL INDEX", "", "# neg_c_7 = 8", "# c82cop1 = 1", "", "c12hour | Predicted | 95% CI", "-------------------------------------", " 0 | 96.03 | [88.72, 103.34]", " 45 | 92.99 | [85.58, 100.39]", " 85 | 90.28 | [82.62, 97.93]", " 170 | 84.52 | [75.87, 93.17]", "", "# neg_c_7 = 11.8", "# c82cop1 = 1", "", "c12hour | Predicted | 95% CI", "------------------------------------", " 0 | 92.23 | [85.20, 99.26]", " 45 | 89.18 | [82.08, 96.28]", " 85 | 86.47 | [79.12, 93.82]", " 170 | 80.71 | [72.37, 89.06]", "", "# neg_c_7 = 15.7", "# c82cop1 = 1", "", "c12hour | Predicted | 95% CI", "------------------------------------", " 0 | 88.32 | [81.32, 95.32]", " 45 | 85.27 | [78.22, 92.33]", " 85 | 82.57 | [75.29, 89.84]", " 170 | 76.81 | [68.57, 85.05]", "", "# neg_c_7 = 8", "# c82cop1 = 2", "", "c12hour | Predicted | 95% CI", "-------------------------------------", " 0 | 95.45 | [89.58, 101.31]", " 45 | 92.40 | [86.45, 98.35]", " 85 | 89.69 | [83.45, 95.93]", " 170 | 83.94 | [76.56, 91.32]", "", "# neg_c_7 = 11.8", "# c82cop1 = 2", "", "c12hour | Predicted | 95% CI", "------------------------------------", " 0 | 91.64 | [85.95, 97.33]", " 45 | 88.60 | [82.84, 94.35]", " 85 | 85.89 | [79.86, 91.91]", " 170 | 80.13 | [72.97, 87.29]", "", "# neg_c_7 = 15.7", "# c82cop1 = 2", "", "c12hour | Predicted | 95% CI", 
"------------------------------------", " 0 | 87.74 | [81.90, 93.57]", " 45 | 84.69 | [78.82, 90.56]", " 85 | 81.98 | [75.87, 88.09]", " 170 | 76.22 | [69.04, 83.41]", "", "# neg_c_7 = 8", "# c82cop1 = 3", "", "c12hour | Predicted | 95% CI", "------------------------------------", " 0 | 94.86 | [89.85, 99.88]", " 45 | 91.82 | [86.73, 96.90]", " 85 | 89.11 | [83.72, 94.50]", " 170 | 83.35 | [76.73, 89.98]", "", "# neg_c_7 = 11.8", "# c82cop1 = 3", "", "c12hour | Predicted | 95% CI", "------------------------------------", " 0 | 91.06 | [86.04, 96.07]", " 45 | 88.01 | [82.96, 93.06]", " 85 | 85.30 | [79.97, 90.64]", " 170 | 79.55 | [73.01, 86.08]", "", "# neg_c_7 = 15.7", "# c82cop1 = 3", "", "c12hour | Predicted | 95% CI", "------------------------------------", " 0 | 87.15 | [81.78, 92.53]", " 45 | 84.11 | [78.73, 89.48]", " 85 | 81.40 | [75.78, 87.01]", " 170 | 75.64 | [68.91, 82.37]", "", "# neg_c_7 = 8", "# c82cop1 = 4", "", "c12hour | Predicted | 95% CI", "------------------------------------", " 0 | 94.28 | [89.21, 99.35]", " 45 | 91.23 | [86.13, 96.34]", " 85 | 88.52 | [83.14, 93.90]", " 170 | 82.77 | [76.20, 89.34]", "", "# neg_c_7 = 11.8", "# c82cop1 = 4", "", "c12hour | Predicted | 95% CI", "------------------------------------", " 0 | 90.47 | [85.21, 95.74]", " 45 | 87.43 | [82.16, 92.69]", " 85 | 84.72 | [79.21, 90.23]", " 170 | 78.96 | [72.33, 85.59]", "", "# neg_c_7 = 15.7", "# c82cop1 = 4", "", "c12hour | Predicted | 95% CI", "------------------------------------", " 0 | 86.57 | [80.78, 92.36]", " 45 | 83.52 | [77.76, 89.28]", " 85 | 80.81 | [74.85, 86.77]", " 170 | 75.06 | [68.09, 82.03]", "", "Adjusted for:", "* e42dep = independent", "* c161sex = 1.76", "* c172code = low level of education"), ignore_attr = TRUE) }) } ggeffects/tests/testthat/test-backtransform_response.R0000644000176200001440000000316413754272616023104 0ustar liggesusersif (require("testthat") && require("ggeffects") && require("emmeans")) { reprex <- data.frame( time = 
      as.factor(c(1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5)),
    group = c(1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2),
    id = c(10, 10, 10, 10, 10, 15, 15, 15, 15, 15, 20, 20, 20, 20, 20, 25, 25, 25, 25, 25, 30, 30, 30, 30, 30, 35, 35, 35, 35, 35),
    factor1 = as.factor(c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)),
    factor2 = as.factor(c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)),
    score = c(7, 7, 7, 4, 2, 2, 2, 2, 1, 1, 5, 5, 7, 8, 6, 0, 2, 3, 3, 3, 6, 8, 8, 8, 8, 2, 4, 4, 3, 2)
  )

  m1 <- lm(sqrt(score + 5) ~ as.numeric(time) * group, data = reprex)
  m2 <- lm(log(score + 1) ~ as.numeric(time) * group, data = reprex)

  test_that("ggpredict-log-response", {
    p1 <- suppressMessages(ggpredict(m1, c("time", "group")))
    p2 <- as.data.frame(emmeans(m1, c("time", "group"), at = list(time = 1:5), type = "response"))
    expect_equal(p1$predicted[1], p2$response[1], tolerance = 1e-3)
    expect_equal(p1$predicted[1], 6.677575, tolerance = 1e-3)
  })

  test_that("ggpredict-sqrt-response", {
    p1 <- suppressMessages(ggpredict(m2, c("time", "group")))
    p2 <- as.data.frame(emmeans(m2, c("time", "group"), at = list(time = 1:5), type = "response"))
    expect_equal(p1$predicted[1], p2$response[1], tolerance = 1e-3)
    expect_equal(p1$predicted[1], 6.743365, tolerance = 1e-3)
  })
}

# ---- ggeffects/tests/testthat/test-zi_prob.R ----

.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest &&
    require("testthat") &&
    require("ggeffects") &&
    require("GLMMadaptive") &&
    require("glmmTMB") &&
    require("pscl")) {
  data(fish)

  set.seed(123)
  m1 <- GLMMadaptive::mixed_model(
    count ~ child + camper,
    random = ~ 1 | persons,
    zi_fixed = ~ child + livebait,
    zi_random = ~ 1 | persons,
    data = fish,
    family = GLMMadaptive::zi.poisson()
  )

  m2 <- glmmTMB(
    count ~ child + camper + (1 |
                                persons),
    ziformula = ~ child + livebait + (1 | persons),
    data = fish,
    family = poisson()
  )

  data(Salamanders)
  m3 <- zeroinfl(count ~ mined | mined, dist = "poisson", data = Salamanders)

  set.seed(123)
  nd <- new_data(m1, "livebait")
  p1 <- predict(m1, newdata = nd, type_pred = "response", type = "zero_part")
  p2 <- suppressWarnings(ggpredict(m1, "livebait", type = "zi_prob"))

  test_that("ggpredict", {
    expect_equal(unname(p1), p2$predicted, tolerance = 1e-3)
  })

  set.seed(123)
  nd <- new_data(m2, "livebait")
  p1 <- predict(m2, newdata = nd, type = "zprob")
  p2 <- suppressWarnings(ggpredict(m2, "livebait", type = "zi_prob"))

  test_that("ggpredict", {
    expect_equal(unname(p1), p2$predicted, tolerance = 1e-3)
  })

  set.seed(123)
  nd <- new_data(m3, "mined")
  p1 <- predict(m3, newdata = nd, type = "zero")
  p2 <- suppressWarnings(ggpredict(m3, "mined", type = "zi_prob"))

  test_that("ggpredict", {
    expect_equal(unname(p1), p2$predicted, tolerance = 1e-3)
  })

  set.seed(123)
  p3 <- suppressWarnings(ggemmeans(m3, "mined", type = "zi_prob"))

  test_that("ggpredict", {
    expect_equal(p3$predicted, c(0.8409091, 0.3809524), tolerance = 1e-3)
  })
}

# ---- ggeffects/tests/testthat/test-contrasts.R ----

if (suppressWarnings(
  require("testthat") &&
    require("ggeffects") &&
    require("lme4") &&
    require("sjlabelled")
)) {
  data(efc)
  efc$e15relat <- as_label(efc$e15relat)
  efc$e42dep <- as_label(efc$e42dep)
  efc$c172code <- as.factor(efc$c172code)

  m <- lmer(neg_c_7 ~ e42dep + c172code + c12hour + c82cop1 + (1 | e15relat), data = efc)

  test_that("ggpredict, contrasts-1", {
    options(contrasts = rep("contr.sum", 2))
    pr <- ggpredict(m, c("c12hour", "e42dep"))
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-2", {
    options(contrasts = rep("contr.sum", 2))
    pr <- ggpredict(m, c("c12hour", "c172code"))
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-3", {
    options(contrasts = rep("contr.sum", 2))
    pr <- ggpredict(m, c("c12hour",
"c82cop1")) expect_false(anyNA(pr$std.error)) }) test_that("ggpredict, contrasts-4", { options(contrasts = rep("contr.treatment", 2)) pr <- ggpredict(m, c("c12hour", "e42dep")) expect_false(anyNA(pr$std.error)) }) test_that("ggpredict, contrasts-5", { options(contrasts = rep("contr.treatment", 2)) pr <- ggpredict(m, c("c12hour", "c172code")) expect_false(anyNA(pr$std.error)) }) test_that("ggpredict, contrasts-6", { options(contrasts = rep("contr.treatment", 2)) pr <- ggpredict(m, c("c12hour", "c82cop1")) expect_false(anyNA(pr$std.error)) }) } ggeffects/tests/testthat/test-clm.R0000644000176200001440000000153714036250437017076 0ustar liggesusers.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes" if (.runThisTest && getRversion() >= "3.6.0") { if (require("testthat") && require("ggeffects") && require("ordinal") && require("MASS")) { data(wine, package = "ordinal") m1 <- clm(rating ~ temp * contact, data = wine) test_that("ggpredict", { p <- ggpredict(m1, "temp") expect_equal(p$predicted[1], 0.1960351, tolerance = 1e-3) ggpredict(m1, c("temp", "contact")) }) test_that("ggeffect", { p <- ggeffect(m1, "temp") expect_equal(p$predicted[1], 0.110564082334497, tolerance = 1e-3) ggeffect(m1, c("temp", "contact")) }) test_that("ggemmeans", { p <- ggemmeans(m1, "contact") expect_equal(p$predicted[1], 0.1097049, tolerance = 1e-3) ggemmeans(m1, c("temp", "contact")) }) } } ggeffects/tests/testthat/test-logistf.R0000644000176200001440000000106114036250437017762 0ustar liggesusersif (suppressWarnings( require("testthat") && require("ggeffects") && require("logistf") && getRversion() >= "3.6.0")) { data(sex2) m1 <- logistf(case ~ age + oc, data = sex2) test_that("ggpredict, logistf", { pr <- ggpredict(m1, "age") expect_equal(pr$predicted[1], 0.5763746, tolerance = 1e-4) }) test_that("ggeffect, logistf", { pr <- ggeffect(m1, "age") expect_equal(pr$predicted[1], 0.5762638, tolerance = 1e-4) }) test_that("ggemmeans, logistf", { expect_null(ggemmeans(m1, "age")) }) } 
ggeffects/tests/testthat/test-clm2.R0000644000176200001440000000126714036250437017160 0ustar liggesusers.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes" if (.runThisTest && getRversion() >= "3.6.0") { if (require("testthat") && require("ggeffects") && require("ordinal") && require("MASS")) { data(housing) m1 <- clm2(Sat ~ Infl + Type + Cont, weights = Freq, data = housing) test_that("ggpredict", { expect_error(ggpredict(m1, "Infl")) }) test_that("ggeffect", { p <- ggeffect(m1, "Infl") expect_equal(p$predicted[1], 0.457877729905463, tolerance = 1e-3) expect_s3_class(ggeffect(m1, c("Infl", "Type")), "data.frame") }) test_that("ggemmeans", { expect_error(ggemmeans(m1, "Infl")) }) } } ggeffects/tests/testthat/test-negbin.R0000644000176200001440000000300513753253775017572 0ustar liggesusersif (suppressWarnings( require("testthat") && require("ggeffects") && require("sjmisc") && require("MASS") )) { data(efc) efc$e42dep <- to_label(efc$e42dep) fit <- glm.nb( tot_sc_e ~ neg_c_7 + I(neg_c_7 ^ 2) + neg_c_7:e42dep + I(neg_c_7 ^ 2):e42dep + c12hour + c172code, data = efc, init.theta = 1.133641349, link = log ) test_that("ggpredict, negbin", { expect_s3_class(ggpredict(fit, "neg_c_7"), "data.frame") expect_s3_class(ggeffect(fit, "neg_c_7"), "data.frame") # still fails on windows old-rel, so re-activate once emmeans is built on all platforms # expect_null(ggemmeans(fit, "neg_c_7")) expect_s3_class(ggpredict(fit, c("neg_c_7", "e42dep")), "data.frame") expect_s3_class(ggeffect(fit, c("neg_c_7", "e42dep")), "data.frame") # expect_null(ggemmeans(fit, c("neg_c_7", "e42dep"))) }) data(efc) fit <- glm.nb( tot_sc_e ~ neg_c_7 + I(neg_c_7 ^ 2) + neg_c_7:e42dep + I(neg_c_7 ^ 2):e42dep + c12hour + c172code, data = efc, init.theta = 1.133641349, link = log ) test_that("ggpredict, negbin", { expect_s3_class(ggpredict(fit, "neg_c_7"), "data.frame") expect_s3_class(ggeffect(fit, "neg_c_7"), "data.frame") # expect_null(ggemmeans(fit, "neg_c_7")) expect_s3_class(ggpredict(fit, 
c("neg_c_7", "e42dep")), "data.frame") expect_s3_class(ggeffect(fit, c("neg_c_7", "e42dep")), "data.frame") # expect_null(ggemmeans(fit, c("neg_c_7", "e42dep"))) }) } ggeffects/tests/testthat/test-extract_values.R0000644000176200001440000000063513774144220021352 0ustar liggesusersif (require("testthat") && require("ggeffects")) { test_that("values_at / pretty_range", { x <- 1:1000 expect_equal(pretty_range(n = 5)(x), pretty_range(x, n = 5)) expect_equal(values_at(values = "meansd")(x), values_at(x, values = "meansd")) expect_equal(values_at(values = "minmax")(x), values_at(x, values = "minmax")) }) } ggeffects/tests/testthat/test-poly-zeroinf.R0000644000176200001440000000317313753253775020773 0ustar liggesusersif (suppressWarnings( require("testthat") && require("ggeffects") && require("glmmTMB") && require("pscl") )) { data(Salamanders) m1 <- glmmTMB( count ~ spp + poly(cover, 3) + mined + (1 | site), ziformula = ~DOY, dispformula = ~spp, data = Salamanders, family = nbinom2 ) m2 <- suppressWarnings(glmmTMB( count ~ spp + poly(cover, 3) + mined + (1 | site), ziformula = ~poly(DOY, 3), dispformula = ~spp, data = Salamanders, family = nbinom2 )) m3 <- zeroinfl(count ~ spp + poly(cover, 3) + mined | DOY, data = Salamanders) m4 <- zeroinfl(count ~ spp + poly(cover, 3) + mined | poly(DOY, 3), data = Salamanders) test_that("ggpredict, glmmTMB", { pr <- ggpredict(m1, c("cover", "mined", "spp"), type = "fe.zi") expect_equal(ncol(pr), 7) expect_equal( colnames(pr), c("x", "predicted", "std.error", "conf.low", "conf.high", "group", "facet") ) pr <- ggpredict(m1, c("mined", "spp"), type = "fe.zi") expect_equal(ncol(pr), 6) pr <- ggpredict(m2, c("cover", "mined", "spp"), type = "fe.zi") expect_equal(ncol(pr), 7) pr <- ggpredict(m2, c("mined", "spp"), type = "fe.zi") expect_equal(ncol(pr), 6) pr <- ggpredict(m3, c("mined", "spp"), type = "fe.zi") expect_equal(ncol(pr), 6) pr <- ggpredict(m3, c("cover", "mined", "spp"), type = "fe.zi") expect_equal(ncol(pr), 7) pr <- 
ggpredict(m4, c("mined", "spp"), type = "fe.zi") expect_equal(ncol(pr), 6) pr <- ggpredict(m4, c("cover", "mined", "spp"), type = "fe.zi") expect_equal(ncol(pr), 7) }) } ggeffects/tests/testthat/test-glmmTMB.R0000644000176200001440000002174513753253775017642 0ustar liggesusers.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes" if (.runThisTest) { if (require("testthat") && require("ggeffects") && require("glmmTMB")) { data(Owls) data(Salamanders) m1 <- glmmTMB(SiblingNegotiation ~ SexParent + ArrivalTime + (1 | Nest), data = Owls, family = nbinom1) m2 <- glmmTMB(SiblingNegotiation ~ SexParent + ArrivalTime + (1 | Nest), data = Owls, family = nbinom2) m4 <- glmmTMB(SiblingNegotiation ~ FoodTreatment + ArrivalTime + SexParent + (1 | Nest), data = Owls, ziformula = ~ 1, family = truncated_poisson(link = "log")) test_that("ggpredict, glmmTMB", { expect_s3_class(ggpredict(m1, c("ArrivalTime", "SexParent")), "data.frame") expect_s3_class(ggpredict(m2, c("ArrivalTime", "SexParent")), "data.frame") expect_s3_class(ggpredict(m4, c("FoodTreatment", "ArrivalTime [21,24,30]", "SexParent")), "data.frame") expect_s3_class(ggpredict(m1, c("ArrivalTime", "SexParent"), type = "re"), "data.frame") expect_s3_class(ggpredict(m2, c("ArrivalTime", "SexParent"), type = "re"), "data.frame") expect_s3_class(ggpredict(m4, c("FoodTreatment", "ArrivalTime [21,24,30]", "SexParent"), type = "re"), "data.frame") }) test_that("ggpredict, glmmTMB", { expect_message(ggpredict(m1, c("ArrivalTime", "SexParent"), type = "fe.zi")) }) test_that("ggpredict, glmmTMB", { p1 <- ggpredict(m1, c("ArrivalTime", "SexParent")) p2 <- ggpredict(m2, c("ArrivalTime", "SexParent")) p3 <- ggemmeans(m1, c("ArrivalTime", "SexParent")) p4 <- ggemmeans(m2, c("ArrivalTime", "SexParent")) expect_equal(p1$predicted[1], p3$predicted[1], tolerance = 1e-3) expect_equal(p2$predicted[1], p4$predicted[1], tolerance = 1e-3) }) m3 <- glmmTMB(count ~ spp + mined + (1 | site), ziformula = ~ spp + mined, family = 
truncated_poisson, data = Salamanders) m4 <- glmmTMB(count ~ spp + mined + (1 | site), ziformula = ~ spp + mined + (1 | site), family = truncated_poisson, data = Salamanders) m5 <- glmmTMB(count ~ spp + mined + cover + (1 | site), ziformula = ~ spp + mined, family = truncated_poisson, data = Salamanders) test_that("ggpredict, glmmTMB", { p1 <- ggpredict(m3, "mined", type = "fe") p2 <- ggpredict(m3, "mined", type = "fe.zi") p3 <- ggpredict(m3, "mined", type = "re") p4 <- ggpredict(m3, "mined", type = "re.zi") expect_gt(p3$conf.high[1], p1$conf.high[1]) expect_gt(p4$conf.high[1], p2$conf.high[1]) expect_s3_class(ggpredict(m3, "mined", type = "fe.zi", nsim = 50), "data.frame") }) test_that("ggpredict, glmmTMB", { p1 <- ggpredict(m5, c("mined", "spp", "cover"), type = "fe") p2 <- ggpredict(m5, c("mined", "spp", "cover"), type = "fe.zi") p3 <- ggemmeans(m5, c("mined", "spp", "cover"), type = "fe") p4 <- ggemmeans(m5, c("mined", "spp", "cover"), type = "fe.zi") expect_equal(p1$predicted[1], p3$predicted[1], tolerance = 1e-3) expect_equal(p2$predicted[1], p4$predicted[1], tolerance = 1e-3) }) test_that("ggpredict, glmmTMB", { p1 <- ggpredict(m3, "mined", type = "fe") p2 <- ggpredict(m3, c("mined", "spp"), type = "fe.zi") p3 <- ggemmeans(m3, "mined", type = "fe", condition = c(spp = "GP")) p4 <- ggemmeans(m3, c("mined", "spp"), type = "fe.zi") p5 <- ggpredict(m3, c("mined", "spp"), type = "fe") p6 <- ggemmeans(m3, c("mined", "spp"), type = "fe") expect_equal(p1$predicted[1], p3$predicted[1], tolerance = 1e-3) expect_equal(p2$predicted[1], p4$predicted[1], tolerance = 1e-3) expect_equal(p5$predicted[1], p6$predicted[1], tolerance = 1e-3) }) test_that("ggpredict, glmmTMB", { p1 <- ggpredict(m4, "mined", type = "fe") p2 <- ggpredict(m4, "mined", type = "fe.zi") p3 <- ggpredict(m4, "mined", type = "re") p4 <- ggpredict(m4, "mined", type = "re.zi") expect_gt(p3$conf.high[1], p1$conf.high[1]) expect_gt(p4$conf.high[1], p2$conf.high[1]) p1 <- ggpredict(m4, c("spp", "mined"), type 
= "fe") p2 <- ggpredict(m4, c("spp", "mined"), type = "fe.zi") p3 <- ggpredict(m4, c("spp", "mined"), type = "re") p4 <- ggpredict(m4, c("spp", "mined"), type = "re.zi") expect_gt(p3$conf.high[1], p1$conf.high[1]) expect_gt(p4$conf.high[1], p2$conf.high[1]) }) test_that("ggpredict, glmmTMB", { p <- ggpredict(m3, "spp", type = "fe.zi") expect_true(all(p$conf.low > 0)) set.seed(100) p <- ggpredict(m3, "spp", type = "fe.zi") expect_true(all(p$conf.low > 0)) }) test_that("ggpredict, glmmTMB-simulate", { expect_s3_class(ggpredict(m3, "mined", type = "sim"), "data.frame") expect_s3_class(ggpredict(m3, c("spp", "mined"), type = "sim"), "data.frame") expect_s3_class(ggpredict(m4, "mined", type = "sim"), "data.frame") expect_s3_class(ggpredict(m4, c("spp", "mined"), type = "sim"), "data.frame") }) md <- glmmTMB( count ~ spp + mined + (1 | site), ziformula = ~ spp + mined, dispformula = ~ DOY, family = truncated_poisson, data = Salamanders ) test_that("ggpredict, glmmTMB", { p1 <- ggpredict(md, c("spp", "mined"), type = "fe") p2 <- ggpredict(md, c("spp", "mined"), type = "fe.zi") p3 <- suppressWarnings(ggpredict(md, c("spp", "mined"), type = "re")) p4 <- suppressWarnings(ggpredict(md, c("spp", "mined"), type = "re.zi")) expect_gt(p3$conf.high[1], p1$conf.high[1]) expect_gt(p4$conf.high[1], p2$conf.high[1]) }) data(efc_test) m5 <- glmmTMB( negc7d ~ c12hour + e42dep + c161sex + c172code + (1 | grp), data = efc_test, ziformula = ~ c172code, family = binomial(link = "logit") ) test_that("ggpredict, glmmTMB", { expect_s3_class(ggpredict(m5, "c161sex", type = "fe"), "data.frame") expect_s3_class(ggpredict(m5, "c161sex", type = "fe.zi"), "data.frame") expect_s3_class(ggpredict(m5, "c161sex", type = "re"), "data.frame") expect_s3_class(ggpredict(m5, "c161sex", type = "re.zi"), "data.frame") }) data(efc_test) m6 <- glmmTMB( negc7d ~ c12hour + e42dep + c161sex + c172code + (1 | grp), data = efc_test, family = binomial(link = "logit") ) test_that("ggpredict, glmmTMB", { 
expect_s3_class(ggpredict(m6, "c161sex", type = "fe"), "data.frame") expect_s3_class(ggpredict(m6, "c161sex", type = "re"), "data.frame") }) data(efc_test) efc_test$tot_sc_e <- as.numeric(efc_test$tot_sc_e) efc_test$c172code <- as.factor(efc_test$c172code) m7 <- glmmTMB( tot_sc_e ~ neg_c_7 * c172code + c161sex + (1 | grp), data = efc_test, ziformula = ~ c172code, family = nbinom1 ) test_that("ggpredict, glmmTMB", { expect_s3_class(ggpredict(m7, "neg_c_7"), "data.frame") expect_s3_class(ggpredict(m7, "neg_c_7 [all]"), "data.frame") expect_s3_class(ggpredict(m7, "neg_c_7", type = "fe.zi"), "data.frame") expect_s3_class(ggpredict(m7, "neg_c_7 [all]", type = "fe.zi"), "data.frame") expect_s3_class(ggpredict(m7, c("neg_c_7", "c172code")), "data.frame") expect_s3_class(ggpredict(m7, c("neg_c_7 [all]", "c172code")), "data.frame") expect_s3_class(ggpredict(m7, c("neg_c_7", "c172code"), type = "fe.zi"), "data.frame") expect_s3_class(ggpredict(m7, c("neg_c_7 [all]", "c172code"), type = "fe.zi"), "data.frame") }) m8 <- glmmTMB( tot_sc_e ~ neg_c_7 * c172code + (1 | grp), data = efc_test, ziformula = ~ c172code, family = nbinom1 ) test_that("ggpredict, glmmTMB", { expect_s3_class(ggpredict(m8, "neg_c_7"), "data.frame") expect_s3_class(ggpredict(m8, "neg_c_7 [all]"), "data.frame") expect_s3_class(ggpredict(m8, "neg_c_7", type = "fe.zi"), "data.frame") expect_s3_class(ggpredict(m8, "neg_c_7 [all]", type = "fe.zi"), "data.frame") expect_s3_class(ggpredict(m8, c("neg_c_7", "c172code")), "data.frame") expect_s3_class(ggpredict(m8, c("neg_c_7 [all]", "c172code")), "data.frame") expect_s3_class(ggpredict(m8, c("neg_c_7", "c172code"), type = "fe.zi"), "data.frame") expect_s3_class(ggpredict(m8, c("neg_c_7 [all]", "c172code"), type = "fe.zi"), "data.frame") }) data(Salamanders) m9 <- glmmTMB( count ~ spp + cover + mined + (1 | site), ziformula = ~ DOY, dispformula = ~ spp, data = Salamanders, family = nbinom2 ) test_that("ggpredict, glmmTMB", { expect_s3_class(ggpredict(m9, c("cover", 
"mined", "spp"), type = "fe"), "data.frame") expect_s3_class(ggpredict(m9, c("cover", "mined", "spp"), type = "fe.zi"), "data.frame") expect_s3_class(suppressWarnings(ggpredict(m9, c("cover", "mined", "spp"), type = "re")), "data.frame") expect_s3_class(suppressWarnings(ggpredict(m9, c("cover", "mined", "spp"), type = "re.zi")), "data.frame") }) } } ggeffects/tests/testthat/test-survey.R0000644000176200001440000000166214100515051017643 0ustar liggesusersif (suppressWarnings( require("testthat") && require("ggeffects") && require("sjlabelled") && require("survey") && require("sjstats") && require("sjmisc") )) { # svyglm ----- data(nhanes_sample) nhanes_sample$total <- dicho(nhanes_sample$total) # create survey design des <- svydesign( id = ~SDMVPSU, strat = ~SDMVSTRA, weights = ~WTINT2YR, nest = TRUE, data = nhanes_sample ) # fit negative binomial regression fit <- suppressWarnings(svyglm(total ~ RIAGENDR + age + RIDRETH1, des, family = binomial(link = "logit"))) test_that("ggpredict, svyglm", { expect_s3_class(ggpredict(fit, "age"), "data.frame") expect_s3_class(ggpredict(fit, c("age", "RIAGENDR")), "data.frame") }) test_that("ggeffect, svyglm", { expect_s3_class(ggeffect(fit, "age"), "data.frame") expect_s3_class(ggeffect(fit, c("age", "RIAGENDR")), "data.frame") }) } ggeffects/tests/testthat/test-correct_se_sorting.R0000644000176200001440000000466414046746430022230 0ustar liggesusers.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes" if (.runThisTest) { if (suppressWarnings( require("testthat") && require("ggeffects") && require("lme4") && require("sjmisc") )) { set.seed(123) dat <- data.frame( outcome = rbinom(n = 100, size = 1, prob = 0.35), var_binom = as.factor(rbinom(n = 100, size = 1, prob = 0.2)), var_cont = rnorm(n = 100, mean = 10, sd = 7), group = sample(letters[1:4], size = 100, replace = TRUE) ) dat$var_cont <- sjmisc::std(dat$var_cont) m1 <- glmer( outcome ~ var_binom + var_cont + (1 | group), data = dat, family = binomial(link = "logit") ) 
test_that("se-sorting", { pr <- ggpredict(m1, "var_cont") expect_equal(pr$predicted, c(0.336719595864838, 0.343075324628438, 0.349487808877511, 0.355955205473809, 0.362475595153102, 0.369046984214646, 0.375667306420833, 0.38233442510547, 0.389046135488166, 0.395800167191347, 0.402594186955394, 0.409425801546448, 0.41629256085041), tolerance = 1e-4) expect_equal(pr$std.error, c(0.618699912753018, 0.526519784780116, 0.441130838598037, 0.367300396177996, 0.313309075157131, 0.290440016857388, 0.305758174839891, 0.354345242445446, 0.424938667902817, 0.508453560698829, 0.599513975290497, 0.695161003669588, 0.793738286055424), tolerance = 1e-4) }) m2 <- glmer( outcome ~ var_binom * poly(var_cont, degree = 2, raw = TRUE) + (1 | group), data = dat, family = binomial(link = "logit") ) test_that("se-sorting", { pr <- ggpredict(m2, c("var_cont", "var_binom")) expect_equal( pr$predicted[1:10], c(0.166784864613204, 0.0615489873135249, 0.224095606762232, 0.141279183688389, 0.281091987683061, 0.250512693421789, 0.331840641853432, 0.354819954638596, 0.371819795779451, 0.422913790544266), tolerance = 1e-4) expect_equal( pr$std.error[1:10], c(1.34423391467447, 3.65581221675649, 0.920590886385926, 2.34007695224355, 0.595294475516507, 1.35709636952096, 0.384285954721907, 0.760109860798146, 0.302556537107688, 0.594810096113016), tolerance = 1e-4) }) } } ggeffects/tests/testthat/test-decimals.R0000644000176200001440000001532214036250437020101 0ustar liggesusers.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes" if (.runThisTest && suppressWarnings( require("testthat") && require("ggeffects") && require("emmeans") && require("effects") && require("MASS") )) { y <- c(180,111, 0, 60, 317, 224, 76, 760, 1, 64, 80, 50, 147, 324, 149, 487, 222, 222, 31, 2, 131, 198, 256, 364, 544, 789, 4, 113, 52, 375, 444, 239, 1033, 303, 129, 118, 210, 99, 398, 101, 291, 154, 244, 519, 99, 0, 13, 5, 416, 489, 462, 244, 74, 276, 1270, 81, 375, 1254, 36, 368, 114, 89, 248, 115) trat <- c("FERTIL", 
"FERTIL", "FERTIL", "CTRL", "CTRL", "CTRL", "CTRL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "CTRL", "CTRL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "CTRL", "CTRL", "CTRL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "CTRL", "CTRL", "FERTIL", "FERTIL", "CTRL", "CTRL", "CTRL", "CTRL", "CTRL", "CTRL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "CTRL", "CTRL", "CTRL", "CTRL", "CTRL", "CTRL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "CTRL", "CTRL", "CTRL") C <- c("X2062", "X2062", "X1002", "X1002", "X1002", "X1002", "X2062", "X1002", "X1002", "X1002", "X1043", "X1043", "X1002", "X1002", "X1043", "X1043", "X1043", "X1043", "X1043", "X1043", "X1043", "X2062", "X2062", "X2062", "X2062", "X2062", "X1043", "X1043", "X1043", "X1002", "X2062", "X2062", "X1002", "X1002", "X1002", "X1043", "X1002", "X1002", "X2062", "X1002", "X1002", "X1043", "X1043", "X2062", "X2062", "X1043", "X1043", "X1043", "X1002", "X1002", "X2062", "X2062", "X2062", "X2062", "X2062", "X1002", "X2062", "X2062", "X1043", "X1043", "X1043", "X1002", "X1002", "X1043") x1 <- c(1.0510557018036901233415, -0.2181436362234073500499, -1.4873429742505046569079, -1.4873429742505046569079, 1.0510557018036901233415, -0.2181436362234073500499, 1.0510557018036901233415, -0.2181436362234073500499, -1.4873429742505046569079, 1.0510557018036901233415, -1.4873429742505046569079, -0.2181436362234073500499, -1.4873429742505046569079, 1.0510557018036901233415, -1.4873429742505046569079, -0.2181436362234073500499, -1.4873429742505046569079, 1.0510557018036901233415, 1.0510557018036901233415, -1.4873429742505046569079, -0.2181436362234073500499, -1.4873429742505046569079, -0.2181436362234073500499, 1.0510557018036901233415, 1.0510557018036901233415, -0.2181436362234073500499, -1.4873429742505046569079, -0.2181436362234073500499, 1.0510557018036901233415, -1.4873429742505046569079, 1.0510557018036901233415, 
-1.4873429742505046569079, 1.0510557018036901233415, -1.4873429742505046569079, -0.2181436362234073500499, -0.2181436362234073500499, -0.2181436362234073500499, 1.0510557018036901233415, 1.0510557018036901233415, -0.2181436362234073500499, 1.0510557018036901233415, 1.0510557018036901233415, -0.2181436362234073500499, 1.0510557018036901233415, -0.2181436362234073500499, 1.0510557018036901233415, -1.4873429742505046569079, -0.2181436362234073500499, -0.2181436362234073500499, 1.0510557018036901233415, 1.0510557018036901233415, -0.2181436362234073500499, 1.0510557018036901233415, -0.2181436362234073500499, 1.0510557018036901233415, 1.0510557018036901233415, -0.2181436362234073500499, 1.0510557018036901233415, -1.4873429742505046569079, -0.2181436362234073500499, 1.0510557018036901233415, 1.0510557018036901233415, -0.2181436362234073500499, -0.2181436362234073500499) x2 <- c(-0.16315747994073070969989, -1.05360463900379586199563, -0.93415267997080730921766 , -0.99252850004998782740273, 0.42735235724712267169068, -0.48798991709717626230614 , 2.23014349851008431002697, -0.21660376882282647570399, -0.72058928240081254745775 , 1.88298518683389759509339, -0.39593086292497708944538, 0.47803447414888039501690 , -0.87730234961981912178430, 1.45009767915056353437819, -0.64258282192604954108361 , -0.19048163415359734007559, -0.99134159919342790256991, 0.54366244394847862420050 , 1.66531762304370256622121, -0.79390525090304220334758, -0.01135437096435308254594 , -1.12494052429199431486495, -0.85482479546253875568596, -0.01159930187192333150836 , 0.75465437362335030524463, -0.54144904373718538703741, -0.85598224435921810293593 , -0.44893610617967599774047, 1.63402389894952282389795, -1.09431441740485424496399 , 1.22772575275634254765578, -0.88120822563015821504706, 2.18965726407797056651816 , -0.88830486762534022204818, 0.49904785683978480914647, -0.96391262983003389575032 , -0.84584241895151246470874, 0.01820625581143533663897, 2.04426796652258913056244 , 
-0.51725662678387962500892, 0.68076969847060686547024, 0.31955573105719387028145 , -0.76372130858969833333560, 0.48931143096917706358795, -0.63996954524487148496803 , -0.74922815557619715232107, -1.22071057920300574117789, -1.06313667425498303842346 , -0.39394827284068967276909, 1.14055737651990241054989, 0.08203018278837372057044 , -0.81693043678611854119254, 0.30102881937280212554242, -0.71118362891575670481359 , 0.16401615885635717484448, 0.07371942371769488189237, -0.34288372588738608159886 , 2.66190601319726560802792, -0.71051489168642878446747, 0.02034948571161497923865 , 1.92939368169321356916157, 0.75986042229322825480864, -0.70121494097762349095859 , -0.05413653704868248706106) dat <- data.frame(y, trat, C, x1, x2) m <- glm.nb(y~trat*C + x1+ I(x1^2) + x1:trat + I(x1^2):trat+x2 , dat) test_that("ggeffect, decimals", { testthat::expect_gt(nrow(ggeffect(m, c("x1"))), 0) testthat::expect_gt(nrow(ggeffect(m, c("x2"))), 0) testthat::expect_gt(nrow(ggeffect(m, c("x1", "x2"))), 0) testthat::expect_gt(nrow(ggeffect(m, c("x1", "trat"))), 0) }) test_that("ggemmeans, decimals", { testthat::expect_gt(nrow(ggemmeans(m, c("x1"))), 0) testthat::expect_gt(nrow(ggemmeans(m, c("x2"))), 0) testthat::expect_gt(nrow(ggemmeans(m, c("x1", "x2"))), 0) testthat::expect_gt(nrow(ggemmeans(m, c("x1", "trat"))), 0) }) test_that("ggpredict, decimals", { testthat::expect_gt(nrow(ggpredict(m, c("x1"))), 0) testthat::expect_gt(nrow(ggpredict(m, c("x2"))), 0) testthat::expect_gt(nrow(ggpredict(m, c("x1", "x2"))), 0) testthat::expect_gt(nrow(ggpredict(m, c("x1", "trat"))), 0) }) } ggeffects/tests/testthat/test-betareg.R0000644000176200001440000000302414036250437017725 0ustar liggesusersif (require("testthat") && require("ggeffects") && require("emmeans") && require("effects") && require("betareg")) { data("GasolineYield") m1 <- betareg(yield ~ batch + temp, data = GasolineYield) test_that("ggpredict", { p <- ggpredict(m1, "batch") expect_equal(p$predicted, unname(predict(m1, newdata = 
new_data(m1, "batch"))), tolerance = 1e-3) expect_s3_class(ggpredict(m1, c("batch", "temp")), "data.frame") }) test_that("ggeffect", { p <- ggeffect(m1, "batch") expect_equal(p$predicted[1], 0.3122091, tolerance = 1e-3) expect_s3_class(ggeffect(m1, c("batch", "temp")), "data.frame") }) test_that("ggemmeans", { p <- ggemmeans(m1, "batch") expect_equal(p$predicted[1], 0.3122091, tolerance = 1e-3) expect_s3_class(ggemmeans(m1, c("batch", "temp")), "data.frame") }) if (getRversion() >= "4.0.0") { #create df df2 <- data.frame( ratio = c(0.5, 0.5, 0.6, 0.6, 0.7, 0.8, 0.9, 0.9), GD = c(0.5, 0.4, 0.6, 0.7, 0.8, 1.0, 1.0, 1.0), Source_Salinity = c( "Brackish", "Fresh", "Brackish", "Fresh", "Brackish", "Fresh", "Fresh", "Brackish" ) ) #run beta model m <- betareg(ratio ~ GD + Source_Salinity, data = df2) test_that("ggpredict", { p <- ggemmeans(m, "GD") expect_equal(p$conf.low, c(0.34618, 0.43997, 0.53218, 0.61566, 0.68704, 0.79684), tolerance = 1e-2) }) } } ggeffects/tests/testthat/test-coxph.R0000644000176200001440000000436414030655632017445 0ustar liggesusersif (require("testthat") && require("ggeffects") && require("emmeans") && require("survival") && packageVersion("survival") >= "3.2.9") { data("lung2") m1 <- survival::coxph(survival::Surv(time, status) ~ sex + age + ph.ecog, data = lung2) test_that("ggpredict", { p <- ggpredict(m1, "sex") expect_equal(p$predicted[1], 1, tolerance = 1e-2) ggpredict(m1, c("sex", "age")) }) test_that("ggemmeans", { if (packageVersion("emmeans") > "1.4.5") { p <- ggemmeans(m1, "sex") expect_equal(p$predicted[1], 0.7521603, tolerance = 1e-2) ggemmeans(m1, c("sex", "age")) } }) test_that("ggpredict", { p <- ggpredict(m1, "sex", type = "surv") expect_equal(p$predicted[1], 0.9966796, tolerance = 1e-2) p <- ggpredict(m1, "sex", type = "cumhaz") expect_equal(p$predicted[1], 0.003325958, tolerance = 1e-2) }) # test start of time df1 <- data.frame( production = c(15, 12, 10, 9, 6, 8, 9, 5, 3, 3, 2, 1, 0, 0, 0, 0), Treatment_Num = c(1, 1, 1, 1, 2, 
2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4), Genotype = c(1, 1, 2, 2, 1, 1, 2, 2, 1, 1, 2, 2, 1, 1, 2, 2), Source_Salinity = c("Fresh", "Fresh", "Brackish", "Brackish", "Fresh", "Fresh", "Brackish", "Brackish", "Fresh", "Fresh", "Brackish", "Brackish", "Fresh", "Fresh", "Brackish", "Brackish"), Days_to_death = c(500, 500, 500, 500, 400, 350, 300, 500, 200, 202, 260, 280, 150, 150, 160, 140), censored = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0) ) m <- coxph(Surv(time = Days_to_death, event = censored) ~ Treatment_Num + Source_Salinity, data = df1) test_that("ggpredict", { p <- ggpredict(m, c("Treatment_Num [all]", "Source_Salinity [all]"),type = "survival") expect_equal(head(p$x, 10), c(1, 1, 1, 1, 1, 1, 1, 1, 140, 140), tolerance = 1e-2) expect_equal(head(p$predicted, 10), c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1), tolerance = 1e-2) p <- ggpredict(m, c("Treatment_Num [all]", "Source_Salinity [all]"),type = "cumulative_hazard") expect_equal(head(p$x, 10), c(1, 1, 1, 1, 1, 1, 1, 1, 140, 140), tolerance = 1e-2) expect_equal(head(p$predicted, 10), c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0), tolerance = 1e-2) }) } ggeffects/tests/testthat/test-gamm4.R0000644000176200001440000000132714046746430017331 0ustar liggesusers.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes" if (.runThisTest) { unloadNamespace("gam") if (require("testthat") && require("ggeffects") && require("gamm4")) { set.seed(123) dat <- gamSim(1, n = 400, scale = 2) ## simulate 4 term additive truth dat$fac <- fac <- as.factor(sample(1:20, 400, replace = TRUE)) dat$y <- dat$y + model.matrix( ~ fac - 1) %*% rnorm(20) * .5 set.seed(123) m1 <- gamm4(y ~ s(x0) + x1 + s(x2), data = dat, random = ~ (1 | fac)) test_that("ggpredict", { p <- ggpredict(m1, "x1") expect_equal(p$predicted[1], 5.885441, tolerance = 1e-4) expect_s3_class(ggpredict(m1, c("x1", "x2")), "data.frame") }) } } ggeffects/tests/testthat/test-polr.R0000644000176200001440000000410414036250437017270 0ustar liggesusersif (suppressWarnings( require("testthat") && 
require("ggeffects") && require("emmeans") && require("effects") && require("MASS") )) { options(contrasts = c("contr.treatment", "contr.poly")) fit <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing) test_that("ggpredict, polr", { ggpredict(fit, "Infl") ggpredict(fit, c("Infl", "Type")) ggpredict(fit, c("Infl", "Type", "Cont")) }) test_that("ggemmeans, polr", { ggemmeans(fit, "Infl") ggemmeans(fit, c("Infl", "Type")) ggemmeans(fit, c("Infl", "Type", "Cont")) }) test_that("ggpredict, polr", { ggpredict(fit, "Infl [Low,High]") ggpredict(fit, c("Infl [Low,High]", "Type [Tower]")) ggpredict(fit, c("Infl [Medium,Low]", "Type [Terrace]", "Cont [Low]")) }) test_that("ggemmeans, polr", { ggemmeans(fit, "Infl [Low,High]") ggemmeans(fit, c("Infl [Low,High]", "Type [Tower]")) ggemmeans(fit, c("Infl [Medium,Low]", "Type [Terrace]", "Cont [Low]")) }) test_that("ggpredict, polr", { ggpredict(fit, "Infl [Low,High]", condition = c(Type = "Tower")) ggpredict(fit, c("Infl [Low,High]", "Type [Tower]"), condition = c(Cont = "Low")) }) test_that("ggemmeans, polr", { ggemmeans(fit, "Infl [Low,High]", condition = c(Type = "Tower")) ggemmeans(fit, c("Infl [Low,High]", "Type [Tower]"), condition = c(Cont = "Low")) }) test_that("ggemmeans, polr", { p1 <- ggemmeans(fit, "Infl", condition = c(Type = "Tower", Cont = "Low")) p2 <- ggpredict(fit, "Infl") expect_equal( p1$predicted[p1$x == 1 & p1$response.level == "Low"], p2$predicted[p2$x == 1 & p2$response.level == "Low"], tolerance = 1e-3 ) }) test_that("ggeffect, polr", { ggeffect(fit, "Infl") ggeffect(fit, c("Infl", "Type")) ggeffect(fit, c("Infl", "Type", "Cont")) }) test_that("ggeffect, polr", { ggeffect(fit, "Infl [Low,High]") ggeffect(fit, c("Infl [Low,High]", "Type [Tower]")) ggeffect(fit, c("Infl [Medium,Low]", "Type [Terrace]", "Cont [Low]")) }) } ggeffects/tests/testthat/test-survreg.R0000644000176200001440000000121314100515044017775 0ustar liggesusersif (suppressWarnings( require("testthat") && require("ggeffects") && 
require("survival") && getRversion() >= "3.6.0" && packageVersion("survival") >= "3.2.9" )) { m1 <- survreg(Surv(futime, fustat) ~ ecog.ps + rx, data = ovarian, dist = "exponential") test_that("ggpredict, survreg", { pr <- ggpredict(m1, "ecog.ps") expect_equal(pr$predicted[1], 1637.551, tolerance = 1e-4) }) test_that("ggeffect, survreg", { expect_null(ggeffect(m1, "ecog.ps")) }) test_that("ggemmeans, survreg", { pr <- ggemmeans(m1, "ecog.ps") expect_equal(pr$predicted[1], 1637.551, tolerance = 1e-4) }) } ggeffects/tests/testthat/test-contrasts3.R0000644000176200001440000000245214100515054020412 0ustar liggesusersif (suppressWarnings( require("testthat") && require("ggeffects") && require("lme4") && require("sjlabelled") )) { data(efc) efc$e15relat <- as_label(efc$e15relat) m <- lmer(neg_c_7 ~ c160age + c12hour + (1 | e15relat), data = efc) test_that("ggpredict, contrasts-1", { options(contrasts = rep("contr.sum", 2)) pr <- ggpredict(m, c("c160age", "c12hour")) expect_false(anyNA(pr$std.error)) }) test_that("ggpredict, contrasts-2", { options(contrasts = rep("contr.sum", 2)) pr <- ggpredict(m, "c160age") expect_false(anyNA(pr$std.error)) }) test_that("ggpredict, contrasts-3", { options(contrasts = rep("contr.sum", 2)) pr <- ggpredict(m, "c12hour") expect_false(anyNA(pr$std.error)) }) test_that("ggpredict, contrasts-4", { options(contrasts = rep("contr.treatment", 2)) pr <- ggpredict(m, c("c160age", "c12hour")) expect_false(anyNA(pr$std.error)) }) test_that("ggpredict, contrasts-5", { options(contrasts = rep("contr.treatment", 2)) pr <- ggpredict(m, "c160age") expect_false(anyNA(pr$std.error)) }) test_that("ggpredict, contrasts-6", { options(contrasts = rep("contr.treatment", 2)) pr <- ggpredict(m, "c12hour") expect_false(anyNA(pr$std.error)) }) } ggeffects/tests/testthat/test-svyglmnb.R0000644000176200001440000000124313753253775020173 0ustar liggesusersif (suppressWarnings( require("testthat") && require("ggeffects") && require("sjlabelled") && require("survey") && 
require("sjstats") &&
  require("sjmisc")
)) {
  # svyglm.nb -----
  data(nhanes_sample)

  # create survey design
  des <- svydesign(
    id = ~SDMVPSU,
    strat = ~SDMVSTRA,
    weights = ~WTINT2YR,
    nest = TRUE,
    data = nhanes_sample
  )

  # fit negative binomial regression
  fit <- svyglm.nb(total ~ RIAGENDR + age + RIDRETH1, des)

  test_that("ggpredict, svyglm.nb", {
    expect_s3_class(ggpredict(fit, "age"), "data.frame")
    expect_s3_class(ggpredict(fit, c("age", "RIAGENDR")), "data.frame")
  })
}

# ==== ggeffects/tests/testthat/test-glm.R ====

if (suppressWarnings(
  require("testthat") &&
    require("ggeffects") &&
    require("lme4") &&
    require("sjlabelled") &&
    require("sjmisc")
)) {
  # glm, logistic regression ----
  data(efc)
  efc$neg_c_7d <- dicho(efc$neg_c_7)
  fit <- glm(
    neg_c_7d ~ c12hour + e42dep + c161sex + c172code,
    data = efc,
    family = binomial(link = "logit")
  )
  m <- glm(
    cbind(incidence, size - incidence) ~ period,
    family = binomial,
    data = lme4::cbpp
  )

  test_that("ggpredict, glm", {
    expect_s3_class(ggpredict(fit, "c12hour"), "data.frame")
    expect_s3_class(ggpredict(fit, c("c12hour", "c161sex")), "data.frame")
    expect_s3_class(ggpredict(fit, c("c12hour", "c161sex", "c172code")), "data.frame")
  })

  test_that("ggeffect, glm", {
    expect_s3_class(ggeffect(fit, "c12hour"), "data.frame")
    expect_s3_class(ggeffect(fit, c("c12hour", "c161sex")), "data.frame")
    expect_s3_class(ggeffect(fit, c("c12hour", "c161sex", "c172code")), "data.frame")
  })

  test_that("ggemmeans, glm", {
    expect_s3_class(ggemmeans(fit, "c12hour"), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c12hour", "c161sex")), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c12hour", "c161sex", "c172code")), "data.frame")
  })

  p1 <- ggpredict(m, "period")
  p2 <- ggeffect(m, "period")
  p3 <- ggemmeans(m, "period")

  test_that("ggeffects, glm", {
    expect_equal(p1$predicted[1], 0.2194245, tolerance = 1e-3)
    expect_equal(p2$predicted[1], 0.2194245, tolerance = 1e-3)
    expect_equal(p3$predicted[1], 0.2194245, tolerance = 1e-3)
  })

  test_that("ggpredict, glm, robust", {
    expect_s3_class(ggpredict(fit, "c12hour", vcov.fun = "vcovHC", vcov.type = "HC1"), "data.frame")
    expect_s3_class(ggpredict(fit, c("c12hour", "c161sex"), vcov.fun = "vcovHC", vcov.type = "HC1"), "data.frame")
    expect_s3_class(ggpredict(fit, c("c12hour", "c161sex", "c172code"), vcov.fun = "vcovHC", vcov.type = "HC1"), "data.frame")
  })

  test_that("ggeffects, glm, robust", {
    expect_s3_class(ggpredict(m, "period", vcov.fun = "vcovHC", vcov.type = "HC1"), "data.frame")
  })

  data(cbpp)
  cbpp$trials <- cbpp$size - cbpp$incidence
  m1 <- glmer(cbind(incidence, trials) ~ period + (1 | herd), data = cbpp, family = binomial)
  m2 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd), data = cbpp, family = binomial)
  m3 <- glm(cbind(incidence, trials) ~ period, data = cbpp, family = binomial)
  m4 <- glm(cbind(incidence, size - incidence) ~ period, data = cbpp, family = binomial)

  test_that("ggeffects, glm-matrix-columns", {
    expect_s3_class(ggpredict(m1, "period"), "data.frame")
    expect_s3_class(ggpredict(m2, "period"), "data.frame")
    expect_s3_class(ggpredict(m3, "period"), "data.frame")
    expect_s3_class(ggpredict(m4, "period"), "data.frame")
    expect_s3_class(ggemmeans(m1, "period"), "data.frame")
    expect_s3_class(ggemmeans(m2, "period"), "data.frame")
    expect_s3_class(ggemmeans(m3, "period"), "data.frame")
    expect_s3_class(ggemmeans(m4, "period"), "data.frame")
  })
}

# ==== ggeffects/tests/testthat/test-clean_vars.R ====

.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest && suppressWarnings(
  require("testthat") &&
    require("emmeans") &&
    require("ggeffects")
)) {
  # lm, linear regression ----
  data(efc)
  efc$c172code <- as.factor(efc$c172code)
  efc$e42dep <- as.factor(efc$e42dep)
  fit <- lm(barthtot ~ c12hour + c172code + e42dep, data = efc)

  test_that("ggpredict, clean_vars", {
    expect_equal(nrow(ggpredict(fit, "c172code")), 3)
    expect_equal(nrow(ggpredict(fit, "c172code [1,3]")), 2)
    expect_equal(nrow(ggpredict(fit, "c172code[1,3]")), 2)
  })

  test_that("ggpredict, clean_vars", {
    expect_equal(nrow(ggemmeans(fit, "c172code")), 3)
    expect_equal(nrow(ggemmeans(fit, "c172code [1,3]")), 2)
    expect_equal(nrow(ggemmeans(fit, "c172code[1,3]")), 2)
  })

  test_that("ggpredict, clean_vars", {
    expect_equal(nrow(ggpredict(fit, "e42dep")), 4)
    expect_equal(nrow(ggpredict(fit, "e42dep [1,3]")), 2)
    expect_equal(nrow(ggpredict(fit, "e42dep[1,3]")), 2)
  })

  test_that("ggpredict, clean_vars", {
    expect_equal(nrow(ggpredict(fit, c("c172code", "e42dep"))), 12)
    expect_equal(nrow(ggpredict(fit, c("c172code [1,3]", "e42dep"))), 8)
    expect_equal(nrow(ggpredict(fit, c("c172code", "e42dep [1,3]"))), 6)
    expect_equal(nrow(ggpredict(fit, c("c172code [1,3]", "e42dep [1,3]"))), 4)
    expect_equal(nrow(ggpredict(fit, c("c172code[1,3]", "e42dep"))), 8)
    expect_equal(nrow(ggpredict(fit, c("c172code", "e42dep[1,3]"))), 6)
    expect_equal(nrow(ggpredict(fit, c("c172code[1,3]", "e42dep[1,3]"))), 4)
  })

  test_that("ggemmeans, clean_vars", {
    expect_equal(nrow(ggemmeans(fit, c("c172code", "e42dep"))), 12)
    expect_equal(nrow(ggemmeans(fit, c("c172code [1,3]", "e42dep"))), 8)
    expect_equal(nrow(ggemmeans(fit, c("c172code", "e42dep [1,3]"))), 6)
    expect_equal(nrow(ggemmeans(fit, c("c172code [1,3]", "e42dep [1,3]"))), 4)
    expect_equal(nrow(ggemmeans(fit, c("c172code[1,3]", "e42dep"))), 8)
    expect_equal(nrow(ggemmeans(fit, c("c172code", "e42dep[1,3]"))), 6)
    expect_equal(nrow(ggemmeans(fit, c("c172code[1,3]", "e42dep[1,3]"))), 4)
  })
}

# ==== ggeffects/tests/testthat/test-linear-models.R ====

if (suppressWarnings(
  require("testthat") &&
    require("ggeffects") &&
    require("sjlabelled") &&
    require("emmeans") &&
    require("effects") &&
    require("haven") &&
    require("sjmisc")
)) {
  # lm, linear regression ----
  data(efc)
  fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

  test_that("ggpredict, lm", {
    expect_s3_class(ggpredict(fit, "c12hour"), "data.frame")
    expect_s3_class(ggpredict(fit, c("c12hour", "c161sex")), "data.frame")
  })

  test_that("ggpredict, lm print", {
    x <- ggpredict(fit, c("c12hour", "c161sex", "c172code"))
    out <- utils::capture.output(print(x))
    expect_equal(
      out,
      c("# Predicted values of Total score BARTHEL INDEX", "",
        "# c161sex = Male", "# c172code = [1] low level of education", "",
        "c12hour | Predicted | 95% CI", "------------------------------------",
        " 0 | 73.95 | [69.35, 78.55]", " 45 | 62.56 | [58.23, 66.88]",
        " 85 | 52.42 | [47.90, 56.95]", " 170 | 30.89 | [24.85, 36.94]", "",
        "# c161sex = Female", "# c172code = [1] low level of education", "",
        "c12hour | Predicted | 95% CI", "------------------------------------",
        " 0 | 75.00 | [71.41, 78.59]", " 45 | 63.60 | [60.46, 66.74]",
        " 85 | 53.46 | [50.13, 56.80]", " 170 | 31.93 | [26.83, 37.04]", "",
        "# c161sex = Male", "# c172code = [2] intermediate level of education", "",
        "c12hour | Predicted | 95% CI", "------------------------------------",
        " 0 | 74.67 | [71.06, 78.29]", " 45 | 63.27 | [59.88, 66.66]",
        " 85 | 53.14 | [49.40, 56.89]", " 170 | 31.61 | [25.98, 37.24]", "",
        "# c161sex = Female", "# c172code = [2] intermediate level of education", "",
        "c12hour | Predicted | 95% CI", "------------------------------------",
        " 0 | 75.71 | [73.31, 78.12]", " 45 | 64.32 | [62.42, 66.21]",
        " 85 | 54.18 | [51.81, 56.55]", " 170 | 32.65 | [27.94, 37.36]", "",
        "# c161sex = Male", "# c172code = [3] high level of education", "",
        "c12hour | Predicted | 95% CI", "------------------------------------",
        " 0 | 75.39 | [71.04, 79.74]", " 45 | 63.99 | [59.73, 68.26]",
        " 85 | 53.86 | [49.23, 58.49]", " 170 | 32.33 | [25.95, 38.71]", "",
        "# c161sex = Female", "# c172code = [3] high level of education", "",
        "c12hour | Predicted | 95% CI", "------------------------------------",
        " 0 | 76.43 | [72.89, 79.98]", " 45 | 65.03 | [61.68, 68.39]",
        " 85 | 54.90 | [51.16, 58.65]", " 170 | 33.37 | [27.70, 39.05]", "",
        "Adjusted for:", "* neg_c_7 = 11.84")
    )
    x <- ggpredict(fit, c("c12hour", "c161sex", "neg_c_7"))
    out <- utils::capture.output(print(x))
    expect_equal(
      out,
      c("# Predicted values of Total score BARTHEL INDEX", "",
        "# c161sex = Male", "# neg_c_7 = 11.8", "",
        "c12hour | Predicted | 95% CI", "------------------------------------",
        " 0 | 74.74 | [71.12, 78.36]", " 45 | 63.34 | [59.95, 66.73]",
        " 85 | 53.21 | [49.46, 56.95]", " 170 | 31.68 | [26.05, 37.30]", "",
        "# c161sex = Female", "# neg_c_7 = 11.8", "",
        "c12hour | Predicted | 95% CI", "------------------------------------",
        " 0 | 75.78 | [73.38, 78.18]", " 45 | 64.38 | [62.49, 66.28]",
        " 85 | 54.25 | [51.89, 56.61]", " 170 | 32.72 | [28.01, 37.42]", "",
        "# c161sex = Male", "# neg_c_7 = 15.7", "",
        "c12hour | Predicted | 95% CI", "------------------------------------",
        " 0 | 65.78 | [61.54, 70.02]", " 45 | 54.38 | [50.50, 58.26]",
        " 85 | 44.25 | [40.20, 48.29]", " 170 | 22.72 | [17.11, 28.33]", "",
        "# c161sex = Female", "# neg_c_7 = 15.7", "",
        "c12hour | Predicted | 95% CI", "------------------------------------",
        " 0 | 66.82 | [63.70, 69.94]", " 45 | 55.42 | [52.94, 57.91]",
        " 85 | 45.29 | [42.65, 47.93]", " 170 | 23.76 | [19.18, 28.34]", "",
        "# c161sex = Male", "# neg_c_7 = 8", "",
        "c12hour | Predicted | 95% CI", "------------------------------------",
        " 0 | 83.47 | [79.72, 87.21]", " 45 | 72.07 | [68.36, 75.78]",
        " 85 | 61.94 | [57.76, 66.11]", " 170 | 40.41 | [34.28, 46.54]", "",
        "# c161sex = Female", "# neg_c_7 = 8", "",
        "c12hour | Predicted | 95% CI", "------------------------------------",
        " 0 | 84.51 | [81.75, 87.27]", " 45 | 73.11 | [70.51, 75.71]",
        " 85 | 62.98 | [59.83, 66.13]", " 170 | 41.45 | [36.07, 46.83]", "",
        "Adjusted for:", "* c172code = 1.97")
    )
  })

  test_that("ggpredict, lm by", {
    expect_equal(nrow(ggpredict(fit, "c12hour [10:20]")), 11)
    expect_equal(nrow(ggpredict(fit, "c12hour [10:20 by=.2]")), 51)
    expect_equal(nrow(ggpredict(fit, "c12hour [10:20 by = .2]")), 51)
    expect_equal(nrow(ggpredict(fit, "c12hour [10:20by=.2]")), 51)
  })

  test_that("ggpredict, lm-vcov", {
    expect_s3_class(ggpredict(fit, c("c12hour", "c161sex"), vcov.fun = "vcovHC", vcov.type = "HC1"), "data.frame")
  })

  test_that("ggpredict, lm-prediction-interval", {
    pr <- ggpredict(fit, c("c12hour", "c161sex"), interval = "predict")
    expect_equal(pr$conf.low[1], 27.43113, tolerance = 1e-4)
    pr <- ggpredict(fit, c("c12hour", "c161sex"), interval = "conf")
    expect_equal(pr$conf.low[1], 71.02894, tolerance = 1e-4)
    pr <- ggpredict(fit, c("c12hour", "c161sex"), interval = "predict", vcov.fun = "vcovHC", vcov.type = "HC1")
    expect_equal(pr$conf.low[1], 27.44084, tolerance = 1e-4)
    expect_s3_class(ggpredict(fit, c("c12hour", "c161sex"), interval = "predict", ci.lvl = NA), "data.frame")
    expect_s3_class(ggpredict(fit, c("c12hour", "c161sex"), interval = "conf", ci.lvl = NA), "data.frame")
  })

  test_that("ggpredict, lm-noci", {
    expect_s3_class(ggpredict(fit, c("c12hour", "c161sex"), ci.lvl = NA), "data.frame")
  })

  test_that("ggpredict, lm, ci.lvl", {
    expect_s3_class(ggpredict(fit, "c12hour", ci.lvl = .8), "data.frame")
    expect_s3_class(ggpredict(fit, c("c12hour", "c161sex"), ci.lvl = .8), "data.frame")
    expect_s3_class(ggpredict(fit, c("c12hour", "c161sex", "c172code"), ci.lvl = .8), "data.frame")
  })

  test_that("ggpredict, lm, typical", {
    expect_s3_class(ggpredict(fit, "c12hour", ci.lvl = .8, typical = "median"), "data.frame")
    expect_s3_class(ggpredict(fit, c("c12hour", "c161sex"), ci.lvl = .8, typical = "median"), "data.frame")
    expect_s3_class(ggpredict(fit, c("c12hour", "c161sex", "c172code"), ci.lvl = .8, typical = "median"), "data.frame")
  })

  test_that("ggpredict, lm, condition", {
    expect_s3_class(ggpredict(fit, "c172code", condition = c(c12hour = 40), ci.lvl = .8, typical = "median"), "data.frame")
    expect_s3_class(ggpredict(fit, c("c172code", "c161sex"), condition = c(c12hour = 40), ci.lvl = .8, typical = "median"), "data.frame")
  })

  test_that("ggpredict, lm, pretty", {
    expect_s3_class(ggpredict(fit, "c12hour", full.data = TRUE, ci.lvl = .8, typical = "median"), "data.frame")
    expect_s3_class(ggpredict(fit, c("c12hour", "c161sex"), full.data = TRUE, ci.lvl = .8, typical = "median"), "data.frame")
  })

  test_that("ggpredict, lm, full.data", {
    expect_s3_class(ggpredict(fit, "c172code", full.data = TRUE, ci.lvl = .8, typical = "median"), "data.frame")
    expect_s3_class(ggpredict(fit, c("c172code", "c161sex"), full.data = TRUE, ci.lvl = .8, typical = "median"), "data.frame")
  })

  test_that("ggeffect, lm", {
    expect_s3_class(ggeffect(fit, "c12hour"), "data.frame")
    expect_s3_class(ggeffect(fit, c("c12hour", "c161sex")), "data.frame")
    expect_s3_class(ggeffect(fit, c("c12hour", "c161sex", "c172code")), "data.frame")
  })

  test_that("ggemmeans, lm", {
    expect_s3_class(ggemmeans(fit, "c12hour"), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c12hour", "c161sex")), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c12hour", "c161sex", "c172code")), "data.frame")
  })

  test_that("ggemmeans, lm, ci.lvl", {
    expect_s3_class(ggemmeans(fit, "c12hour", ci.lvl = .8), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c12hour", "c161sex"), ci.lvl = .8), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c12hour", "c161sex", "c172code"), ci.lvl = .8), "data.frame")
  })

  test_that("ggemmeans, lm, typical", {
    expect_s3_class(ggemmeans(fit, "c12hour", ci.lvl = .8, typical = "median"), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c12hour", "c161sex"), ci.lvl = .8, typical = "median"), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c12hour", "c161sex", "c172code"), ci.lvl = .8, typical = "median"), "data.frame")
  })

  test_that("ggemmeans, lm, condition", {
    expect_s3_class(ggemmeans(fit, "c172code", condition = c(c12hour = 40), ci.lvl = .8, typical = "median"), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c172code", "c161sex"), condition = c(c12hour = 40), ci.lvl = .8, typical = "median"), "data.frame")
  })

  test_that("ggemmeans, lm, pretty", {
    expect_s3_class(ggemmeans(fit, "c12hour", full.data = TRUE, ci.lvl = .8, typical = "median"), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c12hour", "c161sex"), full.data = TRUE, ci.lvl = .8, typical = "median"), "data.frame")
  })

  data(efc)
  efc$c172code <- to_label(efc$c172code)
  fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

  test_that("ggpredict, lm", {
    expect_s3_class(ggpredict(fit, "c12hour [20,30,40]"), "data.frame")
    expect_s3_class(ggpredict(fit, "c12hour [30:60]"), "data.frame")
    expect_s3_class(ggpredict(fit, c("c12hour [30:60]", "c161sex", "c172code [high level of education,low level of education]")), "data.frame")
  })

  test_that("ggpredict, lm", {
    expect_s3_class(ggpredict(fit, "c12hour [meansd]"), "data.frame")
    expect_s3_class(ggpredict(fit, "c12hour [minmax]"), "data.frame")
    expect_s3_class(ggpredict(fit, c("c12hour [quart]", "c161sex", "c172code [high level of education,low level of education]")), "data.frame")
    expect_s3_class(ggpredict(fit, c("c12hour [zeromax]", "c161sex", "c172code [high level of education,low level of education]")), "data.frame")
    expect_s3_class(ggpredict(fit, c("c12hour [quart2]", "c161sex", "c172code [high level of education,low level of education]")), "data.frame")
  })

  test_that("ggeffect, lm", {
    expect_s3_class(ggeffect(fit, "c12hour [20,30,40]"), "data.frame")
    expect_s3_class(ggeffect(fit, "c12hour [30:60]"), "data.frame")
    expect_s3_class(ggeffect(fit, c("c12hour [30:60]", "c161sex", "c172code [high level of education,low level of education]")), "data.frame")
  })

  test_that("ggeffect, lm", {
    expect_s3_class(ggeffect(fit, "c12hour [meansd]"), "data.frame")
    expect_s3_class(ggeffect(fit, "c12hour [minmax]"), "data.frame")
    expect_s3_class(ggeffect(fit, c("c12hour [quart]", "c161sex", "c172code [high level of education,low level of education]")), "data.frame")
    expect_s3_class(ggeffect(fit, c("c12hour [zeromax]", "c161sex", "c172code [high level of education,low level of education]")), "data.frame")
    expect_s3_class(ggeffect(fit, c("c12hour [quart2]", "c161sex", "c172code [high level of education,low level of education]")), "data.frame")
  })

  test_that("ggemmeans, lm", {
    expect_s3_class(ggemmeans(fit, "c12hour [20,30,40]"), "data.frame")
    expect_s3_class(ggemmeans(fit, "c12hour [30:60]"), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c12hour [30:60]", "c161sex", "c172code [high level of education,low level of education]")), "data.frame")
  })

  test_that("ggemmeans, lm", {
    expect_s3_class(ggemmeans(fit, "c12hour [meansd]"), "data.frame")
    expect_s3_class(ggemmeans(fit, "c12hour [minmax]"), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c12hour [quart]", "c161sex", "c172code [high level of education,low level of education]")), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c12hour [zeromax]", "c161sex", "c172code [high level of education,low level of education]")), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c12hour [quart2]", "c161sex", "c172code [high level of education,low level of education]")), "data.frame")
  })

  data(efc)
  efc$c172code <- to_label(efc$c172code)
  fit <- lm(barthtot ~ log(c12hour) + c161sex + c172code, data = efc)

  test_that("ggpredict, lm, log", {
    expect_warning(ggpredict(fit, "c12hour [meansd]"))
    expect_s3_class(ggpredict(fit, "c12hour [minmax]"), "data.frame")
    expect_s3_class(ggpredict(fit, c("c12hour", "c172code [high level of education,low level of education]")), "data.frame")
    expect_s3_class(ggpredict(fit, c("c12hour [exp]", "c172code [high level of education,low level of education]")), "data.frame")
  })

  test_that("ggeffect, lm, log", {
    expect_s3_class(ggeffect(fit, "c12hour [meansd]"), "data.frame")
    expect_s3_class(ggeffect(fit, "c12hour [minmax]"), "data.frame")
    expect_s3_class(ggeffect(fit, c("c12hour", "c172code [high level of education,low level of education]")), "data.frame")
    expect_s3_class(suppressWarnings(ggeffect(fit, c("c12hour [exp]", "c172code [high level of education,low level of education]"))), "data.frame")
  })

  test_that("ggeffect, lm, no_space", {
    expect_s3_class(ggeffect(fit, "c12hour[meansd]"), "data.frame")
    expect_s3_class(ggeffect(fit, "c12hour[minmax]"), "data.frame")
    expect_s3_class(ggeffect(fit, c("c12hour", "c172code[high level of education,low level of education]")), "data.frame")
    expect_s3_class(suppressWarnings(ggeffect(fit, c("c12hour[exp]", "c172code[high level of education,low level of education]"))), "data.frame")
  })

  test_that("ggemmeans, lm, log", {
    expect_s3_class(ggemmeans(fit, "c12hour [meansd]"), "data.frame")
    expect_s3_class(ggemmeans(fit, "c12hour [minmax]"), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c12hour", "c172code [high level of education,low level of education]")), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c12hour [exp]", "c172code [high level of education,low level of education]")), "data.frame")
  })

  test_that("ggemmeans, lm, no_space", {
    expect_s3_class(ggemmeans(fit, "c12hour[meansd]"), "data.frame")
    expect_s3_class(ggemmeans(fit, "c12hour[minmax]"), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c12hour", "c172code[high level of education,low level of education]")), "data.frame")
    expect_s3_class(ggemmeans(fit, c("c12hour[exp]", "c172code[high level of education,low level of education]")), "data.frame")
  })

  test_that("ggpredict, lm formula", {
    expect_s3_class(ggpredict(fit, ~ c12hour), "data.frame")
    expect_s3_class(ggpredict(fit, ~ c12hour + c161sex), "data.frame")
    expect_s3_class(ggpredict(fit, ~ c12hour + c161sex + c172code), "data.frame")
  })

  d <- subset(efc, select = c(barthtot, c12hour, neg_c_7, c172code))
  d <- na.omit(d)
  m1 <- lm(barthtot ~ c12hour + poly(neg_c_7, 2) + c172code, data = d)
  m2 <- lm(barthtot ~ c12hour + poly(neg_c_7, 3, raw = TRUE) + c172code, data = d)
  m3 <- lm(barthtot ~ scale(c12hour) + poly(neg_c_7, 2) + c172code, data = d)

  test_that("ggpredict, lm", {
    expect_s3_class(ggpredict(m1, "neg_c_7"), "data.frame")
    expect_s3_class(ggpredict(m2, "neg_c_7"), "data.frame")
    expect_s3_class(ggpredict(m3, "neg_c_7"), "data.frame")
    expect_s3_class(ggpredict(m3, "c12hour"), "data.frame")
  })

  test_that("ggemmeans, lm", {
    expect_s3_class(ggemmeans(m1, "neg_c_7"), "data.frame")
    expect_s3_class(ggemmeans(m2, "neg_c_7"), "data.frame")
    expect_s3_class(ggemmeans(m3, "neg_c_7"), "data.frame")
    expect_s3_class(ggemmeans(m3, "c12hour"), "data.frame")
  })

  data(efc)
  fit <- lm(barthtot ~ c12hour + neg_c_7, data = efc)

  test_that("ggemmeans, lm", {
    p1 <- ggemmeans(fit, "neg_c_7")
    p2 <- ggeffect(fit, "neg_c_7")
    p3 <- ggpredict(fit, "neg_c_7")
    expect_equal(p1$predicted[1], 78.2641, tolerance = 1e-3)
    expect_equal(p2$predicted[1], 78.2641, tolerance = 1e-3)
    expect_equal(p3$predicted[1], 78.2641, tolerance = 1e-3)
  })

  test_that("ggemmeans, lm", {
    p1 <- ggemmeans(fit, "neg_c_7 [5,10]")
    p2 <- ggeffect(fit, "neg_c_7 [5,10]")
    p3 <- ggpredict(fit, "neg_c_7 [5,10]")
    expect_equal(p1$predicted[1], 80.58504, tolerance = 1e-3)
    expect_equal(p2$predicted[1], 80.58504, tolerance = 1e-3)
    expect_equal(p3$predicted[1], 80.58504, tolerance = 1e-3)
  })
}

# ==== ggeffects/tests/testthat/test-condition.R ====

if (suppressWarnings(
  require("testthat") &&
    require("ggeffects") &&
    require("lme4") &&
    require("sjmisc")
)) {
  data(efc)
  efc$e42dep <- to_label(efc$e42dep)
  fit <- lm(barthtot ~ c12hour + neg_c_7 + e42dep + c172code, data = efc)

  test_that("ggpredict, condition", {
    expect_s3_class(ggpredict(fit, "c172code"), "data.frame")
    expect_s3_class(ggpredict(fit, "c172code", condition = c(c12hour = 40)), "data.frame")
    expect_s3_class(ggpredict(fit, "c172code", condition = c(c12hour = 40, e42dep = "severely dependent")), "data.frame")
    expect_s3_class(ggpredict(fit, "c172code", condition = c(e42dep = "severely dependent")), "data.frame")
  })

  test_that("ggemmeans, condition", {
    expect_s3_class(ggemmeans(fit, "c172code"), "data.frame")
    expect_s3_class(ggemmeans(fit, "c172code", condition = c(c12hour = 40)), "data.frame")
    expect_s3_class(ggemmeans(fit, "c172code", condition = c(c12hour = 40, e42dep = "severely dependent")), "data.frame")
    expect_s3_class(ggemmeans(fit, "c172code", condition = c(e42dep = "severely dependent")), "data.frame")
  })

  efc$neg_c_7d <- dicho(efc$neg_c_7)
  m1 <- glm(
    neg_c_7d ~ c12hour + e42dep + c161sex + c172code,
    data = efc,
    family = binomial(link = "logit")
  )

  test_that("ggpredict, glm", {
    expect_s3_class(ggpredict(m1, "c12hour", condition = c(e42dep = "severely dependent")), "data.frame")
    expect_s3_class(ggpredict(m1, c("c12hour", "c161sex"), condition = c(e42dep = "severely dependent")), "data.frame")
    expect_s3_class(ggpredict(m1, c("c12hour", "c161sex", "c172code"), condition = c(e42dep = "severely dependent")), "data.frame")
  })

  test_that("ggpredict, glm", {
    expect_s3_class(ggemmeans(m1, "c12hour", condition = c(e42dep = "severely dependent")), "data.frame")
    expect_s3_class(ggemmeans(m1, c("c12hour", "c161sex"), condition = c(e42dep = "severely dependent")), "data.frame")
    expect_s3_class(ggemmeans(m1, c("c12hour", "c161sex", "c172code"), condition = c(e42dep = "severely dependent")), "data.frame")
  })

  efc$neg_c_7d <- dicho(efc$neg_c_7)
  m2 <- glm(
    neg_c_7d ~ c12hour + e42dep + c161sex + c172code,
    data = efc,
    family = binomial(link = "logit")
  )

  test_that("ggpredict, glm", {
    expect_s3_class(ggpredict(m2, "c12hour", condition = c(c172code = 1)), "data.frame")
    expect_s3_class(ggpredict(m2, c("c12hour", "c161sex"), condition = c(c172code = 2)), "data.frame")
  })

  data(efc)
  efc$grp <- to_label(efc$e15relat)
  efc$e42dep <- to_label(efc$e42dep)
  m3 <- lmer(neg_c_7 ~ c12hour + e42dep + c161sex + c172code + (1 | grp), data = efc)

  test_that("ggpredict, condition-lmer", {
    pr <- ggpredict(m3, "c12hour", type = "re")
    expect_equal(pr$predicted[1], 8.962075, tolerance = 1e-3)
    expect_equal(pr$std.error[1], 3.601748, tolerance = 1e-3)
    pr <- ggpredict(m3, "c12hour", type = "re", condition = c(c172code = 1))
    expect_equal(pr$predicted[1], 8.62045, tolerance = 1e-3)
    expect_equal(pr$std.error[1], 3.606084, tolerance = 1e-3)
    pr <- ggpredict(m3, "c12hour", type = "re", condition = c(e42dep = "severely dependent"))
    expect_equal(pr$predicted[1], 12.83257, tolerance = 1e-3)
    expect_equal(pr$std.error[1], 3.601748, tolerance = 1e-3)
    pr <- ggpredict(m3, "c12hour", type = "re", condition = c(e42dep = "severely dependent", c172code = 3))
    expect_equal(pr$predicted[1], 13.19621, tolerance = 1e-3)
    expect_equal(pr$std.error[1], 3.608459, tolerance = 1e-3)
    pr <- ggpredict(m3, "c12hour", type = "re", condition = c(e42dep = "severely dependent", c172code = 3, grp = "sibling"))
    expect_equal(pr$predicted[1], 13.13315, tolerance = 1e-3)
    expect_equal(pr$std.error[1], 3.608459, tolerance = 1e-3)
    pr <- ggpredict(m3, "c12hour", type = "re", condition = c(c172code = 3, grp = "sibling"))
    expect_equal(pr$predicted[1], 9.26265, tolerance = 1e-3)
    expect_equal(pr$std.error[1], 3.608459, tolerance = 1e-3)
    pr <- ggpredict(m3, "c12hour", type = "re", condition = c(grp = "sibling"))
    expect_equal(pr$predicted[1], 8.89902, tolerance = 1e-3)
    expect_equal(pr$std.error[1], 3.601748, tolerance = 1e-3)
  })
}

# ==== ggeffects/tests/testthat/test-plot.R ====

if (suppressWarnings(
  require("testthat") &&
    require("ggeffects") &&
    require("sjlabelled") &&
    require("sjmisc")
)) {
  # lm, linear regression ----
  data(efc)
  efc$c172code <- to_label(efc$c172code)
  fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

  test_that("ggpredict, lm", {
    pr <- ggpredict(fit, "c12hour [20,30,40]")
    p <- plot(pr)
    p <- plot(pr, ci = FALSE)
    p <- plot(pr, ci = TRUE, ci.style = "dot")
    p <- plot(pr, add.data = TRUE)
    p <- plot(pr, add.data = TRUE, jitter = FALSE)
    p <- plot(pr, colors = "bw")
    p <- plot(pr, colors = "gs")
    pr <- ggpredict(fit, c("c12hour", "c172code"))
    p <- plot(pr)
    p <- plot(pr, ci = FALSE)
    p <- plot(pr, ci = TRUE, ci.style = "dot")
    p <- plot(pr, add.data = TRUE)
    p <- plot(pr, add.data = TRUE, jitter = 0)
    p <- plot(pr, facets = TRUE)
    p <- plot(pr, facets = FALSE)
    p <- plot(pr, use.theme = FALSE)
    p <- plot(pr, colors = "bw")
    p <- plot(pr, colors = "gs")
  })
}
# ==== ggeffects/tests/testthat/test-vglm.R ====

unloadNamespace("gam")

.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest &&
    require("testthat") &&
    require("ggeffects") &&
    require("VGAM")) {
  d.AD <- data.frame(
    treatment = gl(3, 3),
    outcome = gl(3, 1, 9),
    counts = c(18, 17, 15, 20, 10, 20, 25, 13, 12)
  )
  m1 <- vglm(counts ~ outcome + treatment, family = poissonff, data = d.AD, trace = TRUE)

  test_that("ggpredict", {
    p <- ggpredict(m1, "outcome")
    expect_equal(p$predicted[1], 21, tolerance = 1e-3)
  })

  set.seed(123)
  N <- 100
  X1 <- rnorm(N, 175, 7)
  X2 <- rnorm(N, 30, 8)
  Ycont <- 0.5 * X1 - 0.3 * X2 + 10 + rnorm(N, 0, 6)
  Yord <- cut(
    Ycont,
    breaks = quantile(Ycont),
    include.lowest = TRUE,
    labels = c("--", "-", "+", "++"),
    ordered = TRUE
  )
  dfOrd <- data.frame(X1, X2, Yord)
  m2 <- vglm(Yord ~ X1 + X2, family = propodds, data = dfOrd)

  test_that("ggpredict", {
    p <- ggpredict(m2, terms = "X1")
    expect_equal(p$predicted[1], 0.2633227, tolerance = 1e-3)
    expect_equal(nrow(p), 27)
    p <- ggpredict(m2, terms = "X1", ci = NA)
    expect_equal(p$predicted[1], 0.7366773, tolerance = 1e-3)
    expect_equal(nrow(p), 36)
  })

  data(pneumo)
  pneumo <- transform(pneumo, let = log(exposure.time))
  m3 <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)

  test_that("ggpredict", {
    p <- ggpredict(m3, "let")
    expect_equal(p$predicted[1], 0.005992263, tolerance = 1e-3)
    expect_equal(nrow(p), 16)
    p <- ggpredict(m3, "let", ci = NA)
    expect_equal(p$predicted[1], 0.9940077, tolerance = 1e-3)
    expect_equal(nrow(p), 24)
  })
}

# ==== ggeffects/tests/testthat/test-offset_zeroinfl.R ====

if (suppressWarnings(
  require("testthat") &&
    require("ggeffects") &&
    require("pscl") &&
    getRversion() > "3.5")) {
  # Generate some zero-inflated data
  set.seed(123)
  N <- 100                # Samples
  x <- runif(N, 0, 10)    # Predictor
  off <- rgamma(N, 3, 2)  # Offset variable
  yhat <- -1 + x * 0.5 + log(off)  # Prediction on log scale
  dat <- data.frame(y = NA, x, logOff = log(off))  # Storage dataframe
  dat$y <- rpois(N, exp(yhat))  # Poisson process
  dat$y <- ifelse(rbinom(N, 1, 0.3), 0, dat$y)  # Zero-inflation process

  # Fit zeroinfl model using 2 methods of offset input
  m1 <- zeroinfl(y ~ offset(logOff) + x | 1, data = dat, dist = "poisson")
  m2 <- zeroinfl(y ~ x | 1, data = dat, offset = logOff, dist = "poisson")

  # Fit zeroinfl model without offset data
  m3 <- zeroinfl(y ~ x | 1, data = dat, dist = "poisson")

  test_that("offset-zeroinfl-1", {
    pr <- ggpredict(m1, "x")
    expect_equal(ncol(pr), 6)
    expect_equal(
      colnames(pr),
      c("x", "predicted", "std.error", "conf.low", "conf.high", "group")
    )
    expect_equal(
      pr$conf.low,
      c(0.38151, 0.64241, 1.08141, 1.81951, 3.05894, 5.1351, 8.59457,
        14.28775, 23.41852, 37.68751, 59.9237),
      tolerance = 1e-3
    )
  })

  if (utils::packageVersion("insight") > "0.10.0") {
    test_that("offset-zeroinfl-2", {
      pr <- ggpredict(m2, "x")
      expect_equal(ncol(pr), 6)
      expect_equal(
        colnames(pr),
        c("x", "predicted", "std.error", "conf.low", "conf.high", "group")
      )
      expect_equal(
        pr$conf.low,
        c(0.38151, 0.64241, 1.08141, 1.81951, 3.05894, 5.1351, 8.59457,
          14.28775, 23.41852, 37.68751, 59.9237),
        tolerance = 1e-3
      )
    })
  }

  test_that("offset-zeroinfl-3", {
    pr <- ggpredict(m3, "x")
    expect_equal(ncol(pr), 6)
    expect_equal(
      colnames(pr),
      c("x", "predicted", "std.error", "conf.low", "conf.high", "group")
    )
    expect_equal(
      pr$conf.low,
      c(0.76538, 1.21064, 1.91433, 3.02552, 4.77779, 7.53369, 11.84387,
        18.49956, 28.52172, 43.24606, 64.8289),
      tolerance = 1e-3
    )
  })
}

# ==== ggeffects/tests/testthat/test-rq.R ====

if (suppressWarnings(
  require("testthat") &&
    require("ggeffects") &&
    require("quantreg") &&
    getRversion() >= "3.6.0"
)) {
  data(stackloss)
  m1 <- rq(stack.loss ~ Air.Flow + Water.Temp, data = stackloss, tau = .25)

  test_that("ggpredict, rq", {
    expect_warning(pr <- ggpredict(m1, "Air.Flow"))
    expect_equal(pr$predicted[1], 10.09524, tolerance = 1e-4)
  })
  test_that("ggeffect, rq", {
    expect_null(ggeffect(m1, "Air.Flow"))
  })

  test_that("ggemmeans, rq", {
    expect_null(ggemmeans(m1, "Air.Flow"))
  })
}

# ==== ggeffects/tests/testthat/test-zeroinfl.R ====

if (suppressWarnings(
  require("testthat") &&
    require("ggeffects") &&
    require("glmmTMB") &&
    require("pscl")
)) {
  data(Salamanders)
  m1 <- zeroinfl(count ~ mined | mined, dist = "poisson", data = Salamanders)
  m2 <- hurdle(count ~ mined | mined, dist = "poisson", zero.dist = "poisson", data = Salamanders)
  m3 <- hurdle(count ~ mined | mined, dist = "poisson", zero.dist = "binomial", data = Salamanders)
  m4 <- hurdle(count ~ mined | mined, dist = "poisson", zero.dist = "binomial", link = "log", data = Salamanders)
  m5 <- suppressWarnings(zeroinfl(count ~ mined | mined, dist = "negbin", link = "log", data = Salamanders))

  test_that("ggpredict, pscl", {
    expect_s3_class(ggpredict(m1, "mined", type = "fe"), "data.frame")
    expect_s3_class(ggpredict(m1, "mined", type = "fe.zi"), "data.frame")
    expect_s3_class(ggpredict(m2, "mined", type = "fe"), "data.frame")
    expect_s3_class(ggpredict(m2, "mined", type = "fe.zi"), "data.frame")
    expect_s3_class(ggpredict(m3, "mined", type = "fe"), "data.frame")
    expect_s3_class(ggpredict(m3, "mined", type = "fe.zi"), "data.frame")
    expect_s3_class(ggpredict(m4, "mined", type = "fe"), "data.frame")
    expect_s3_class(ggpredict(m4, "mined", type = "fe.zi"), "data.frame")
    expect_s3_class(ggpredict(m5, "mined", type = "fe"), "data.frame")
    expect_s3_class(ggpredict(m5, "mined", type = "fe.zi"), "data.frame")
  })

  test_that("ggpredict, pscl", {
    skip_on_cran()
    set.seed(123)
    pr <- ggpredict(m1, "mined", type = "fe.zi")
    expect_equal(pr$conf.low, c(0.1731, 2.0172), tolerance = 1e-3)
    model <- zeroinfl(count ~ mined * spp | mined * spp, dist = "poisson", data = Salamanders)
    set.seed(123)
    pr <- ggpredict(model, c("mined", "spp"), type = "fe.zi")
    expect_equal(
      pr$conf.low,
      c(0, 0, 0.03704, 1e-05, 1e-05, 0.14815, 0.13418, 1.61886, 0.04808,
        1.81329, 0.48571, 3.07055, 3.1093, 1.33136),
      tolerance = 1e-2
    )
  })

  test_that("ggemmeans, pscl", {
    expect_s3_class(ggemmeans(m1, "mined", type = "fe"), "data.frame")
    expect_s3_class(ggemmeans(m1, "mined", type = "fe.zi"), "data.frame")
    expect_s3_class(ggemmeans(m2, "mined", type = "fe"), "data.frame")
    expect_s3_class(ggemmeans(m2, "mined", type = "fe.zi"), "data.frame")
    expect_s3_class(ggemmeans(m3, "mined", type = "fe"), "data.frame")
    expect_s3_class(ggemmeans(m3, "mined", type = "fe.zi"), "data.frame")
    expect_s3_class(ggemmeans(m4, "mined", type = "fe"), "data.frame")
    expect_s3_class(ggemmeans(m4, "mined", type = "fe.zi"), "data.frame")
    expect_s3_class(ggemmeans(m5, "mined", type = "fe"), "data.frame")
    expect_s3_class(ggemmeans(m5, "mined", type = "fe.zi"), "data.frame")
  })

  test_that("compare, pscl", {
    p1 <- ggemmeans(m1, "mined", type = "fe")
    p2 <- ggpredict(m1, "mined", type = "fe")
    expect_equal(p1$predicted[1], p2$predicted[1], tolerance = 1e-3)
    p1 <- ggemmeans(m1, "mined", type = "fe.zi")
    p2 <- ggpredict(m1, "mined", type = "fe.zi")
    expect_equal(p1$predicted[1], p2$predicted[1], tolerance = 1e-3)
    p1 <- ggemmeans(m2, "mined", type = "fe")
    p2 <- ggpredict(m2, "mined", type = "fe")
    expect_equal(p1$predicted[1], p2$predicted[1], tolerance = 1e-3)
    p1 <- ggemmeans(m2, "mined", type = "fe.zi")
    p2 <- ggpredict(m2, "mined", type = "fe.zi")
    expect_equal(p1$predicted[1], p2$predicted[1], tolerance = 1e-3)
    p1 <- ggemmeans(m5, "mined", type = "fe")
    p2 <- ggpredict(m5, "mined", type = "fe")
    expect_equal(p1$predicted[1], p2$predicted[1], tolerance = 1e-3)
    p1 <- ggemmeans(m5, "mined", type = "fe.zi")
    p2 <- ggpredict(m5, "mined", type = "fe.zi")
    expect_equal(p1$predicted[1], p2$predicted[1], tolerance = 1e-3)
  })

  # Generate some data
  set.seed(123)
  N <- 100                # Samples
  x <- runif(N, 0, 5)     # Predictor 1
  z <- runif(N, 0, 5)     # Predictor 2
  off <- rgamma(N, 3, 2)  # Offset variable
  yhat <- -1 + x * 0.2 + z * -0.2 + z * x * 0.2 + log(off)  # Prediction on log scale
  dat <- data.frame(y = NA, x, z, logOff = log(off))  # Storage dataframe
  dat$y <- rpois(N, exp(yhat))  # Poisson process
  dat$y <- ifelse(rbinom(N, 1, 0.3), 0, dat$y)  # Zero-inflation process

  # Fit zeroinfl and glm model
  # Interaction b/w x and z
  model <- zeroinfl(y ~ offset(logOff) + x * z | 1, data = dat, dist = "poisson")

  test_that("pscl, offset, interaction and CI", {
    pr <- ggpredict(model, c("x", "z"))
    expect_equal(
      pr$conf.low,
      c(0.10175, 0.10842, 0.07738, 0.15311, 0.17543, 0.14137, 0.2299,
        0.28352, 0.25811, 0.34404, 0.45742, 0.47084, 0.51189, 0.73575,
        0.85762, 0.75364, 1.17695, 1.55786, 1.08708, 1.86238, 2.81383,
        1.51007, 2.8848, 5.01418, 1.9918, 4.32397, 8.65455, 2.51388,
        6.28353, 14.25571, 3.09229, 8.96366, 22.76232),
      tolerance = 1e-3
    )
  })
}

# ==== ggeffects/tests/testthat/test-glmer.R ====

.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest) {
  if (suppressWarnings(
    require("testthat") &&
      require("ggeffects") &&
      require("lme4") &&
      require("glmmTMB")
  )) {
    # glmer ----
    data(efc_test)
    fit <- glmer(
      negc7d ~ c12hour + e42dep + c161sex + c172code + (1 | grp),
      data = efc_test,
      family = binomial(link = "logit")
    )

    test_that("ggpredict, glmer", {
      pr <- ggpredict(fit, "c12hour")
      expect_equal(
        pr$predicted,
        c(0.34217, 0.34406, 0.34596, 0.34787, 0.34978, 0.3517, 0.35362,
          0.35554, 0.35747, 0.35941, 0.36135, 0.36329, 0.36524, 0.36719,
          0.36915, 0.37111, 0.37307, 0.37504, 0.37702, 0.37899, 0.38098,
          0.38296, 0.38495, 0.38694, 0.38894, 0.39094, 0.39295, 0.39496,
          0.39697, 0.39898, 0.401, 0.40302, 0.40505, 0.40708, 0.40911),
        tolerance = 1e-3,
        ignore_attr = TRUE)
      expect_s3_class(ggpredict(fit, "c12hour"), "data.frame")
      expect_s3_class(ggpredict(fit, c("c12hour", "c161sex")), "data.frame")
      expect_s3_class(ggpredict(fit, c("c12hour", "c161sex", "c172code")), "data.frame")
      expect_s3_class(ggpredict(fit, "c12hour", type = "re"), "data.frame")
      expect_s3_class(ggpredict(fit, c("c12hour", "c161sex"), type = "re"),
"data.frame") expect_s3_class(ggpredict(fit, c("c12hour", "c161sex", "c172code"), type = "re"), "data.frame") }) test_that("ggeffect, glmer", { pr <- ggeffect(fit, "c12hour") expect_equal( pr$predicted, c(0.34217, 0.34406, 0.34596, 0.34787, 0.34978, 0.3517, 0.35362, 0.35554, 0.35747, 0.35941, 0.36135, 0.36329, 0.36524, 0.36719, 0.36915, 0.37111, 0.37307, 0.37504, 0.37702, 0.37899, 0.38098, 0.38296, 0.38495, 0.38694, 0.38894, 0.39094, 0.39295, 0.39496, 0.39697, 0.39898, 0.401, 0.40302, 0.40505, 0.40708, 0.40911), tolerance = 1e-3, ignore_attr = TRUE) expect_s3_class(ggeffect(fit, "c12hour"), "data.frame") expect_s3_class(ggeffect(fit, c("c12hour", "c161sex")), "data.frame") expect_s3_class(ggeffect(fit, c("c12hour", "c161sex", "c172code")), "data.frame") }) test_that("ggemmeans, glmer", { pr <- ggemmeans(fit, "c12hour") expect_equal( pr$predicted, c(0.34217, 0.34406, 0.34596, 0.34787, 0.34978, 0.3517, 0.35362, 0.35554, 0.35747, 0.35941, 0.36135, 0.36329, 0.36524, 0.36719, 0.36915, 0.37111, 0.37307, 0.37504, 0.37702, 0.37899, 0.38098, 0.38296, 0.38495, 0.38694, 0.38894, 0.39094, 0.39295, 0.39496, 0.39697, 0.39898, 0.401, 0.40302, 0.40505, 0.40708, 0.40911), tolerance = 1e-3, ignore_attr = TRUE) expect_s3_class(ggemmeans(fit, "c12hour"), "data.frame") expect_s3_class(ggemmeans(fit, c("c12hour", "c161sex")), "data.frame") expect_s3_class(ggemmeans(fit, c("c12hour", "c161sex", "c172code")), "data.frame") }) m <- insight::download_model("merMod_5") dd <<- insight::get_data(m) test_that("ggpredict, glmer.nb", { expect_s3_class(ggpredict(m, "f1"), "data.frame") expect_s3_class(ggpredict(m, "f1", type = "re"), "data.frame") expect_s3_class(ggpredict(m, c("f1", "f2")), "data.frame") expect_s3_class(ggpredict(m, c("f1", "f2"), type = "re"), "data.frame") expect_message(ggemmeans(m, "f1")) expect_s3_class(ggemmeans(m, c("f1", "f2")), "data.frame") }) test_that("ggpredict, glmer.nb-simulate", { expect_s3_class(ggpredict(m, c("f1", "f2"), type = "sim"), "data.frame") }) 
data(cbpp) cbpp$trials <- cbpp$size - cbpp$incidence m1 <- glmer(cbind(incidence, trials) ~ period + (1 | herd), data = cbpp, family = binomial) m2 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd), data = cbpp, family = binomial) test_that("ggpredict, glmer, cbind", { expect_s3_class(ggpredict(m1, "period"), "data.frame") expect_s3_class(ggpredict(m2, "period"), "data.frame") expect_s3_class(ggpredict(m1, "period", type = "re"), "data.frame") expect_s3_class(ggpredict(m2, "period", type = "re"), "data.frame") expect_s3_class(ggemmeans(m1, "period"), "data.frame") expect_s3_class(ggemmeans(m2, "period"), "data.frame") }) test_that("compare, glmer, cbind", { p1 <- ggpredict(m1, "period") p2 <- ggemmeans(m1, "period") expect_equal(p1$predicted[1], p2$predicted[1], tolerance = 1e-3) }) } } ggeffects/tests/testthat/test-poisson.R0000644000176200001440000000147414036250437020015 0ustar liggesusersif (suppressWarnings( require("testthat") && require("haven") && require("sjlabelled") && require("ggeffects") )) { # glm, poisson regression ---- data(efc) fit <- glm(tot_sc_e ~ neg_c_7 + c12hour + e42dep + c161sex + c172code, data = efc, family = poisson(link = "log")) test_that("ggpredict, glm", { expect_s3_class(ggpredict(fit, "c12hour"), "data.frame") expect_s3_class(ggpredict(fit, c("c12hour", "c161sex")), "data.frame") expect_s3_class(ggpredict(fit, c("c12hour", "c161sex", "c172code")), "data.frame") }) test_that("ggeffect, glm", { expect_s3_class(ggeffect(fit, "c12hour"), "data.frame") expect_s3_class(ggeffect(fit, c("c12hour", "c161sex")), "data.frame") expect_s3_class(ggeffect(fit, c("c12hour", "c161sex", "c172code")), "data.frame") }) } ggeffects/tests/testthat/test-ivreg.R0000644000176200001440000000136414046746430017441 0ustar liggesusers.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes" if (.runThisTest && suppressPackageStartupMessages((require("testthat") && require("ggeffects") && require("AER")))) { data(CigarettesSW) 
CigarettesSW$rprice <- with(CigarettesSW, price / cpi) CigarettesSW$rincome <- with(CigarettesSW, income / population / cpi) CigarettesSW$tdiff <- with(CigarettesSW, (taxs - tax) / cpi) m1 <- ivreg(log(packs) ~ log(rprice) + log(rincome) | log(rincome) + tdiff + I(tax / cpi), data = CigarettesSW, subset = year == "1995") test_that("ggpredict", { p <- ggpredict(m1, "rprice") expect_equal(p$predicted[1], 125.4697, tolerance = 1e-1) }) } ggeffects/tests/testthat/test-MCMCglmm.R0000644000176200001440000000162614046746430017722 0ustar liggesusers.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes" if (.runThisTest && require("testthat") && require("ggeffects") && require("MCMCglmm")) { set.seed(123) data(PlodiaPO) m1 <- MCMCglmm( PO ~ plate, random = ~FSfamily, data = PlodiaPO, verbose = FALSE, nitt = 1300, burnin = 300, thin = 1 ) test_that("ggpredict", { p <- ggpredict(m1, "plate") expect_equal(p$predicted[1], 1.055517, tolerance = 1e-3) }) data(iris) set.seed(123) iris$grp <<- as.factor(sample(1:3, 150, TRUE)) set.seed(123) model <- MCMCglmm( Sepal.Length ~ Sepal.Width + Species, random = ~grp, verbose = FALSE, data = iris ) test_that("ggpredict", { p <- ggpredict(model, "Sepal.Width") expect_equal(p$predicted[1], 3.862325, tolerance = 1e-3) expect_equal(p$conf.low[1], 3.494669, tolerance = 1e-3) }) } ggeffects/tests/testthat/test-gls.R0000644000176200001440000000115614036250437017105 0ustar liggesusersif (require("testthat") && require("ggeffects") && require("emmeans") && require("effects") && require("nlme")) { data(Ovary) m1 <- gls(follicles ~ Time, Ovary, correlation = corAR1(form = ~ 1 | Mare)) test_that("ggpredict", { p <- ggpredict(m1, "Time") expect_equal(p$predicted[1], 11.49246, tolerance = 1e-3) }) test_that("ggeffect", { p <- ggeffect(m1, "Time") expect_equal(p$predicted[1], 11.49246, tolerance = 1e-3) }) test_that("ggemmeans", { p <- ggemmeans(m1, "Time") expect_equal(p$predicted[1], 11.49246, tolerance = 1e-3) }) } 
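# For gls() models, the point predictions checked above are simply the
# fixed-effects linear predictor evaluated on the term's value grid; the
# AR(1) correlation structure affects standard errors, not the predictions
# themselves. Illustrative sketch (not the package internals; the Time
# value here is chosen arbitrarily):
if (require("nlme")) {
  data(Ovary)
  m_gls <- gls(follicles ~ Time, Ovary, correlation = corAR1(form = ~ 1 | Mare))
  b <- coef(m_gls)
  # manual prediction at an arbitrary Time value ...
  manual_pred <- unname(b[1] + b[2] * 0.5)
  # ... matches predict() on the same grid
  stopifnot(all.equal(manual_pred, as.numeric(predict(m_gls, newdata = data.frame(Time = 0.5)))))
}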
ggeffects/tests/testthat/test-gamm.R0000644000176200001440000000123114036250437017233 0ustar liggesusersunloadNamespace("gam") .runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes" if (.runThisTest && require("testthat") && require("ggeffects") && require("mgcv") && getRversion() > "3.5") { set.seed(0) dat <- gamSim(6, n = 200, scale = .2, dist = "poisson") m1 <- gamm( y ~ s(x0) + s(x1) + s(x2), family = poisson, data = dat, random = list(fac = ~ 1), verbosePQL = FALSE ) test_that("ggpredict", { p <- ggpredict(m1, "x1") expect_equal(p$predicted[1], 15.5450060160087, tolerance = 1e-3) expect_s3_class(ggpredict(m1, c("x1", "x2")), "data.frame") }) } ggeffects/tests/testthat.R0000644000176200001440000000144614100452307015335 0ustar liggesusersif (require("testthat")) { library(ggeffects) if (length(strsplit(packageDescription("ggeffects")$Version, "\\.")[[1]]) > 3) { Sys.setenv("RunAllggeffectsTests" = "yes") } else { Sys.setenv("RunAllggeffectsTests" = "no") } si <- Sys.info() osx <- tryCatch( { if (!is.null(si["sysname"])) { si["sysname"] == "Darwin" || grepl("^darwin", R.version$os) } else { FALSE } }, error = function(e) { FALSE } ) solaris <- tryCatch( { if (!is.null(si["sysname"])) { grepl("SunOS", si["sysname"], ignore.case = TRUE) } else { FALSE } }, error = function(e) { FALSE } ) if (!osx && !solaris) { test_check("ggeffects") } } ggeffects/vignettes/0000755000176200001440000000000014100515123014207 5ustar liggesusersggeffects/vignettes/content.Rmd0000644000176200001440000000430314100515003016322 0ustar liggesusers--- title: "Documentation of the ggeffects package" author: "Daniel Lüdecke" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Documentation of the ggeffects package} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r set-options, echo = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 3.5, message = FALSE, warning = FALSE) 
options(width = 800) ``` The documentation of the *ggeffects* package, including many examples, is [available online](https://strengejacke.github.io/ggeffects/). Here you can find the content of the available documents. Click on a link to visit the related website. ## General introductions into the *ggeffects* package - [Adjusted Predictions of Regression Models](https://strengejacke.github.io/ggeffects/articles/ggeffects.html) - [Definition of Marginal Effects](https://strengejacke.github.io/ggeffects/articles/introduction_marginal_effects.html) - [Adjusted Predictions at Specific Values](https://strengejacke.github.io/ggeffects/articles/introduction_effectsatvalues.html) - [Adjusted Predictions for Random Effects Models](https://strengejacke.github.io/ggeffects/articles/introduction_randomeffects.html) - [Adding Partial Residuals to Effects Plots](https://strengejacke.github.io/ggeffects/articles/introduction_partial_residuals.html) ## Creating and customizing plots - [Plotting Adjusted Predictions](https://strengejacke.github.io/ggeffects/articles/introduction_plotmethod.html) - [Customize Plot Appearance](https://strengejacke.github.io/ggeffects/articles/introduction_plotcustomize.html) ## Working examples - [Logistic Mixed Effects Model with Interaction Term](https://strengejacke.github.io/ggeffects/articles/practical_logisticmixedmodel.html) - [(Cluster) Robust Standard Errors](https://strengejacke.github.io/ggeffects/articles/practical_robustestimation.html) ## Technical details - [Difference between ggpredict() and ggemmeans()](https://strengejacke.github.io/ggeffects/articles/technical_differencepredictemmeans.html) - [Different output between Stata and ggeffects](https://strengejacke.github.io/ggeffects/articles/technical_stata.html) ggeffects/R/0000755000176200001440000000000014100452307012404 5ustar liggesusersggeffects/R/get_predictions_survival.R0000644000176200001440000000427414046746430017667 0ustar liggesusersget_predictions_survival <- 
function(model, fitfram, ci.lvl, type, terms, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # convert two-sided CI level into upper-tail probability if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 if (!requireNamespace("survival", quietly = TRUE)) { stop("Package `survival` required. Please install it.", call. = FALSE) } # get survival probabilities and cumulative hazards prdat <- survival::survfit( model, newdata = fitfram, se.fit = TRUE, conf.int = ci, ... ) # check what user requested and either return surv probs # or cumulative hazards, including CI if (type == "surv") { pr <- prdat$surv lower <- prdat$lower upper <- prdat$upper } else { pr <- prdat$cumhaz lower <- pr - stats::qnorm(ci) * prdat$std.err upper <- pr + stats::qnorm(ci) * prdat$std.err # ugly fix... pr[which(pr < 0)] <- 0 lower[which(lower < 0)] <- 0 upper[which(upper < 0)] <- 0 # copy standard errors attr(fitfram, "std.error") <- prdat$std.err } # Now we need the groups, as survfit() only returns numeric indices clean_terms <- .clean_terms(terms) ff <- fitfram[clean_terms] out <- do.call(rbind, lapply(seq_len(nrow(ff)), function(i) { dat <- data.frame( time = prdat$time, predicted = pr[, i], conf.low = lower[, i], conf.high = upper[, i] ) dat2 <- lapply(seq_len(ncol(ff)), function(.x) ff[i, .x]) names(dat2) <- clean_terms dat2 <- data.frame(dat2, stringsAsFactors = FALSE) cbind(dat[, 1, drop = FALSE], dat2, dat[, 2:4]) })) if (min(out$time, na.rm = TRUE) > 1) { time <- 1 predicted <- ifelse(type == "surv", 1, 0) conf.low <- ifelse(type == "surv", 1, 0) conf.high <- ifelse(type == "surv", 1, 0) dat <- expand.grid(lapply(out[clean_terms], unique)) names(dat) <- clean_terms out <- rbind( out, cbind(time = 1, dat, predicted = predicted, conf.low = conf.low, conf.high = conf.high) ) } # sanity check - don't return NA out[stats::complete.cases(out), ] } ggeffects/R/pool_predictions.R0000644000176200001440000000642314046746430016124 0ustar liggesusers#' Pool Predictions or
Estimated Marginal Means #' #' This function "pools" (i.e. combines) multiple \code{ggeffects} objects, in #' a similar fashion as \code{mice::pool()}. #' #' @param x A list of \code{ggeffects} objects, as returned by #' \code{\link{ggpredict}}, \code{\link{ggemmeans}} or \code{\link{ggeffect}}. #' @param ... Currently not used. #' #' @details Averaging of parameters follows Rubin's rules (\cite{Rubin, 1987, p. 76}). #' #' @references #' Rubin, D.B. (1987). Multiple Imputation for Nonresponse in Surveys. New York: John Wiley and Sons. #' #' @examples #' # example for multiple imputed datasets #' if (require("mice")) { #' data("nhanes2") #' imp <- mice(nhanes2, printFlag = FALSE) #' predictions <- lapply(1:5, function(i) { #' m <- lm(bmi ~ age + hyp + chl, data = complete(imp, action = i)) #' ggpredict(m, "age") #' }) #' pool_predictions(predictions) #' } #' @return A data frame with pooled predictions. #' @export pool_predictions <- function(x, ...) { # check input ----- obj_name <- deparse(substitute(x), width.cutoff = 500) original_x <- x if (!all(sapply(x, inherits, "ggeffects"))) { stop("'x' must be a list of 'ggeffects' objects, as returned by 'ggpredict()', 'ggemmeans()' or 'ggeffect()'.", call. = FALSE) } # check if all x-levels are identical if (!all(apply(as.data.frame(sapply(x, function(i) i$x), simplify = TRUE), 1, function(j) length(unique(j)) == 1))) { stop(paste0("Cannot pool predictions. The values of the focal term '", attributes(x[[1]])$terms,"' are not identical across predictions."), call. 
= FALSE) } # preparation ---- len <- length(x) ci <- attributes(x[[1]])$ci.lvl link_inv <- attributes(x[[1]])$link_inverse link_fun <- attributes(x[[1]])$link_function if (is.null(link_inv)) { link_inv <- function(x) x } if (is.null(link_fun)) { link_fun <- function(x) x } # pool predictions ----- pooled_predictions <- original_x[[1]] n_rows <- nrow(original_x[[1]]) for (i in 1:n_rows) { # pooled estimate pooled_pred <- unlist(lapply(original_x, function(j) { link_fun(j$predicted[i]) })) pooled_predictions$predicted[i] <- mean(pooled_pred, na.rm = TRUE) # pooled standard error pooled_se <- unlist(lapply(original_x, function(j) { j$std.error[i] })) ubar <- mean(pooled_se^2, na.rm = TRUE) tmp <- ubar + (1 + 1 / len) * stats::var(pooled_pred) pooled_predictions$std.error[i] <- sqrt(tmp) } # confidence intervals ---- alpha <- (1 + ci) / 2 fac <- stats::qnorm(alpha) pooled_predictions$conf.low <- link_inv(pooled_predictions$predicted - fac * pooled_predictions$std.error) pooled_predictions$conf.high <- link_inv(pooled_predictions$predicted + fac * pooled_predictions$std.error) # backtransform pooled_predictions$predicted <- link_inv(pooled_predictions$predicted) # constant values constant.values <- as.data.frame(do.call(rbind, lapply(original_x, function(x) { as.data.frame(attributes(x)$constant.values) }))) attr(pooled_predictions, "constant.values") <- lapply(constant.values, function(i) { if (is.numeric(i)) { mean(i) } else { unique(i) } }) pooled_predictions } ggeffects/R/utils_model_function.R0000644000176200001440000001064214046746430016773 0ustar liggesusers.get_model_function <- function(model) { # check class of fitted model lm_models <- c( "wblm", "wbm", "biglm", "speedlm", "gls", "ols", "ivreg", "gee", "plm", "lm", "rq", "rqss", "lmRob", "lm_robust", "lme", "truncreg", "nlmerMod", "lmerMod", "merModLmerTest", "rlmerMod", "bayesx", "mclogit" ) info <- insight::model_info(model) if (insight::is_multivariate(model)) { info <- info[[1]] } if (inherits(model, 
lm_models) && !inherits(model, "glm")) return("lm") else if (inherits(model, "coxph")) return("coxph") else if (inherits(model, "betareg")) return("betareg") else if (info$is_linear) return("lm") else return("glm") } get_predict_function <- function(model) { if (inherits(model, c("wblm", "wbm"))) return("wbm") else if (inherits(model, "mclogit")) return("mclogit") else if (inherits(model, "averaging")) return("averaging") else if (inherits(model, "orm")) return("orm") else if (inherits(model, "mlogit")) return("mlogit") else if (inherits(model, "glimML")) return("glimML") else if (inherits(model, "cgam")) return("cgam") else if (inherits(model, "ols")) return("ols") else if (inherits(model, "mixor")) return("mixor") else if (inherits(model, "glmx")) return("glmx") else if (inherits(model, "lrm")) return("lrm") else if (inherits(model, "lmrob")) return("lmrob") else if (inherits(model, "feglm")) return("feglm") else if (inherits(model, "glmrob")) return("glmrob") else if (inherits(model, "glmRob")) return("glmRob") else if (inherits(model, "brglm")) return("glm") else if (inherits(model, "bigglm")) return("glm") else if (inherits(model, "biglm")) return("lm") else if (inherits(model, "speedglm")) return("glm") else if (inherits(model, "speedlm")) return("lm") else if (inherits(model, "svyglm.nb")) return("svyglm.nb") else if (inherits(model, "svyglm")) return("svyglm") else if (inherits(model, "stanreg")) return("stanreg") else if (inherits(model, "brmsfit")) return("brmsfit") else if (inherits(model, "bayesx")) return("bayesx") else if (inherits(model, "gamlss")) return("gamlss") else if (inherits(model, "bamlss")) return("bamlss") else if (inherits(model, "gam")) return("gam") else if (inherits(model, c("tobit", "survreg"))) return("tobit") else if (inherits(model, "Gam")) return("Gam") else if (inherits(model, "MCMCglmm")) return("MCMCglmm") else if (inherits(model, "glmerMod")) return("glmer") else if (inherits(model, "glmmTMB")) return("glmmTMB") else if 
(inherits(model, "nlmerMod")) return("nlmer") else if (inherits(model, c("lmerMod", "merModLmerTest", "rlmerMod"))) return("lmer") else if (inherits(model, "lme")) return("lme") else if (inherits(model, "logistf")) return("logistf") else if (inherits(model, "ivreg")) return("ivreg") else if (inherits(model, "fixest")) return("fixest") else if (inherits(model, "gls")) return("gls") else if (inherits(model, "geeglm")) return("geeglm") else if (inherits(model, "clmm")) return("clmm") else if (inherits(model, "clm")) return("clm") else if (inherits(model, "clm2")) return("clm2") else if (inherits(model, "polr")) return("polr") else if (inherits(model, c("rq", "rqss"))) return("rq") else if (inherits(model, "gee")) return("gee") else if (inherits(model, "plm")) return("plm") else if (inherits(model, "negbin")) return("glm.nb") else if (inherits(model, "vgam")) return("vgam") else if (inherits(model, "vglm")) return("vglm") else if (inherits(model, "lm_robust")) return("lm") else if (inherits(model, "lmrob")) return("lm") else if (inherits(model, "lmRob")) return("lm") else if (inherits(model, "betareg")) return("betareg") else if (inherits(model, "truncreg")) return("truncreg") else if (inherits(model, "coxph")) return("coxph") else if (inherits(model, "brmultinom")) return("brmultinom") else if (inherits(model, "multinom")) return("multinom") else if (inherits(model, "bracl")) return("bracl") else if (inherits(model, "Zelig-relogit")) return("Zelig-relogit") else if (inherits(model, "zerotrunc")) return("zerotrunc") else if (inherits(model, "zeroinfl")) return("zeroinfl") else if (inherits(model, "hurdle")) return("hurdle") else if (inherits(model, "MixMod")) return("MixMod") else if (inherits(model, "glm")) return("glm") else if (inherits(model, "lm")) return("lm") else return("generic") } ggeffects/R/ggpredict.R0000644000176200001440000010066314100504730014503 0ustar liggesusers#' @title Marginal effects, adjusted predictions and estimated marginal means from 
regression models #' @name ggpredict #' #' @description #' The \pkg{ggeffects} package computes estimated marginal means (predicted values) for the #' response, at the margin of specific values or levels from certain model terms, #' i.e. it generates predictions from a model by holding the non-focal variables #' constant and varying the focal variable(s). \cr \cr #' \code{ggpredict()} uses \code{predict()} for generating predictions, #' while \code{ggeffect()} computes marginal effects by internally calling #' \code{effects::Effect()} and \code{ggemmeans()} uses \code{emmeans::emmeans()}. #' The result is returned as a consistent data frame. #' #' @param model A fitted model object, or a list of model objects. Any model #' that supports common methods like \code{predict()}, \code{family()} #' or \code{model.frame()} should work. For \code{ggeffect()}, any model #' that is supported by \CRANpkg{effects} should work, and for #' \code{ggemmeans()}, all models supported by \CRANpkg{emmeans} should work. #' @param terms Character vector (or a formula) with the names of those terms #' from \code{model}, for which predictions should be displayed. At least #' one term is required to calculate effects for certain terms; the maximum length is #' four terms, where the second to fourth term indicate the groups, i.e. #' predictions of the first term are grouped at the values or levels of the remaining #' terms. If \code{terms} is missing or \code{NULL}, adjusted predictions for each #' model term are calculated. It is also possible to define specific values for #' terms, at which adjusted predictions should be calculated (see 'Details'). #' All remaining covariates that are not specified in \code{terms} are held #' constant (see 'Details'). See also arguments \code{condition} and \code{typical}. #' @param ci.lvl Numeric, the level of the confidence intervals.
For \code{ggpredict()}, #' use \code{ci.lvl = NA}, if confidence intervals should not be calculated #' (for instance, due to computation time). Typically, confidence intervals #' based on the standard errors as returned by the \code{predict()} function #' are returned, assuming normal distribution (i.e. \code{+/- 1.96 * SE}). #' See introduction of \href{https://strengejacke.github.io/ggeffects/articles/ggeffects.html}{this vignette} #' for more details. #' @param type Character, only applies for survival models, mixed effects models #' and/or models with zero-inflation. \strong{Note:} For \code{brmsfit}-models #' with zero-inflation component, there is no \code{type = "zero_inflated"} nor #' \code{type = "zi_random"}; predicted values for \code{MixMod}-models from #' \pkg{GLMMadaptive} with zero-inflation component \emph{always} condition on #' the zero-inflation part of the model (see 'Details'). #' \describe{ #' \item{\code{"fixed"} (or \code{"fe"} or \code{"count"})}{ #' Predicted values are conditioned on the fixed effects or conditional #' model only (for mixed models: predicted values are on the population-level #' and \emph{confidence intervals} are returned). For instance, for models #' fitted with \code{zeroinfl} from \pkg{pscl}, this would return the #' predicted mean from the count component (without zero-inflation). #' For models with zero-inflation component, this type calls #' \code{predict(..., type = "link")} (however, predicted values are #' back-transformed to the response scale). #' } #' \item{\code{"random"} (or \code{"re"})}{ #' This only applies to mixed models, and \code{type = "random"} does not #' condition on the zero-inflation component of the model. \code{type = "random"} #' still returns population-level predictions, however, unlike \code{type = "fixed"}, #' intervals also consider the uncertainty in the variance parameters (the #' mean random effect variance, see \cite{Johnson et al. 
2014} for details) #' and hence can be considered as \emph{prediction intervals}. For models #' with zero-inflation component, this type calls #' \code{predict(..., type = "link")} (however, predicted values are #' back-transformed to the response scale). #' \cr \cr #' To get predicted values for each level of the random effects groups, add the #' name of the related random effect term to the \code{terms}-argument #' (for more details, see \href{https://strengejacke.github.io/ggeffects/articles/introduction_effectsatvalues.html}{this vignette}). #' } #' \item{\code{"zero_inflated"} (or \code{"fe.zi"} or \code{"zi"})}{ #' Predicted values are conditioned on the fixed effects and the zero-inflation #' component. For instance, for models fitted with \code{zeroinfl} #' from \pkg{pscl}, this would return the predicted response (\code{mu*(1-p)}) #' and for \pkg{glmmTMB}, this would return the expected value \code{mu*(1-p)} #' \emph{without} conditioning on random effects (i.e. random effect variances #' are not taken into account for the confidence intervals). For models with #' zero-inflation component, this type calls \code{predict(..., type = "response")}. #' See 'Details'. #' } #' \item{\code{"zi_random"} (or \code{"re.zi"} or \code{"zero_inflated_random"})}{ #' Predicted values are conditioned on the zero-inflation component and #' take the random effects uncertainty into account. For models fitted with #' \code{glmmTMB()}, \code{hurdle()} or \code{zeroinfl()}, this would return the #' expected value \code{mu*(1-p)}. For \pkg{glmmTMB}, prediction intervals #' also consider the uncertainty in the random effects variances. This #' type calls \code{predict(..., type = "response")}. See 'Details'. #' } #' \item{\code{"zi_prob"} (or \code{"zi.prob"})}{ #' Predicted zero-inflation probability. 
For \pkg{glmmTMB} models with #' zero-inflation component, this type calls \code{predict(..., type = "zlink")}; #' models from \pkg{pscl} call \code{predict(..., type = "zero")} and for #' \pkg{GLMMadaptive}, \code{predict(..., type = "zero_part")} is called. #' } #' \item{\code{"simulate"} (or \code{"sim"})}{ #' Predicted values and confidence resp. prediction intervals are #' based on simulations, i.e. calls to \code{simulate()}. This type #' of prediction takes all model uncertainty into account, including #' random effects variances. Currently supported models are objects of #' class \code{lm}, \code{glm}, \code{glmmTMB}, \code{wbm}, \code{MixMod} #' and \code{merMod}. See \code{...} for details on number of simulations. #' } #' \item{\code{"survival"} and \code{"cumulative_hazard"} (or \code{"surv"} and \code{"cumhaz"})}{ #' Applies only to \code{coxph}-objects from the \pkg{survial}-package and #' calculates the survival probability or the cumulative hazard of an event. #' } #' } #' @param typical Character vector, naming the function to be applied to the #' covariates over which the effect is "averaged". The default is "mean". #' See \code{?sjmisc::typical_value} for options. #' @param back.transform Logical, if \code{TRUE} (the default), predicted values #' for log- or log-log transformed responses will be back-transformed to #' original response-scale. #' @param ppd Logical, if \code{TRUE}, predictions for Stan-models are #' based on the posterior predictive distribution #' (\code{rstantools::posterior_predict()}). If \code{FALSE} (the #' default), predictions are based on posterior draws of the linear #' predictor (\code{rstantools::posterior_linpred()}). #' @param condition Named character vector, which indicates covariates that #' should be held constant at specific values. 
Unlike \code{typical}, which #' applies a function to the covariates to determine the value that is used #' to hold these covariates constant, \code{condition} can be used to define #' exact values, for instance \code{condition = c(covariate1 = 20, covariate2 = 5)}. #' See 'Examples'. #' @param interval Type of interval calculation, can either be \code{"confidence"} #' (default) or \code{"prediction"}. May be abbreviated. Unlike #' \emph{confidence intervals}, \emph{prediction intervals} include the #' residual variance (sigma^2). This argument is ignored for mixed models, #' as \code{interval = "prediction"} is equivalent to \code{type = "random"} #' (and \code{interval = "confidence"} is equivalent to \code{type = "fixed"}). #' Note that prediction intervals are not available for all models, but only #' for models that work with \code{insight::get_sigma()}. #' @param vcov.fun String, indicating the name of the \code{vcov*()}-function #' from the \pkg{sandwich} or \pkg{clubSandwich}-package, e.g. #' \code{vcov.fun = "vcovCL"}, which is used to compute (cluster) robust #' standard errors for predictions. If \code{NULL}, standard errors (and #' confidence intervals) for predictions are based on the standard errors as #' returned by the \code{predict()}-function. \strong{Note} that probably not #' all model objects that work with \code{ggpredict()} are also supported #' by the \pkg{sandwich} or \pkg{clubSandwich}-package. #' @param vcov.type Character vector, specifying the estimation type for the #' robust covariance matrix estimation (see \code{?sandwich::vcovHC} #' or \code{?clubSandwich::vcovCR} for details). #' @param vcov.args List of named vectors, used as additional arguments that #' are passed down to \code{vcov.fun}. #' @param ... 
For \code{ggpredict()}, further arguments passed down to #' \code{predict()}; for \code{ggeffect()}, further arguments passed #' down to \code{effects::Effect()}; and for \code{ggemmeans()}, #' further arguments passed down to \code{emmeans::emmeans()}. #' If \code{type = "sim"}, \code{...} may also be used to set the number of #' simulation, e.g. \code{nsim = 500}. #' #' @details #' \subsection{Supported Models}{ #' A list of supported models can be found at \url{https://github.com/strengejacke/ggeffects}. #' Support for models varies by function, i.e. although \code{ggpredict()}, #' \code{ggemmeans()} and \code{ggeffect()} support most models, some models #' are only supported exclusively by one of the three functions. #' } #' \subsection{Difference between \code{ggpredict()} and \code{ggeffect()} or \code{ggemmeans()}}{ #' \code{ggpredict()} calls \code{predict()}, while \code{ggeffect()} #' calls \code{effects::Effect()} and \code{ggemmeans()} calls #' \code{emmeans::emmeans()} to compute predicted values. Thus, effects returned #' by \code{ggpredict()} can be described as \emph{conditional effects} (i.e. #' these are conditioned on certain (reference) levels of factors), while #' \code{ggemmeans()} and \code{ggeffect()} return \emph{marginal means}, since #' the effects are "marginalized" (or "averaged") over the levels of factors. #' Therefore, \code{ggpredict()} and \code{ggeffect()} resp. \code{ggemmeans()} #' differ in how factors are held constant: \code{ggpredict()} uses the #' reference level, while \code{ggeffect()} and \code{ggemmeans()} compute a #' kind of "average" value, which represents the proportions of each factor's #' category. Use \code{condition} to set a specific level for factors in #' \code{ggemmeans()}, so factors are not averaged over their categories, #' but held constant at a given level. 
#' } #' \subsection{Marginal Effects and Adjusted Predictions at Specific Values}{ #' Specific values of model terms can be specified via the \code{terms}-argument. #' Indicating levels in square brackets allows for selecting only #' specific groups or values resp. value ranges. Term name and the start of #' the levels in brackets must be separated by a whitespace character, e.g. #' \code{terms = c("age", "education [1,3]")}. Numeric ranges, separated #' with colon, are also allowed: \code{terms = c("education", "age [30:60]")}. #' The stepsize for range can be adjusted using `by`, e.g. #' \code{terms = "age [30:60 by=5]"}. #' \cr \cr #' The \code{terms}-argument also supports the same shortcuts as the #' \code{values}-argument in \code{values_at()}. So #' \code{terms = "age [meansd]"} would return predictions for the values #' one standard deviation below the mean age, the mean age and #' one SD above the mean age. \code{terms = "age [quart2]"} would calculate #' predictions at the value of the lower, median and upper quartile of age. #' \cr \cr #' Furthermore, it is possible to specify a function name. Values for #' predictions will then be transformed, e.g. \code{terms = "income [exp]"}. #' This is useful when model predictors were transformed for fitting the #' model and should be back-transformed to the original scale for predictions. #' It is also possible to define own functions (see #' \href{https://strengejacke.github.io/ggeffects/articles/introduction_effectsatvalues.html}{this vignette}). #' \cr \cr #' Instead of a function, it is also possible to define the name of a variable #' with specific values, e.g. to define a vector \code{v = c(1000, 2000, 3000)} and #' then use \code{terms = "income [v]"}. #' \cr \cr #' You can take a random sample of any size with \code{sample=n}, e.g #' \code{terms = "income [sample=8]"}, which will sample eight values from #' all possible values of the variable \code{income}. 
This option is especially #' useful for plotting predictions for certain levels of random-effects #' groups, where the group factor has too many levels to be plotted #' completely. For more details, see \href{https://strengejacke.github.io/ggeffects/articles/introduction_effectsatvalues.html}{this vignette}. #' \cr \cr #' Finally, for numeric vectors for which no specific values are given, a #' "pretty range" is calculated (see \code{\link{pretty_range}}), to avoid #' memory allocation problems for vectors with many unique values. If a numeric #' vector is specified as second or third term (i.e. if this vector represents #' a grouping structure), representative values (see \code{\link{values_at}}) #' are chosen (unless other values are specified). If all values of a numeric #' vector should be used to compute predictions, you may use e.g. #' \code{terms = "age [all]"}. See also the package vignettes. #' \cr \cr #' To create a pretty range that is smaller or larger than the default #' range (i.e. the range used if no specific values are given), use the \code{n}-tag, #' e.g. \code{terms="age [n=5]"} or \code{terms="age [n=12]"}. Larger #' values for \code{n} return a larger range of predicted values. #' } #' \subsection{Holding covariates at constant values}{ #' For \code{ggpredict()}, \code{expand.grid()} is called on all unique #' combinations of \code{model.frame(model)[, terms]} and used as the #' \code{newdata}-argument for \code{predict()}. In this case, #' all remaining covariates that are not specified in \code{terms} are #' held constant: numeric values are set to the mean (unless changed with #' the \code{condition} or \code{typical}-argument), factors are set to their #' reference level (which may also be changed with \code{condition}) and character #' vectors to their mode (most common element).
#' \cr \cr #' \code{ggeffect()} and \code{ggemmeans()}, by default, set remaining numeric #' covariates to their mean value, while for factors, a kind of "average" value, #' which represents the proportions of each factor's category, is used. For #' \code{ggemmeans()}, use \code{condition} to set a specific level for #' factors so that these are not averaged over their categories, but held #' constant at the given level. #' } #' \subsection{Bayesian Regression Models}{ #' \code{ggpredict()} also works with \strong{Stan}-models from #' the \CRANpkg{rstanarm} or \CRANpkg{brms}-package. The predicted #' values are the median value of all drawn posterior samples. The #' confidence intervals for Stan-models are Bayesian predictive intervals. #' By default (i.e. \code{ppd = FALSE}), the predictions are based on #' \code{rstantools::posterior_linpred()} and hence have some #' limitations: the uncertainty of the error term is not taken into #' account. The recommendation is to use the posterior predictive #' distribution (\code{rstantools::posterior_predict()}). #' } #' \subsection{Zero-Inflated and Zero-Inflated Mixed Models with brms}{ #' Models of class \code{brmsfit} always condition on the zero-inflation #' component, if the model has such a component. Hence, there is no #' \code{type = "zero_inflated"} nor \code{type = "zi_random"} for \code{brmsfit}-models, #' because predictions are based on draws of the posterior distribution, #' which already account for the zero-inflation part of the model. #' } #' \subsection{Zero-Inflated and Zero-Inflated Mixed Models with glmmTMB}{ #' If \code{model} is of class \code{glmmTMB}, \code{hurdle}, \code{zeroinfl} #' or \code{zerotrunc}, simulations from a multivariate normal distribution #' (see \code{?MASS::mvrnorm}) are drawn to calculate \code{mu*(1-p)}. #' Confidence intervals are then based on quantiles of these results. 
For #' \code{type = "zi_random"}, prediction intervals also take the uncertainty in #' the random-effect parameters into account (see also Brooks et al. 2017, #' pp.391-392 for details). #' \cr \cr #' An alternative for models fitted with \pkg{glmmTMB} that takes all model #' uncertainties into account is to use simulations based on \code{simulate()}, which #' is used when \code{type = "sim"} (see Brooks et al. 2017, pp.392-393 for #' details). #' } #' \subsection{MixMod-models from GLMMadaptive}{ #' Predicted values for the fixed effects component (\code{type = "fixed"} or #' \code{type = "zero_inflated"}) are based on \code{predict(..., type = "mean_subject")}, #' while predicted values for random effects components (\code{type = "random"} or #' \code{type = "zi_random"}) are calculated with \code{predict(..., type = "subject_specific")} #' (see \code{?GLMMadaptive::predict.MixMod} for details). The latter option #' requires the response variable to be defined in the \code{newdata}-argument #' of \code{predict()}, which will be set to its typical value (see #' \code{?sjmisc::typical_value}). #' } #' #' @references \itemize{ #' \item Brooks ME, Kristensen K, Benthem KJ van, Magnusson A, Berg CW, Nielsen A, et al. glmmTMB Balances Speed and Flexibility Among Packages for Zero-inflated Generalized Linear Mixed Modeling. The R Journal. 2017;9: 378-400. #' \item Johnson PC, O'Hara RB. 2014. Extension of Nakagawa & Schielzeth's R2GLMM to random slopes models. Methods Ecol Evol, 5: 944-946. (\doi{10.1111/2041-210X.12225}) #' } #' #' @note #' \subsection{Multinomial Models}{ #' \code{polr}- or \code{clm}-models, or more generally speaking, models with #' ordinal or multinomial outcomes, have an additional column #' \code{response.level}, which indicates with which level of the response #' variable the predicted values are associated.
#' } #' \subsection{Printing Results}{ #' The \code{print()}-method gives a clean output (especially for predictions #' by groups), and indicates at which values covariates were held constant. #' Furthermore, the \code{print()}-method has the arguments \code{digits} and #' \code{n} to control the number of decimals and lines to be printed, and an #' argument \code{x.lab} to print factor levels instead of numeric values #' if \code{x} is a factor. #' } #' \subsection{Limitations}{ #' The support for some models, for example from package \pkg{MCMCglmm}, is #' rather experimental and may fail for certain models. If you encounter #' any errors, please file an issue at \url{https://github.com/strengejacke/ggeffects/issues}. #' } #' #' @return A data frame (with \code{ggeffects} class attribute) with consistent #' data columns: #' \describe{ #' \item{\code{x}}{the values of the first term in \code{terms}, used as x-position in plots.} #' \item{\code{predicted}}{the predicted values of the response, used as y-position in plots.} #' \item{\code{std.error}}{the standard error of the predictions. \emph{Note that the standard errors are always on the link-scale, and not back-transformed for non-Gaussian models!}} #' \item{\code{conf.low}}{the lower bound of the confidence interval for the predicted values.} #' \item{\code{conf.high}}{the upper bound of the confidence interval for the predicted values.} #' \item{\code{group}}{the grouping level from the second term in \code{terms}, used as grouping-aesthetics in plots.} #' \item{\code{facet}}{the grouping level from the third term in \code{terms}, used to indicate facets in plots.} #' } #' The estimated marginal means (predicted values) are always on the #' response scale! \cr \cr #' For proportional odds logistic regression (see \code{?MASS::polr}) #' or
cumulative link models (e.g., see \code{?ordinal::clm}), #' an additional column \code{response.level} is returned, which indicates #' the grouping of predictions based on the level of the model's response. #' \cr \cr Note that for convenience reasons, the columns for the intervals #' are always named \code{conf.low} and \code{conf.high}, even though #' for Bayesian models credible or highest posterior density intervals #' are returned. #' #' @examples #' library(sjlabelled) #' data(efc) #' fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc) #' #' ggpredict(fit, terms = "c12hour") #' ggpredict(fit, terms = c("c12hour", "c172code")) #' ggpredict(fit, terms = c("c12hour", "c172code", "c161sex")) #' #' # specified as formula #' ggpredict(fit, terms = ~ c12hour + c172code + c161sex) #' #' # only range of 40 to 60 for variable 'c12hour' #' ggpredict(fit, terms = "c12hour [40:60]") #' #' # using "summary()" shows that covariate "neg_c_7" is held #' # constant at a value of 11.84 (its mean value). To use a #' # different value, use "condition" #' ggpredict(fit, terms = "c12hour [40:60]", condition = c(neg_c_7 = 20)) #' #' # to plot ggeffects-objects, you can use the 'plot()'-function. #' # the following examples show how to build your ggplot by hand. 
#' #' \dontrun{ #' # plot predicted values, remaining covariates held constant #' library(ggplot2) #' mydf <- ggpredict(fit, terms = "c12hour") #' ggplot(mydf, aes(x, predicted)) + #' geom_line() + #' geom_ribbon(aes(ymin = conf.low, ymax = conf.high), alpha = .1) #' #' # three variables, so we can use facets and groups #' mydf <- ggpredict(fit, terms = c("c12hour", "c161sex", "c172code")) #' ggplot(mydf, aes(x = x, y = predicted, colour = group)) + #' stat_smooth(method = "lm", se = FALSE) + #' facet_wrap(~facet, ncol = 2) #' #' # select specific levels for grouping terms #' mydf <- ggpredict(fit, terms = c("c12hour", "c172code [1,3]", "c161sex")) #' ggplot(mydf, aes(x = x, y = predicted, colour = group)) + #' stat_smooth(method = "lm", se = FALSE) + #' facet_wrap(~facet) + #' labs( #' y = get_y_title(mydf), #' x = get_x_title(mydf), #' colour = get_legend_title(mydf) #' ) #' #' # level indication also works for factors with non-numeric levels #' # and in combination with numeric levels for other variables #' data(efc) #' efc$c172code <- sjlabelled::as_label(efc$c172code) #' fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc) #' ggpredict(fit, terms = c("c12hour", #' "c172code [low level of education, high level of education]", #' "c161sex [1]")) #' #' # use categorical value on x-axis, use axis-labels, add error bars #' dat <- ggpredict(fit, terms = c("c172code", "c161sex")) #' ggplot(dat, aes(x, predicted, colour = group)) + #' geom_point(position = position_dodge(.1)) + #' geom_errorbar( #' aes(ymin = conf.low, ymax = conf.high), #' position = position_dodge(.1) #' ) + #' scale_x_discrete(breaks = 1:3, labels = get_x_labels(dat)) #' #' # 3-way-interaction with 2 continuous variables #' data(efc) #' # make categorical #' efc$c161sex <- as_factor(efc$c161sex) #' fit <- lm(neg_c_7 ~ c12hour * barthtot * c161sex, data = efc) #' # select only levels 30, 50 and 70 from continuous variable Barthel-Index #' dat <- ggpredict(fit, terms = 
c("c12hour", "barthtot [30,50,70]", "c161sex")) #' ggplot(dat, aes(x = x, y = predicted, colour = group)) + #' stat_smooth(method = "lm", se = FALSE, fullrange = TRUE) + #' facet_wrap(~facet) + #' labs( #' colour = get_legend_title(dat), #' x = get_x_title(dat), #' y = get_y_title(dat), #' title = get_title(dat) #' ) #' #' # or with ggeffects' plot-method #' plot(dat, ci = FALSE)} #' #' # predictions for polynomial terms #' data(efc) #' fit <- glm( #' tot_sc_e ~ c12hour + e42dep + e17age + I(e17age^2) + I(e17age^3), #' data = efc, #' family = poisson() #' ) #' ggeffect(fit, terms = "e17age") #' @export ggpredict <- function(model, terms, ci.lvl = .95, type = "fe", typical = "mean", condition = NULL, back.transform = TRUE, ppd = FALSE, vcov.fun = NULL, vcov.type = NULL, vcov.args = NULL, interval = "confidence", ...) { # check arguments type <- match.arg(type, choices = c("fe", "fixed", "count", "re", "random", "fe.zi", "zero_inflated", "re.zi", "zi_random", "zero_inflated_random", "zi.prob", "zi_prob", "sim", "simulate", "surv", "survival", "cumhaz", "cumulative_hazard", "sim_re", "simulate_random", "debug")) interval <- match.arg(interval, choices = c("confidence", "prediction")) model.name <- deparse(substitute(model)) type <- switch( type, "fixed" = , "count" = "fe", "random" = "re", "zi" = , "zero_inflated" = "fe.zi", "zi_random" = , "zero_inflated_random" = "re.zi", "zi_prob" = "zi.prob", "survival" = "surv", "cumulative_hazard" = "cumhaz", "simulate" = "sim", "simulate_random" = "sim_re", type ) # check if terms are a formula if (!missing(terms) && !is.null(terms) && inherits(terms, "formula")) { terms <- all.vars(terms) } # tidymodels? 
if (inherits(model, "model_fit")) { model <- model$fit } # for gamm/gamm4 objects, we have a list with two items, mer and gam # extract just the mer-part then if (is.gamm(model) || is.gamm4(model)) model <- model$gam if (inherits(model, "list") && !inherits(model, c("bamlss", "maxLik"))) { res <- lapply(model, function(.x) { ggpredict_helper( model = .x, terms = terms, ci.lvl = ci.lvl, type = type, typical = typical, ppd = ppd, condition = condition, back.transform = back.transform, vcov.fun = vcov.fun, vcov.type = vcov.type, vcov.args = vcov.args, interval = interval, ... ) }) class(res) <- c("ggalleffects", class(res)) } else { if (missing(terms) || is.null(terms)) { predictors <- insight::find_predictors(model, effects = "fixed", component = "conditional", flatten = TRUE) res <- lapply( predictors, function(.x) { tmp <- ggpredict_helper( model = model, terms = .x, ci.lvl = ci.lvl, type = type, typical = typical, ppd = ppd, condition = condition, back.transform = back.transform, vcov.fun = vcov.fun, vcov.type = vcov.type, vcov.args = vcov.args, interval = interval, ... ) tmp$group <- .x tmp } ) names(res) <- predictors class(res) <- c("ggalleffects", class(res)) } else { res <- ggpredict_helper( model = model, terms = terms, ci.lvl = ci.lvl, type = type, typical = typical, ppd = ppd, condition = condition, back.transform = back.transform, vcov.fun = vcov.fun, vcov.type = vcov.type, vcov.args = vcov.args, interval = interval, ... ) } } if (!is.null(res)) attr(res, "model.name") <- model.name res } # workhorse that computes the predictions # and creates the tidy data frames ggpredict_helper <- function(model, terms, ci.lvl, type, typical, ppd, condition, back.transform, vcov.fun, vcov.type, vcov.args, interval, ...) 
{ # check class of fitted model, to make sure we have just one class-attribute # (while "inherits()" may return multiple attributes) model_class <- get_predict_function(model) # check terms argument, to make sure that terms were not misspelled # and are indeed existing in the data terms <- .check_vars(terms, model) cleaned_terms <- .clean_terms(terms) # check model family model_info <- .get_model_info(model) if (model_class == "coxph" && type == "surv") model_info$is_binomial <- TRUE # get model frame model_frame <- insight::get_data(model) # expand model frame to data grid of unique combinations data_grid <- .data_grid( model = model, model_frame = model_frame, terms = terms, value_adjustment = typical, condition = condition ) # save original frame, for labels, and original terms original_model_frame <- model_frame original_terms <- terms # clear argument from brackets terms <- cleaned_terms # compute predictions here ----- prediction_data <- select_prediction_method( model_class = model_class, model = model, data_grid = data_grid, ci.lvl = ci.lvl, type = type, model_info = model_info, ppd = ppd, terms = original_terms, value_adjustment = typical, vcov.fun = vcov.fun, vcov.type = vcov.type, vcov.args = vcov.args, condition = condition, interval = interval, ... 
) # return if no predicted values have been computed if (is.null(prediction_data)) return(NULL) # remember if grouping variable was numeric, possibly needed for plotting attr(prediction_data, "continuous.group") <- attr(data_grid, "continuous.group") # for survival probabilities or cumulative hazards, we need # the "time" variable if (model_class == "coxph" && type %in% c("surv", "cumhaz")) { terms <- c("time", terms) cleaned_terms <- c("time", cleaned_terms) } result <- .post_processing_predictions( model = model, prediction_data = prediction_data, original_model_frame = original_model_frame, cleaned_terms = cleaned_terms ) # check if outcome is log-transformed, and if so, # back-transform predicted values to response scale result <- .back_transform_response(model, result, back.transform) # add raw data as well attr(result, "rawdata") <- .get_raw_data(model, original_model_frame, terms) .post_processing_labels( model = model, result = result, original_model_frame = original_model_frame, data_grid = data_grid, cleaned_terms = cleaned_terms, original_terms = original_terms, model_info = model_info, type = type, prediction.interval = attr(prediction_data, "prediction.interval", exact = TRUE), at_list = .data_grid( model = model, model_frame = original_model_frame, terms = original_terms, value_adjustment = typical, condition = condition, show_pretty_message = FALSE, emmeans.only = TRUE ), condition = condition, ci.lvl = ci.lvl ) } ggeffects/R/print.R0000644000176200001440000001771614046746430013713 0ustar liggesusers#' @export print.ggeffects <- function(x, n = 10, digits = 2, x.lab = FALSE, ...) 
{ # convert to factor if (isTRUE(x.lab)) { labs <- sjlabelled::get_labels( x$x, attr.only = TRUE, values = "n", non.labelled = FALSE, drop.na = TRUE ) vals <- x$x x$x <- format(sjlabelled::as_label(x$x), justify = "right") if (!is.null(labs) && !is.null(names(labs))) { labs <- labs[match(vals, names(labs))] labs <- format(sprintf("[%s]", names(labs)), justify = "left") x$x <- paste(labs, x$x, sep = " ") } } # check for standard errors *before* removing the column, else the # link-scale message at the end of this function is never shown has_se <- .obj_has_name(x, "std.error") # remove std.error for printing x$std.error <- NULL # do we have groups and facets? has_groups <- .obj_has_name(x, "group") && length(unique(x$group)) > 1 has_facets <- .obj_has_name(x, "facet") && length(unique(x$facet)) > 1 has_panel <- .obj_has_name(x, "panel") && length(unique(x$panel)) > 1 has_response <- .obj_has_name(x, "response.level") && length(unique(x$response.level)) > 1 lab <- attr(x, "title", exact = TRUE) if (!is.null(lab)) insight::print_color(paste0(sprintf("# %s", lab), "\n", collapse = ""), "blue") # lab <- attr(x, "x.title", exact = TRUE) # if (!is.null(lab)) insight::print_color(paste0(sprintf("# x = %s", lab), "\n", collapse = ""), "blue") consv <- attr(x, "constant.values") terms <- attr(x, "terms") ci.lvl <- attr(x, "ci.lvl") # fix terms for survival models a1 <- attr(x, "fitfun", exact = TRUE) a2 <- attr(x, "y.title", exact = TRUE) if (!is.null(a1) && !is.null(a2) && a1 == "coxph" && !(a2 == "Risk Score") && !"time" %in% terms) terms <- c("time", terms) # use focal term as column name focal_term <- terms[1] colnames(x)[1] <- focal_term x <- .round_numeric(x, digits = digits) # if we have groups, show n rows per group .n <- 1 # justify terms tl <- length(terms) if (tl > 2) terms[2:tl] <- format(terms[2:tl], justify = "right") if (has_groups) { .n <- .n_distinct(x$group) if (!is.null(terms) && length(terms) >= 2) { vals <- sprintf("%s = %s", terms[2], as.character(x$group)) lvls <- unique(vals) x$group <- factor(vals, levels = lvls) } } if (has_facets) { .n <- .n * .n_distinct(x$facet) if
(!is.null(terms) && length(terms) >= 3) { x$facet <- sprintf("%s = %s", terms[3], as.character(x$facet)) } } if (has_panel) { .n <- .n * .n_distinct(x$panel) if (!is.null(terms) && length(terms) >= 4) { x$panel <- sprintf("%s = %s", terms[4], as.character(x$panel)) } } if (has_response) { .n <- .n * .n_distinct(x$response.level) vals <- sprintf("Response Level = %s", as.character(x$response.level)) lvls <- unique(vals) x$response.level <- ordered(vals, levels = lvls) } # make sure that by default not too many rows are printed if (missing(n)) { n <- if (.n >= 6) 4 else if (.n >= 4 & .n < 6) 5 else if (.n >= 2 & .n < 4) 6 else 8 } if (!has_groups) { if (!has_response) { cat("\n") if (.obj_has_name(x, "group")) x <- .remove_column(x, "group") # print.data.frame(x[.get_sample_rows(x, n), ], ..., row.names = FALSE, quote = FALSE) .print_block(x, n, digits, ci.lvl, ...) } else { x$.nest <- tapply(x$predicted, list(x$response.level), NULL) xx <- split(x, x$.nest) for (i in xx) { insight::print_color(sprintf("\n# %s\n\n", i$response.level[1]), "red") .print_block(i, n, digits, ci.lvl, ...) } } } else if (has_groups && !has_facets) { if (!has_response) { x$.nest <- tapply(x$predicted, list(x$group), NULL) xx <- split(x, x$.nest) for (i in xx) { insight::print_color(sprintf("\n# %s\n\n", i$group[1]), "red") .print_block(i, n, digits, ci.lvl, ...) } } else { x$.nest <- tapply(x$predicted, list(x$response.level, x$group), NULL) xx <- split(x, x$.nest) for (i in xx) { insight::print_color(sprintf("\n# %s\n# %s\n\n", i$response.level[1], i$group[1]), "red") .print_block(i, n, digits, ci.lvl, ...) } } } else if (has_groups && has_facets && !has_panel) { if (!has_response) { x$.nest <- tapply(x$predicted, list(x$group, x$facet), NULL) xx <- split(x, x$.nest) for (i in xx) { insight::print_color(sprintf("\n# %s\n# %s\n\n", i$group[1], i$facet[1]), "red") .print_block(i, n, digits, ci.lvl, ...) 
} } else { x$.nest <- tapply(x$predicted, list(x$response.level, x$group, x$facet), NULL) xx <- split(x, x$.nest) for (i in xx) { insight::print_color(sprintf("\n# %s\n# %s\n# %s\n\n", i$response.level[1], i$group[1], i$facet[1]), "red") .print_block(i, n, digits, ci.lvl, ...) } } } else { if (!has_response) { x$.nest <- tapply(x$predicted, list(x$group, x$facet, x$panel), NULL) xx <- split(x, x$.nest) for (i in xx) { insight::print_color(sprintf("\n# %s\n# %s\n# %s\n\n", i$group[1], i$facet[1], i$panel[1]), "red") .print_block(i, n, digits, ci.lvl, ...) } } else { x$.nest <- tapply(x$predicted, list(x$response.level, x$group, x$facet, x$panel), NULL) xx <- split(x, x$.nest) for (i in xx) { insight::print_color(sprintf("\n# %s\n# %s\n# %s\n# %s\n\n", i$response.level[1], i$group[1], i$facet[1], i$panel[1]), "red") .print_block(i, n, digits, ci.lvl, ...) } } } cv <- lapply( consv, function(.x) { if (is.numeric(.x)) sprintf("%.2f", .x) else as.character(.x) }) if (!.is_empty(cv)) { cv.names <- names(cv) cv.space <- max(nchar(cv.names)) # ignore this string when determining maximum length poplev <- which(cv %in% c("NA (population-level)", "0 (population-level)")) if (!.is_empty(poplev)) mcv <- cv[-poplev] else mcv <- cv if (!.is_empty(mcv)) cv.space2 <- max(nchar(mcv)) else cv.space2 <- 0 insight::print_color(paste0( "\nAdjusted for:\n", paste0(sprintf("* %*s = %*s", cv.space, cv.names, cv.space2, cv), collapse = "\n") ), "blue") cat("\n") } fitfun <- attr(x, "fitfun", exact = TRUE) if (has_se && !is.null(fitfun) && fitfun != "lm") { message("\nStandard errors are on the link-scale (untransformed).") } predint <- attr(x, "prediction.interval", exact = TRUE) if (!is.null(predint) && isTRUE(predint)) { message("\nIntervals are prediction intervals.") } } .get_sample_rows <- function(x, n) { nr.of.rows <- seq_len(nrow(x)) if (n < length(nr.of.rows)) { sample.rows <- round(c( min(nr.of.rows), stats::quantile(nr.of.rows, seq_len(n - 2) / n), max(nr.of.rows) )) } else { 
sample.rows <- nr.of.rows } sample.rows } .print_block <- function(i, n, digits, ci.lvl, ...) { i <- i[setdiff(colnames(i), c("group", "facet", "panel", "response.level", ".nest"))] # print.data.frame(, ..., row.names = FALSE, quote = FALSE) dd <- i[.get_sample_rows(i, n), , drop = FALSE] if ("conf.low" %in% colnames(dd) && "conf.high" %in% colnames(dd)) { dd$CI <- insight::format_ci(dd$conf.low, dd$conf.high, digits = digits, width = "auto") dd$CI <- gsub("95% CI ", "", dd$CI, fixed = TRUE) if (is.null(ci.lvl)) ci.lvl <- .95 colnames(dd)[which(colnames(dd) == "CI")] <- sprintf("%g%% CI", 100 * ci.lvl) dd$conf.low <- NULL dd$conf.high <- NULL } if ("std.error" %in% colnames(dd)) { colnames(dd)[which(colnames(dd) == "std.error")] <- "SE" } colnames(dd)[which(colnames(dd) == "predicted")] <- "Predicted" cat(insight::export_table(dd, digits = digits, protect_integers = TRUE)) # print.data.frame(dd, ..., quote = FALSE, row.names = FALSE) } ggeffects/R/get_predictions_gee.R0000644000176200001440000000052314046746430016545 0ustar liggesusersget_predictions_gee <- function(model, terms, ...) { prdat <- stats::predict( model, type = "response", ... ) mf <- insight::get_data(model)[, terms, drop = FALSE] # copy predictions mf$predicted <- as.vector(prdat) # No CI mf$conf.low <- NA mf$conf.high <- NA unique(mf) } ggeffects/R/get_predictions_gam.R0000644000176200001440000000710214004226715016542 0ustar liggesusersget_predictions_gam <- function(model, fitfram, ci.lvl, linv, type, ...) { se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 mi <- insight::model_info(model) if (!mi$is_zero_inflated && type %in% c("fe.zi", "re.zi")) { type <- "fe" message(sprintf("Model has no zero-inflation part. 
Changing prediction-type to \"%s\".", type)) } prdat <- stats::predict( model, newdata = fitfram, type = "link", se.fit = se ) if (type == "fe.zi") { # check if number of simulations was provided add.args <- lapply(match.call(expand.dots = FALSE)$`...`, function(x) x) if ("nsim" %in% names(add.args)) nsim <- eval(add.args[["nsim"]]) else nsim <- 1000 # simulate predictions, for standard errors / CI prdat.sim <- .get_zeroinfl_gam_predictions(model = model, newdata = fitfram, nsim = nsim) # make sure we have only predicted values as vector, no SE if (.obj_has_name(prdat, "fit")) { prdat <- list( cond = as.vector(prdat$fit[, 1]), zi = as.vector(prdat$fit[, 2]) ) } else { prdat <- list( cond = as.vector(prdat[, 1]), zi = as.vector(prdat[, 2]) ) } prdat <- as.vector(exp(prdat$cond) * (1 - stats::plogis(prdat$zi))) # success? if (is.null(prdat.sim) || inherits(prdat.sim, c("error", "simpleError"))) { insight::print_color("Error: Confidence intervals could not be computed.\n", "red") fitfram$predicted <- prdat fitfram$conf.low <- NA fitfram$conf.high <- NA } else { sims <- exp(prdat.sim$cond) * (1 - stats::plogis(prdat.sim$zi)) fitfram$predicted <- prdat fitfram$std.error <- apply(sims, 1, stats::sd) conf.low <- apply(sims, 1, stats::quantile, probs = 1 - ci) conf.high <- apply(sims, 1, stats::quantile, probs = ci) ci.range <- (conf.high - conf.low) / 2 # fix negative CI ci.low <- fitfram$predicted - ci.range neg.ci <- ci.low < 0 if (any(neg.ci)) { ci.range[neg.ci] <- ci.range[neg.ci] - abs(ci.low[neg.ci]) - 1e-05 fitfram$std.error[neg.ci] <- fitfram$std.error[neg.ci] - ((abs(ci.low[neg.ci]) + 1e-05) / stats::qnorm(ci)) } fitfram$conf.low <- fitfram$predicted - ci.range fitfram$conf.high <- fitfram$predicted + ci.range if (.obj_has_name(fitfram, "std.error")) { # copy standard errors attr(fitfram, "std.error") <- fitfram$std.error fitfram <- .remove_column(fitfram, "std.error") } } } else { if (mi$is_zero_inflated) { if (.obj_has_name(prdat, "fit")) { prdat$fit <-
as.vector(prdat$fit[, 1]) prdat$se.fit <- as.vector(prdat$se.fit[, 1]) } else { prdat <- as.vector(prdat[, 1]) } linv <- exp } # did user request standard errors? if yes, compute CI if (se) { # copy predictions fitfram$predicted <- linv(prdat$fit) # calculate CI fitfram$conf.low <- linv(prdat$fit - stats::qnorm(ci) * prdat$se.fit) fitfram$conf.high <- linv(prdat$fit + stats::qnorm(ci) * prdat$se.fit) # copy standard errors attr(fitfram, "std.error") <- prdat$se.fit } else { # copy predictions fitfram$predicted <- linv(as.vector(prdat)) # no CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } } ggeffects/R/vcov.R0000644000176200001440000002706114046746430013526 0ustar liggesusers#' @title Calculate variance-covariance matrix for marginal effects #' @name vcov #' #' @description Returns the variance-covariance matrix for the predicted values from \code{object}. #' #' @param object An object of class \code{"ggeffects"}, as returned by \code{ggpredict()}. #' @param ... Currently not used. #' @inheritParams ggpredict #' #' @return The variance-covariance matrix for the predicted values from \code{object}. #' #' @details The returned matrix has as many rows (and columns) as possible combinations #' of predicted values from the \code{ggpredict()} call. For example, if there #' are two variables in the \code{terms}-argument of \code{ggpredict()} with 3 and 4 #' levels each, there will be 3*4 combinations of predicted values, so the returned #' matrix has a 12x12 dimension. In short, \code{nrow(object)} is always equal to #' \code{nrow(vcov(object))}. See also 'Examples'. 
#' #' @examples #' data(efc) #' model <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc) #' result <- ggpredict(model, c("c12hour [meansd]", "c161sex")) #' #' vcov(result) #' #' # compare standard errors #' sqrt(diag(vcov(result))) #' as.data.frame(result) #' #' # only two predicted values, no further terms #' # vcov() returns a 2x2 matrix #' result <- ggpredict(model, "c161sex") #' vcov(result) #' #' # 2 levels for c161sex multiplied by 3 levels for c172code #' # result in 6 combinations of predicted values #' # thus vcov() returns a 6x6 matrix #' result <- ggpredict(model, c("c161sex", "c172code")) #' vcov(result) #' @export vcov.ggeffects <- function(object, vcov.fun = NULL, vcov.type = NULL, vcov.args = NULL, ...) { model <- tryCatch({ get(attr(object, "model.name"), envir = parent.frame()) }, error = function(e) { NULL } ) if (is.null(model)) { warning("Can't access original model to compute variance-covariance matrix of predictions.", call. = FALSE) return(NULL) } model_frame <- insight::get_data(model) # check random effect terms. 
We can't compute SE if data has # factors with only one level, however, if the user conditions on # random effects and only conditions on one level, it is indeed # possible to calculate SE - so, ignore random effects for the # check of one-level-factors only random_effect_terms <- insight::find_random(model, split_nested = TRUE, flatten = TRUE) # we can't condition on categorical variables condition <- attr(object, "condition") if (!is.null(condition)) { cn <- names(condition) cn.factors <- sapply(cn, function(.x) is.factor(model_frame[[.x]]) && !(.x %in% random_effect_terms)) condition <- condition[!cn.factors] if (.is_empty(condition)) condition <- NULL } const.values <- attr(object, "constant.values") const.values <- c(condition, unlist(const.values[sapply(const.values, is.numeric)])) terms <- attr(object, "original.terms") ## TODO for debugging add.args <- lapply(match.call(expand.dots = FALSE)$`...`, function(x) x) if (isTRUE(add.args[["debug"]])) { message("Collection 1") print(gc(TRUE)) } # copy data frame with predictions newdata <- .data_grid( model, model_frame, terms, value_adjustment = "mean", factor_adjustment = FALSE, show_pretty_message = FALSE, condition = const.values ) ## TODO for debugging if (isTRUE(add.args[["debug"]])) { message("Collection 2") print(gc(TRUE)) } # make sure we have enough values to compute CI # nlevels_terms <- sapply( # colnames(newdata), # function(.x) !(.x %in% random_effect_terms) && is.factor(newdata[[.x]]) && nlevels(newdata[[.x]]) == 1 # ) # # if (any(nlevels_terms)) { # not_enough <- colnames(newdata)[which(nlevels_terms)[1]] # remove_lvl <- paste0("[", gsub(pattern = "(.*)\\[(.*)\\]", replacement = "\\2", x = terms[which(.clean_terms(terms) == not_enough)]), "]", collapse = "") # stop(sprintf("`%s` does not have enough factor levels. Try to remove `%s`.", not_enough, remove_lvl), call. = TRUE) # } # add response to newdata. For models fitted with "glmmPQL", # the response variable is renamed internally to "zz".
if (inherits(model, "glmmPQL")) { new_response <- 0 names(new_response) <- "zz" } else { fr <- insight::find_response(model, combine = FALSE) new_response <- rep(0, length.out = length(fr)) names(new_response) <- fr } new_response <- new_response[setdiff(names(new_response), colnames(newdata))] newdata <- cbind(as.list(new_response), newdata) # clean terms from brackets terms <- .clean_terms(terms) # sort data by grouping levels, so we have the correct order # to slice data afterwards if (length(terms) > 2) { trms <- terms[3] newdata <- newdata[order(newdata[[trms]]), , drop = FALSE] } if (length(terms) > 1) { trms <- terms[2] newdata <- newdata[order(newdata[[trms]]), , drop = FALSE] } trms <- terms[1] newdata <- newdata[order(newdata[[trms]]), , drop = FALSE] # rownames were resorted as well, which causes troubles in model.matrix rownames(newdata) <- NULL tryCatch( { .vcov_helper(model, model_frame, get_predict_function(model), newdata, vcov.fun, vcov.type, vcov.args, terms) }, error = function(e) { message("Could not compute variance-covariance matrix of predictions. No confidence intervals are returned.") NULL } ) } .vcov_helper <- function(model, model_frame, model_class, newdata, vcov.fun, vcov.type, vcov.args, terms) { # check if robust vcov-matrix is requested if (!is.null(vcov.fun)) { # check for existing vcov-prefix if (!grepl("^vcov", vcov.fun)) { vcov.fun <- paste0("vcov", vcov.fun) } # set default for clubSandwich if (vcov.fun == "vcovCR" && is.null(vcov.type)) { vcov.type <- "CR0" } if (!is.null(vcov.type) && vcov.type %in% c("CR0", "CR1", "CR1p", "CR1S", "CR2", "CR3")) { if (!requireNamespace("clubSandwich", quietly = TRUE)) { stop("Package `clubSandwich` needed for this function. Please install and try again.") } robust_package <- "clubSandwich" vcov.fun <- "vcovCR" } else { if (!requireNamespace("sandwich", quietly = TRUE)) { stop("Package `sandwich` needed for this function. 
Please install and try again.") } robust_package <- "sandwich" } # compute robust standard errors based on vcov if (robust_package == "sandwich") { vcov.fun <- get(vcov.fun, asNamespace("sandwich")) vcm <- as.matrix(do.call(vcov.fun, c(list(x = model, type = vcov.type), vcov.args))) } else { vcov.fun <- clubSandwich::vcovCR vcm <- as.matrix(do.call(vcov.fun, c(list(obj = model, type = vcov.type), vcov.args))) } } else { # get variance-covariance-matrix, depending on model type vcm <- insight::get_varcov(model, component = "conditional") } model_terms <- tryCatch({ stats::terms(model) }, error = function(e) { insight::find_formula(model)$conditional }) # exception for gamlss, who may have "random()" function in formula # we need to remove this term... if (inherits(model, "gamlss") && grepl("random\\((.*\\))", .safe_deparse(stats::formula(model)))) { model_terms <- insight::find_formula(model)$conditional } # drop offset from model_terms+ if (inherits(model, c("zeroinfl", "hurdle", "zerotrunc"))) { all_terms <- insight::find_terms(model)$conditional off_terms <- grepl("^offset\\((.*)\\)", all_terms) if (any(off_terms)) { all_terms <- all_terms[!off_terms] ## TODO preserve interactions vcov_names <- dimnames(vcm)[[1]][grepl(":", dimnames(vcm)[[1]], fixed = TRUE)] if (length(vcov_names)) { vcov_names <- gsub(":", "*", vcov_names, fixed = TRUE) all_terms <- unique(c(all_terms, vcov_names)) } off_terms <- grepl("^offset\\((.*)\\)", all_terms) model_terms <- stats::reformulate(all_terms[!off_terms], response = insight::find_response(model)) } } # check if factors are held constant. 
# if so, we have just one level in the data, which is too few to
# compute the vcov - in this case, remove those factors from the
# model formula and the vcov
re.terms <- insight::find_random(model, split_nested = TRUE, flatten = TRUE)
nlevels_terms <- sapply(
  colnames(newdata),
  function(.x) !(.x %in% re.terms) && is.factor(newdata[[.x]]) && nlevels(newdata[[.x]]) == 1
)
if (any(nlevels_terms)) {
  all_terms <- setdiff(
    insight::find_terms(model)$conditional,
    colnames(newdata)[nlevels_terms]
  )
  model_terms <- stats::reformulate(all_terms, response = insight::find_response(model))
  vcm <- vcm[!nlevels_terms, !nlevels_terms, drop = FALSE]
}

# code to compute the SE of predictions taken from
# http://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#predictions-andor-confidence-or-prediction-intervals-on-predictions
mm <- stats::model.matrix(model_terms, newdata)

# here we need to fix some term names, so variable names match the column
# names from the model matrix. NOTE that depending on the type of contrasts,
# the column names for factors differ: for "contr.sum", column names
# of factors are named "Species1", "Species2", etc., while for "contr.treatment",
# column names are "Speciesversicolor", "Speciesvirginica", etc.
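The contrast-dependent naming described in the comment above can be seen directly with base R's `model.matrix()`. A minimal sketch (the data frame `d` and factor `g` are made up for illustration and are not part of ggeffects):

```r
# Column names of a model matrix depend on the contrast coding of a factor:
# "contr.treatment" appends the non-reference level labels, while
# "contr.sum" appends numeric indices 1..(nlevels - 1).
d <- data.frame(y = rnorm(6), g = factor(rep(c("a", "b", "c"), 2)))

mm_treat <- model.matrix(~ g, data = d)  # default: contr.treatment
mm_sum <- model.matrix(~ g, data = d, contrasts.arg = list(g = "contr.sum"))

colnames(mm_treat)  # "(Intercept)" "gb" "gc"
colnames(mm_sum)    # "(Intercept)" "g1" "g2"
```

This is why the code below has to reconstruct the expected column names from the contrast type before matching terms against the model matrix.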
contrs <- attr(mm, "contrasts")
if (!.is_empty(contrs)) {
  # check which contrasts are actually in the terms-argument,
  # and which terms also appear in the contrasts
  keep.c <- names(contrs) %in% terms
  rem.t <- terms %in% names(contrs)

  # only iterate required terms and contrasts
  contrs <- contrs[keep.c]
  terms <- terms[!rem.t]

  add.terms <- unlist(mapply(function(.x, .y) {
    f <- model_frame[[.y]]
    if (!is.factor(f)) {
      f <- as.factor(f)
    }
    if (.x %in% c("contr.sum", "contr.helmert"))
      sprintf("%s%s", .y, 1:(nlevels(f) - 1))
    else if (.x == "contr.poly")
      # note: assumes at most cubic polynomial contrasts
      sprintf("%s%s", .y, c(".L", ".Q", ".C"))
    else
      sprintf("%s%s", .y, levels(f)[2:nlevels(f)])
  }, contrs, names(contrs), SIMPLIFY = FALSE))

  terms <- c(terms, add.terms)
}

# we need all this intersection-stuff to reduce the model matrix and remove
# duplicated entries. Otherwise, especially for mixed models, we often run
# into memory allocation problems. The problem is to find the correct rows of
# the matrix that should be kept, and only take those columns of the
# matrix for which terms we need standard errors.
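The standard-error computation from the GLMM FAQ that this helper builds up to can be sketched in isolation with a plain `lm()`. This is illustrative only — it skips the matrix-reduction and term-matching steps handled here, and the model and grid are made up:

```r
# var(X %*% beta-hat) = X V X', so prediction standard errors are the
# square roots of the diagonal of mm %*% vcov(model) %*% t(mm)
m <- lm(Sepal.Length ~ Sepal.Width + Species, data = iris)
nd <- data.frame(Sepal.Width = 3, Species = levels(iris$Species))

# delete.response() because "nd" contains no outcome column
mm <- model.matrix(delete.response(terms(m)), data = nd)
se <- sqrt(diag(mm %*% vcov(m) %*% t(mm)))

# matches the standard errors that predict() reports
predict(m, newdata = nd, se.fit = TRUE)$se.fit
```

For `lm()` this reproduces `predict(..., se.fit = TRUE)` exactly; the point of doing it by hand, as below, is that the same formula works with a robust or otherwise substituted covariance matrix.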
model_matrix_data <- as.data.frame(mm)
rows_to_keep <- as.numeric(rownames(unique(model_matrix_data[intersect(colnames(model_matrix_data), terms)])))

# for poly-terms, we have no match, so fix this here
if (.is_empty(rows_to_keep) || !all(terms %in% colnames(model_matrix_data))) {
  inters <- which(insight::clean_names(colnames(model_matrix_data)) %in% terms)
  rows_to_keep <- as.numeric(rownames(unique(model_matrix_data[inters])))
}

mm <- mm[rows_to_keep, , drop = FALSE]

# check class of fitted model, to make sure we have just one class-attribute
# (while "inherits()" may return multiple attributes)
model_class <- get_predict_function(model)

if (!is.null(model_class) && model_class %in% c("polr", "mixor", "multinom", "brmultinom", "bracl", "fixest")) {
  keep <- intersect(colnames(mm), colnames(vcm))
  vcm <- vcm[keep, keep, drop = FALSE]
  mm <- mm[, keep]
}

mm %*% vcm %*% t(mm)
}
ggeffects/R/utils_colors.R0000644000176200001440000000241614004226715015260 0ustar liggesusers# get color palette
.get_colors <- function(geom.colors, collen, continuous) {
  # check for correct color argument
  if (!is.null(geom.colors)) {
    geom.colors <- tolower(geom.colors)
    # check for color brewer palette
    if (geom.colors[1] %in% names(ggeffects_colors)) {
      geom.colors <- ggeffects_pal(palette = geom.colors[1], n = collen)
    } else if (geom.colors[1] == "bw") {
      geom.colors <- rep("black", times = collen)
    } else if (geom.colors[1] == "gs") {
      geom.colors <- ggeffects_pal(palette = "greyscale", n = collen)
      # do we have the correct amount of colours?
    } else if (length(geom.colors) > 1 & continuous) {
      # preserve colors as is for later use in gradient scale
      return(geom.colors)
    } else if (length(geom.colors) > collen) {
      # shorten palette
      geom.colors <- geom.colors[1:collen]
    } else if (length(geom.colors) < collen) {
      # warn user about wrong color palette
      warning(sprintf("Insufficient length of color palette provided. %i color values needed.", collen), call.
= FALSE) # set default palette geom.colors <- ggeffects_pal(palette = "set1", n = collen) } } else { geom.colors <- ggeffects_pal(palette = "set1", n = collen) } geom.colors } ggeffects/R/utils_ggpredict.R0000644000176200001440000000516214046746430015737 0ustar liggesusers.back_transform_response <- function(model, mydf, back.transform) { # check if outcome is log-transformed, and if so, # back-transform predicted values to response scale rv <- insight::find_terms(model)[["response"]] if (any(grepl("log\\((.*)\\)", rv))) { if (back.transform) { # do we have log-log models? if (grepl("log\\(log\\((.*)\\)\\)", rv)) { mydf$predicted <- exp(exp(mydf$predicted)) if (.obj_has_name(mydf, "conf.low") && .obj_has_name(mydf, "conf.high")) { mydf$conf.low <- exp(exp(mydf$conf.low)) mydf$conf.high <- exp(exp(mydf$conf.high)) } } else { plus_minus <- eval(parse(text = gsub("log\\(([^,\\+)]*)(.*)\\)", "\\2", rv))) if (is.null(plus_minus)) plus_minus <- 0 mydf$predicted <- exp(mydf$predicted) - plus_minus if (.obj_has_name(mydf, "conf.low") && .obj_has_name(mydf, "conf.high")) { mydf$conf.low <- exp(mydf$conf.low) - plus_minus mydf$conf.high <- exp(mydf$conf.high) - plus_minus } } message("Model has log-transformed response. Back-transforming predictions to original response scale. Standard errors are still on the log-scale.") } else { message("Model has log-transformed response. Predictions are on log-scale.") } } if (any(grepl("log1p\\((.*)\\)", rv))) { if (back.transform) { mydf$predicted <- expm1(mydf$predicted) if (.obj_has_name(mydf, "conf.low") && .obj_has_name(mydf, "conf.high")) { mydf$conf.low <- expm1(mydf$conf.low) mydf$conf.high <- expm1(mydf$conf.high) } message("Model has log-transformed response. Back-transforming predictions to original response scale. Standard errors are still on the log-scale.") } else { message("Model has log-transformed response. 
Predictions are on log(mu + 1) scale.") } } if (any(grepl("sqrt\\((.*)\\)", rv))) { if (back.transform) { plus_minus <- eval(parse(text = gsub("sqrt\\(([^,\\+)]*)(.*)\\)", "\\2", rv))) if (is.null(plus_minus)) plus_minus <- 0 mydf$predicted <- mydf$predicted^2 - plus_minus if (.obj_has_name(mydf, "conf.low") && .obj_has_name(mydf, "conf.high")) { mydf$conf.low <- mydf$conf.low^2 - plus_minus mydf$conf.high <- mydf$conf.high^2 - plus_minus } message("Model has sqrt-transformed response. Back-transforming predictions to original response scale. Standard errors are still on the sqrt-scale.") } else { message("Model has sqrt-transformed response. Predictions are on sqrt-scale.") } } mydf } ggeffects/R/ggemmeans.R0000644000176200001440000001417314046746430014514 0ustar liggesusers#' @rdname ggpredict #' @export ggemmeans <- function(model, terms, ci.lvl = .95, type = "fe", typical = "mean", condition = NULL, back.transform = TRUE, interval = "confidence", ...) { if (!requireNamespace("emmeans")) { stop("Package `emmeans` required to compute marginal effects with `ggemmeans()`.", call. = FALSE) } # check arguments type <- match.arg(type, choices = c("fe", "fixed", "count", "re", "random", "fe.zi", "zero_inflated", "re.zi", "zi_random", "zero_inflated_random", "zi.prob", "zi_prob")) interval <- match.arg(interval, choices = c("confidence", "prediction")) model_name <- deparse(substitute(model)) type <- switch( type, "fixed" = , "count" = "fe", "random" = "re", "zi" = , "zero_inflated" = "fe.zi", "zi_random" = , "zero_inflated_random" = "re.zi", "zi_prob" = "zi.prob", "survival" = "surv", "cumulative_hazard" = "cumhaz" , type ) # check if terms are a formula if (!missing(terms) && !is.null(terms) && inherits(terms, "formula")) { terms <- all.vars(terms) } # tidymodels? 
if (inherits(model, "model_fit")) { model <- model$fit } if (inherits(model, c("glmmTMB", "MixMod")) && type == "zi.prob") { stop(sprintf("This prediction-type is currently not available for models of class '%s'.", class(model)[1]), call. = FALSE) } # for gamm/gamm4 objects, we have a list with two items, mer and gam # extract just the mer-part then if (is.gamm(model) || is.gamm4(model)) model <- model$gam # check model family, do we have count model? model_info <- .get_model_info(model) # get model frame original_model_frame <- model_frame <- insight::get_data(model) # check terms argument terms <- .check_vars(terms, model) cleaned_terms <- .clean_terms(terms) data_grid <- .data_grid( model = model, model_frame = model_frame, terms = terms, value_adjustment = typical, condition = condition, emmeans.only = TRUE ) # for zero-inflated mixed models, we need some extra handling if (model_info$is_zero_inflated && inherits(model, c("glmmTMB", "MixMod")) && type == "fe.zi") { preds <- .emmeans_mixed_zi(model, data_grid, cleaned_terms, ...) additional_dot_args <- lapply(match.call(expand.dots = FALSE)$`...`, function(x) x) if ("nsim" %in% names(additional_dot_args)) nsim <- eval(additional_dot_args[["nsim"]]) else nsim <- 1000 prediction_data <- .ggemmeans_zi_predictions( model = model, model_frame = model_frame, preds = preds, ci.lvl = ci.lvl, terms = terms, cleaned_terms = cleaned_terms, value_adjustment = typical, condition = condition, nsim = nsim, type = type ) pmode <- "response" } else { # get prediction mode, i.e. at which scale predicted # values should be returned pmode <- .get_prediction_mode_argument(model, model_info, type) prediction_data <- .emmeans_prediction_data(model, data_grid, cleaned_terms, ci.lvl, pmode, type, model_info, interval = interval, ...) 
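The prediction-mode handling around `.emmeans_prediction_data()` ultimately comes down to the usual link-scale vs. response-scale distinction, with the link inverse applied afterwards where needed. A self-contained sketch with a plain `glm()` (illustrative only, no ggeffects or emmeans internals):

```r
m <- glm(am ~ mpg, data = mtcars, family = binomial())
nd <- data.frame(mpg = c(15, 20, 25))

eta <- predict(m, newdata = nd, type = "link")      # linear predictor (log-odds)
p <- predict(m, newdata = nd, type = "response")    # probabilities

# applying the link inverse to link-scale predictions gives the response scale
all.equal(unname(plogis(eta)), unname(p))
```

Computing intervals on the link scale and back-transforming the limits (as done for `pmode == "link"` here) keeps confidence intervals inside the valid range of the response.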
# fix gam here if (inherits(model, "gam") && model_info$is_zero_inflated) { prediction_data$predicted <- exp(prediction_data$predicted) prediction_data$conf.low <- exp(prediction_data$conf.low) prediction_data$conf.high <- exp(prediction_data$conf.high) } } # return NULL on error if (is.null(prediction_data)) return(NULL) attr(prediction_data, "continuous.group") <- attr(data_grid, "continuous.group") if (model_info$is_ordinal || model_info$is_categorical || model_info$is_multinomial) { if (colnames(prediction_data)[1] != "x") colnames(prediction_data)[1] <- "response.level" } result <- .post_processing_predictions( model = model, prediction_data = prediction_data, original_model_frame = original_model_frame, cleaned_terms = cleaned_terms ) # apply link inverse function linv <- insight::link_inverse(model) if (!is.null(linv) && (inherits(model, c("lrm", "orm")) || pmode == "link" || (inherits(model, "MixMod") && type != "fe.zi"))) { result$predicted <- linv(result$predicted) result$conf.low <- linv(result$conf.low) result$conf.high <- linv(result$conf.high) } # check if outcome is log-transformed, and if so, # back-transform predicted values to response scale result <- .back_transform_response(model, result, back.transform) attr(result, "model.name") <- model_name # add raw data as well attr(result, "rawdata") <- .get_raw_data(model, original_model_frame, cleaned_terms) .post_processing_labels( model = model, result = result, original_model_frame = original_model_frame, data_grid = data_grid, cleaned_terms = cleaned_terms, original_terms = terms, model_info = model_info, type = type, prediction.interval = attr(prediction_data, "prediction.interval", exact = TRUE), at_list = data_grid, ci.lvl = ci.lvl ) } .get_prediction_mode_argument <- function(model, model_info, type) { if (inherits(model, "betareg")) "response" else if (inherits(model, c("polr", "clm", "clmm", "clm2", "rms", "lrm", "orm"))) "prob" else if (inherits(model, "lmerMod")) "asymptotic" else if 
(inherits(model, "MixMod")) "fixed-effects" else if (inherits(model, c("gls", "lme"))) "auto" else if (inherits(model, "MCMCglmm") && model_info$is_multinomial) "response" else if (model_info$is_ordinal || model_info$is_categorical || model_info$is_multinomial) "prob" else if (model_info$is_zero_inflated && type %in% c("fe", "re") && inherits(model, "glmmTMB")) "link" else if (model_info$is_zero_inflated && type %in% c("fe.zi", "re.zi")) "response" else if (model_info$is_zero_inflated && type %in% c("fe", "re")) "count" else if (model_info$is_zero_inflated && type %in% c("zi.prob")) "prob0" else "link" } ggeffects/R/get_predictions_glmrob_base.R0000644000176200001440000000163713451124203020253 0ustar liggesusersget_predictions_glmrob_base <- function(model, fitfram, ci.lvl, linv, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # for models from "robust"-pkg (glmRob) we need to # suppress warnings about fake models prdat <- stats::predict( model, newdata = fitfram, type = "link", se.fit = se, ... ) # get predicted values, on link-scale fitfram$predicted <- linv(prdat$fit) if (se) { fitfram$conf.low <- linv(prdat$fit - stats::qnorm(ci) * prdat$se.fit) fitfram$conf.high <- linv(prdat$fit + stats::qnorm(ci) * prdat$se.fit) # copy standard errors attr(fitfram, "std.error") <- prdat$se.fit } else { fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/get_predictions_rq.R0000644000176200001440000000106213451124203016411 0ustar liggesusersget_predictions_rq <- function(model, fitfram, ci.lvl, ...) { if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- "confidence" else ci <- "none" prdat <- stats::predict( model, newdata = fitfram, interval = ci, level = ci.lvl, ... 
) # get predicted values, on link-scale fitfram$predicted <- prdat[, 1] if (ci == "none") { fitfram$conf.low <- NA fitfram$conf.high <- NA } else { fitfram$conf.low <- prdat[, 2] fitfram$conf.high <- prdat[, 3] } fitfram } ggeffects/R/utils.R0000644000176200001440000002010314046746430013677 0ustar liggesusersdata_frame <- function(...) { x <- data.frame(..., stringsAsFactors = FALSE) rownames(x) <- NULL x } .check_vars <- function(terms, model) { if (missing(terms) || is.null(terms)) { stop("`terms` needs to be a character vector with at least one predictor names: one term used for the x-axis, more optional terms as grouping factors.", call. = FALSE) } # check for correct length of vector if (length(terms) > 4) { message("`terms` must have not more than four values. Using first four values now.") terms <- terms[1:4] } if (!is.null(model)) { tryCatch( { pv <- insight::find_predictors(model, effects = "all", component = "all", flatten = TRUE) clean.terms <- .clean_terms(terms) for (i in clean.terms) { if (!(i %in% pv)) { insight::print_color(sprintf("`%s` was not found in model terms. Maybe misspelled?\n", i), "red") } } }, error = function(x) { NULL } ) } terms } .offset_term <- function(model, verbose = TRUE) { tryCatch({ off <- .safe_deparse(model$call$offset) if (identical(off, "NULL")) { return(NULL) } cleaned_off <- insight::clean_names(off) if (!identical(off, cleaned_off) && isTRUE(verbose) && !inherits(model, "glmmTMB")) { warning(sprintf("Model uses a transformed offset term. Predictions may not be correct. Please apply transformation of offset term to the data before fitting the model and use 'offset(%s)' in the model formula.\n", cleaned_off), call. 
= FALSE) } cleaned_off }, error = function(e) { NULL }) } .get_raw_data <- function(model, mf, terms) { # for matrix variables, don't return raw data if (any(sapply(mf, is.matrix)) && !inherits(model, c("coxph", "coxme"))) return(NULL) if (!all(insight::find_response(model, combine = FALSE) %in% colnames(mf))) return(NULL) # get response and x-value response <- insight::get_response(model) # for cox-models, modify response if (inherits(model, "coxph")) { response <- response[[2]] } # back-transform log-transformed response? rv <- insight::find_terms(model)[["response"]] # character vectors to factors for (i in terms) { if (is.character(mf[[i]])) { mf[[i]] <- factor(mf[[i]], levels = unique(mf[[i]])) } } x <- .factor_to_numeric(mf[[terms[1]]]) # add optional grouping variable if (length(terms) > 1) { group <- sjlabelled::as_label( mf[[terms[2]]], prefix = FALSE, drop.na = TRUE, drop.levels = !is.numeric(mf[[terms[2]]]) ) } else { group <- as.factor(1) } if (length(terms) > 2) { facet <- sjlabelled::as_label( mf[[terms[3]]], prefix = FALSE, drop.na = TRUE, drop.levels = !is.numeric(mf[[terms[3]]]) ) } else { facet <- as.factor(1) } # return all as data.frame tryCatch( { data_frame(response = response, x = x, group = group, facet = facet) }, error = function(x) { NULL }, warning = function(x) { NULL }, finally = function(x) { NULL } ) } .prettify_data <- function(conditional_terms, original_model_frame, terms, use_all_values = FALSE, show_pretty_message = FALSE) { lapply(conditional_terms, function(.x) { pr <- original_model_frame[[terms[.x]]] if (is.numeric(pr)) { if (.x > 1 && .n_distinct(pr) >= 10) values_at(pr) else if (.n_distinct(pr) < 20 || isTRUE(use_all_values)) { sort(stats::na.omit(unique(pr))) } else { if (show_pretty_message) { message(sprintf("Data were 'prettified'. 
Consider using `terms=\"%s [all]\"` to get smooth plots.", terms[.x])) show_pretty_message <- FALSE } pretty_range(pr) } } else if (is.factor(pr)) levels(droplevels(pr)) else stats::na.omit(unique(pr)) }) } .get_residual_variance <- function(x) { tryCatch( { insight::get_sigma(x, ci = NULL, verbose = FALSE)^2 # info <- insight::model_info(x) # if (info$is_mixed || inherits(x, c("merMod", "rlmerMod", "lmerMod", "glmerMod", "glmmTMB", "stanreg", "MixMod"))) { # re.var <- insight::get_variance_random(x) # } else if (inherits(x, c("lme", "nlme"))) { # re.var <- x$sigma^2 # } else { # re.var <- insight::get_sigma(x, ci = NULL, verbose = FALSE) # if (is.null(re.var)) { # re.var <- 0 # } # } # re.var }, error = function(x) { 0 } ) } .frac_length <- function(x) { if (is.numeric(x)) { max(nchar(gsub(pattern = "(.\\.)(.*)", "\\2", sprintf("%f", abs(x) %% 1)))) } else 0 } is.whole <- function(x) { (is.numeric(x) && isTRUE(all.equal(x, round(x)))) || is.character(x) || is.factor(x) } is.whole.number <- function(x) { (is.numeric(x) && isTRUE(all.equal(x, round(x)))) } .get_poly_term <- function(x) { p <- "(.*)poly\\(([^,]*)[^)]*\\)(.*)" sub(p, "\\2", x) } .get_poly_degree <- function(x) { p <- "(.*)poly\\(([^,]*)([^)])*\\)(.*)" tryCatch( { as.numeric(sub(p, "\\3", x)) }, error = function(x) { 1 } ) } is_brms_trial <- function(model) { is.trial <- FALSE if (inherits(model, "brmsfit") && is.null(stats::formula(model)$responses)) { is.trial <- tryCatch({ rv <- .safe_deparse(stats::formula(model)$formula[[2L]]) trimws(sub("(.*)\\|(.*)\\(([^,)]*).*", "\\2", rv)) %in% c("trials", "resp_trials") }, error = function(x) { FALSE } ) } is.trial } .get_model_info <- function(model) { faminfo <- insight::model_info(model) if (insight::is_multivariate(model)) faminfo <- faminfo[[1]] faminfo$is_brms_trial <- is_brms_trial(model) faminfo } .compact_list <- function(x) { if (is.data.frame(x)) { x <- x[stats::complete.cases(x), ] } x[!sapply(x, function(i) length(i) == 0 || is.null(i) || any(i 
== "NULL"))] } .safe_deparse <- function(string) { paste0(sapply(deparse(string, width.cutoff = 500), trimws, simplify = TRUE), collapse = " ") } is.gamm <- function(x) { inherits(x, c("list", "gamm")) && all(names(x) %in% c("lme", "gam")) } is.gamm4 <- function(x) { inherits(x, "list") && all(names(x) %in% c("mer", "gam")) } .n_distinct <- function(x, na.rm = TRUE) { if (na.rm) x <- x[!is.na(x)] length(unique(x)) } # select rows where values in "variable" match "value" .select_rows <- function(data, variable, value) { data[which(data[[variable]] == value), , drop = FALSE] } # remove column .remove_column <- function(data, variables) { a <- attributes(data) if (!length(variables) || is.null(variables)) return(data) if (is.numeric(variables)) variables <- colnames(data)[variables] data <- data[, -which(colnames(data) %in% variables), drop = FALSE] remaining <- setdiff(names(a), names(attributes(data))) if (length(remaining)) attributes(data) <- c(attributes(data), a[remaining]) data } .convert_numeric_factors <- function(x) { num_facs <- sapply(x, .is_numeric_factor) if (any(num_facs)) { x[num_facs] <- lapply(x[num_facs], function(i) as.numeric(as.character(i))) } x } .is_numeric_factor <- function(x) { is.factor(x) && !anyNA(suppressWarnings(as.numeric(levels(x)))) } .factor_to_numeric <- function(x, lowest = NULL) { if (is.numeric(x)) { return(x) } if (is.logical(x)) { return(as.numeric(x)) } if (anyNA(suppressWarnings(as.numeric(as.character(stats::na.omit(x)))))) { if (is.character(x)) { x <- as.factor(x) } x <- droplevels(x) levels(x) <- 1:nlevels(x) } out <- as.numeric(as.character(x)) if (!is.null(lowest)) { difference <- min(out) - lowest out <- out - difference } out } .check_returned_se <- function(se.pred) { !is.null(se.pred) && length(se.pred) > 0 && !is.null(se.pred$se.fit) && length(se.pred$se.fit) > 0 } ggeffects/R/get_predictions_lme.R0000644000176200001440000000411714046746430016565 0ustar liggesusersget_predictions_lme <- function(model, fitfram, 
ci.lvl, linv, type, terms, value_adjustment, model_class, vcov.fun, vcov.type, vcov.args, condition, ...) { # does user want standard errors? se <- (!is.null(ci.lvl) && !is.na(ci.lvl)) || !is.null(vcov.fun) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 if (inherits(model, "glmmPQL")) pr.type <- "link" else pr.type <- "response" if (type %in% c("re", "random")) { level <- 1 } else { level <- 0 } prdat <- stats::predict( model, newdata = fitfram, type = pr.type, level = level, ... ) # copy predictions fitfram$predicted <- as.vector(prdat) # did user request standard errors? if yes, compute CI if (se) { se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, terms = terms, model_class = model_class, type = type, vcov.fun = vcov.fun, vcov.type = vcov.type, vcov.args = vcov.args, condition = condition ) if (.check_returned_se(se.pred)) { se.fit <- se.pred$se.fit fitfram <- se.pred$prediction_data # calculate CI fitfram$conf.low <- fitfram$predicted - stats::qnorm(ci) * se.fit fitfram$conf.high <- fitfram$predicted + stats::qnorm(ci) * se.fit # copy standard errors attr(fitfram, "std.error") <- se.fit attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval") } else { # No CI fitfram$conf.low <- NA fitfram$conf.high <- NA } } else { # No CI fitfram$conf.low <- NA fitfram$conf.high <- NA } # for glmmPQL, we need to back-transform using link-inverse if (inherits(model, "glmmPQL")) { fitfram$predicted <- linv(fitfram$predicted) fitfram$conf.low <- linv(fitfram$conf.low) fitfram$conf.high <- linv(fitfram$conf.high) } fitfram } ggeffects/R/ggemmeans_add_confint.R0000644000176200001440000000477714046746430017055 0ustar liggesusers.ggemmeans_add_confint <- function(model, tmp, ci.lvl, type = "fe", pmode = NULL, interval = NULL) { # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 if (type %in% 
c("re", "re.zi") || identical(interval, "prediction")) { fitfram <- suppressWarnings( .var_rename( as.data.frame(tmp), SE = "std.error", emmean = "predicted", lower.CL = "conf.low", upper.CL = "conf.high", prob = "predicted", asymp.LCL = "conf.low", asymp.UCL = "conf.high", lower.HPD = "conf.low", upper.HPD = "conf.high" ) ) revar <- .get_residual_variance(model) # get link-function and back-transform fitted values # to original scale, so we compute proper CI if (!is.null(revar)) { if (!is.null(pmode) && pmode %in% c("prob", "count")) { lf <- insight::link_function(model) fitfram$conf.low <- exp(lf(fitfram$conf.low) - stats::qnorm(ci) * sqrt(revar)) fitfram$conf.high <- exp(lf(fitfram$conf.high) + stats::qnorm(ci) * sqrt(revar)) } else { fitfram$conf.low <- fitfram$conf.low - stats::qnorm(ci) * sqrt(revar) fitfram$conf.high <- fitfram$conf.high + stats::qnorm(ci) * sqrt(revar) } fitfram$std.error <- sqrt(fitfram$std.error^2 + revar) } fitfram } else if (inherits(model, "multinom")) { fitfram <- suppressWarnings( .var_rename( as.data.frame(tmp), SE = "std.error", emmean = "predicted", lower.CL = "conf.low", upper.CL = "conf.high", prob = "predicted", asymp.LCL = "conf.low", asymp.UCL = "conf.high", lower.HPD = "conf.low", upper.HPD = "conf.high" ) ) lf <- insight::link_function(model) fitfram$conf.low <- stats::plogis(lf(fitfram$predicted) - stats::qnorm(ci) * fitfram$std.error) fitfram$conf.high <- stats::plogis(lf(fitfram$predicted) + stats::qnorm(ci) * fitfram$std.error) fitfram } else { suppressWarnings( .var_rename( as.data.frame(stats::confint(tmp, level = ci.lvl)), SE = "std.error", emmean = "predicted", lower.CL = "conf.low", upper.CL = "conf.high", prob = "predicted", asymp.LCL = "conf.low", asymp.UCL = "conf.high", lower.HPD = "conf.low", upper.HPD = "conf.high" ) ) } } ggeffects/R/get_predictions_glimML.R0000644000176200001440000000177014046746430017173 0ustar liggesusersget_predictions_glimML <- function(model, fitfram, ci.lvl, linv, ...) 
{ # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 if (!requireNamespace("aod", quietly = TRUE)) { stop("Package 'aod' is required. Please install it.") } prdat <- aod::predict( model, newdata = fitfram, type = "link", se.fit = se, ... ) # copy predictions fitfram$predicted <- linv(prdat$fit) # did user request standard errors? if yes, compute CI if (se) { # calculate CI fitfram$conf.low <- linv(prdat$fit - stats::qnorm(ci) * prdat$se.fit) fitfram$conf.high <- linv(prdat$fit + stats::qnorm(ci) * prdat$se.fit) # copy standard errors attr(fitfram, "std.error") <- prdat$se.fit } else { # No CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/pretty_range.R0000644000176200001440000000460414036250437015246 0ustar liggesusers#' @title Create a pretty sequence over a range of a vector #' @name pretty_range #' #' @description Creates an evenly spaced, pretty sequence of numbers for a #' range of a vector. #' #' @param x A numeric vector. #' @param n Numeric value, indicating the size of how many values are used to #' create a pretty sequence. If \code{x} has a large value range (> 100), #' \code{n} could be something between 1 to 5. If \code{x} has a rather #' small amount of unique values, \code{n} could be something between #' 10 to 20. If \code{n = NULL}, \code{pretty_range()} automatically #' tries to find a pretty sequence. #' @param length Integer value, as alternative to \code{n}, defines the number of #' intervals to be returned. #' #' @return A numeric vector with a range corresponding to the minimum and #' maximum values of \code{x}. If \code{x} is missing, a function, #' pre-programmed with \code{n} and \code{length} is returned. See examples. 
#' #' @examples #' data(iris) #' # pretty range for vectors with decimal points #' pretty_range(iris$Petal.Length) #' #' # pretty range for large range, increasing by 50 #' pretty_range(1:1000) #' #' # increasing by 20 #' pretty_range(1:1000, n = 7) #' #' # return 10 intervals #' pretty_range(1:1000, length = 10) #' #' # same result #' pretty_range(1:1000, n = 2.5) #' #' # function factory #' range_n_5 <- pretty_range(n = 5) #' range_n_5(1:1000) #' @export pretty_range <- function(x, n = NULL, length = NULL) { force(n) force(length) .pretty_range <- function(x) { ra.min <- min(x, na.rm = TRUE) ra.max <- max(x, na.rm = TRUE) ra <- seq(ra.min, ra.max, sqrt(ra.max - ra.min) / 10) if (!is.null(length)) { pretty(ra, n = length) } else { if (!is.null(n)) pr <- n else if (.n_distinct(x) > 100) pr <- 3 else if (.n_distinct(x) > 50) pr <- 5 else pr <- 10 pr <- pr^(floor(log10(length(ra)))) p1 <- pretty(ra, n = pr) p2 <- pretty(ra, n = ceiling(pr * 1.5)) p3 <- pretty(ra, n = 2 * pr) if (length(p1) >= .n_distinct(x)) p1 else if (length(p1) < 10 && length(p2) < 25) p2 else if (length(p2) < 15 && length(p3) < 25) p3 else p1 } } if (missing(x)) { .pretty_range } else { .pretty_range(x) } } ggeffects/R/get_predictions_stan.R0000644000176200001440000001244314046746430016756 0ustar liggesusersget_predictions_stan <- function(model, fitfram, ci.lvl, type, model_info, ppd, terms = NULL, ...) { # check if pkg is available if (!requireNamespace("rstantools", quietly = TRUE)) { stop("Package `rstantools` is required to compute predictions.", call. = FALSE) } # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # check whether predictions should be conditioned # on random effects (grouping level) or not. 
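The core of `get_predictions_stan()` below is summarising a matrix of posterior draws column-wise into a median point estimate and a predictive interval. The same logic on simulated draws (fake data standing in for the `rstantools` output; here 4000 draws for each of 3 grid rows):

```r
set.seed(123)
# draws centred at 0, 1 and 2 - one column per grid row
draws <- matrix(rnorm(4000 * 3, mean = rep(c(0, 1, 2), each = 4000)), ncol = 3)

predicted <- apply(draws, 2, stats::median)                           # "most probable estimate"
ci <- t(apply(draws, 2, stats::quantile, probs = c(0.025, 0.975)))    # 95% interval

round(predicted, 1)  # close to 0, 1, 2
```

`rstantools::predictive_interval()` does essentially this quantile computation for posterior-predictive draws, which is why the code below only needs to bind its two columns as `conf.low` and `conf.high`.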
if (type != "fe") ref <- NULL else ref <- NA

  # check if we have terms that are ordinal, and if so,
  # convert factors to ordinal in "newdata" to allow
  # predictions for monotonic models
  if (!is.null(terms)) {
    mf <- insight::get_data(model)
    vo <- names(which(sapply(mf, is.ordered)))
    fac2ord <- which(terms %in% vo)
    if (!.is_empty(fac2ord)) {
      for (i in fac2ord) fitfram[[terms[i]]] <- as.ordered(fitfram[[terms[i]]])
    }
  }

  # compute posterior predictions
  if (ppd) {
    # for binomial models, "newdata" also needs a response
    # value. we take the value for a successful event
    if (model_info$is_binomial) {
      resp.name <- insight::find_response(model)
      resp.value <- insight::get_response(model)
      # successful events
      if (is.factor(resp.value)) {
        fitfram[[resp.name]] <- levels(resp.value)[2]
      } else {
        fitfram[[resp.name]] <- unique(resp.value)[2]
      }
    }
    prdat2 <- prdat <- rstantools::posterior_predict(
      model,
      newdata = fitfram,
      re.form = ref,
      ...
    )
  } else {
    # get posterior distribution of the linear predictor.
    # note that these are not best practice for inferences,
    # because they don't take the measurement error into account
    prdat <- rstantools::posterior_epred(
      model,
      newdata = fitfram,
      re.form = ref,
      re_formula = ref,
      ...
    )
    if (model_info$is_mixed) {
      # tell user
      message("Note: uncertainty of error terms is not taken into account. You may want to use `rstantools::posterior_predict()`.")
    }
  }

  # we have a list of 4000 samples, so we need to coerce to data frame
  prdat <- as.data.frame(prdat)

  # handle cumulative link models
  if (inherits(model, "brmsfit") && model_info$family %in% c("cumulative", "categorical")) {
    tmp <- as.data.frame(lapply(prdat, stats::median))
    tmp <- .gather(tmp, names_to = "grp", values_to = "predicted")
    rownames(tmp) <- NULL
    tmp$grp <- gsub("X", "", tmp$grp, fixed = TRUE)

    resp <- insight::get_response(model)
    if (is.data.frame(resp)) resp <- resp[[1]] # Model must have been using weights
    # Response could be a factor or numeric
    if (is.factor(resp))
      resp.vals <- levels(resp)
    else
      resp.vals <- sort(unique(resp))

    term.cats <- nrow(fitfram)
    fitfram <- do.call(rbind, rep(list(fitfram), times = length(resp.vals)))
    fitfram$response.level <- rep(unique(resp.vals), each = term.cats)
    fitfram$predicted <- tmp$predicted
  } else if (insight::is_multivariate(model)) {
    # handle multivariate response models
    tmp <- as.data.frame(lapply(prdat, stats::median))
    tmp <- .gather(tmp, names_to = "grp", values_to = "predicted")
    rownames(tmp) <- NULL
    tmp$grp <- gsub("X", "", tmp$grp, fixed = TRUE)

    resp.vars <- insight::find_response(model, combine = FALSE)
    # exclude weighting variables
    resp.weights <- unique(insight::find_weights(model))
    if (!is.null(resp.weights)) {
      resp.vars <- setdiff(resp.vars, resp.weights)
    }

    fitfram <- do.call(rbind, rep(list(fitfram), times = length(resp.vars)))
    fitfram$response.level <- ""
    for (i in resp.vars) {
      pos <- string_ends_with(pattern = i, x = tmp$grp)
      if (.is_empty(pos)) {
        i <- gsub(pattern = "[\\_\\.]", replacement = "", x = i)
        # same as
        # i <- gsub(pattern = "(\\_|\\.)", replacement = "", x = i)
        pos <- string_ends_with(pattern = i, x = tmp$grp)
      }
      fitfram$response.level[pos] <- i
    }
    fitfram$predicted <- tmp$predicted
  } else {
    # compute median, as "most probable estimate"
    fitfram$predicted <- sapply(prdat, stats::median)
  }

  # for posterior predictive distributions, we compute
  # the predictive intervals
  if (ppd) {
    # for multivariate response models, we have an array
    # instead of matrix - get CIs for each response
    if (inherits(prdat2, "array")) {
      if (length(dim(prdat2)) == 3) {
        tmp <- do.call(rbind, lapply(1:dim(prdat2)[3], function(.x) {
          as.data.frame(rstantools::predictive_interval(as.matrix(prdat2[, , .x]), prob = ci.lvl))
        }))
      } else {
        tmp <- as.data.frame(rstantools::predictive_interval(prdat2, prob = ci.lvl))
      }
    } else {
      tmp <- rstantools::predictive_interval(prdat2, prob = ci.lvl)
    }
  } else {
    tmp <- rstantools::predictive_interval(as.matrix(prdat), prob = ci.lvl)
  }

  predint <- list(
    tmp[, 1],
    tmp[, 2]
  )

  if (se) {
    # bind predictive intervals into fitfram
    fitfram$conf.low <- predint[[1]]
    fitfram$conf.high <- predint[[2]]
  } else {
    # no CI
    fitfram$conf.low <- NA
    fitfram$conf.high <- NA
  }

  fitfram
}
ggeffects/R/utils_check_transformations.R0000644000176200001440000000440314046746430020352 0ustar liggesusers.has_splines <- function(model) {
  form <- .get_pasted_formula(model)
  if (is.null(form)) return(FALSE)
  any(
    grepl("s\\(([^,)]*)", form) |
      grepl("bs\\(([^,)]*)", form) |
      grepl("ns\\(([^,)]*)", form) |
      grepl("pspline\\(([^,)]*)", form) |
      grepl("poly\\(([^,)]*)", form)
  )
}

.has_poly <- function(model) {
  form <- .get_pasted_formula(model)
  if (is.null(form)) return(FALSE)
  any(grepl("I\\(.*?\\^.*?\\)", form) | grepl("poly\\(([^,)]*)", form))
}

.has_trigonometry <- function(model) {
  form <- .get_pasted_formula(model)
  if (is.null(form)) return(FALSE)
  any(grepl("(sin|cos|tan)\\(([^,)]*)", form))
}

.has_log <- function(model) {
  any(.get_log_terms(model))
}

.get_log_terms <- function(model) {
  form <- .get_pasted_formula(model)
  if (is.null(form)) return(FALSE)
  grepl("(log|log1|log10|log1p|log2)\\(([^,)]*).*", form)
}

.get_offset_log_terms <- function(model) {
  form <- .get_pasted_formula(model)
  if (is.null(form)) return(FALSE)
  grepl("offset\\((log|log1|log10|log1p|log2)\\(([^,)]*).*", form)
}

.get_offset_transformation <- function(model) {
  form <-
.get_pasted_formula(model)
  log_offset <- .get_offset_log_terms(model)
  unname(gsub("offset\\((log|log1|log10|log1p|log2)\\(([^,)]*).*", "\\1", form[log_offset]))
}

.get_pasted_formula <- function(model) {
  tryCatch(
    {
      model_terms <- unlist(.compact_list(insight::find_terms(model)[c("conditional", "random", "instruments")]))
      if (model_terms[1] %in% c("0", "1")) {
        model_terms <- model_terms[-1]
      }
      model_terms
    },
    error = function(x) {
      NULL
    }
  )
}

.which_log_terms <- function(model) {
  form <- .get_pasted_formula(model)
  if (is.null(form)) return(NULL)
  log_terms <- form[grepl("(log|log1|log10|log1p|log2)\\(([^,)]*).*", form)]
  if (length(log_terms) > 0) {
    log_terms <- insight::clean_names(log_terms)
  } else {
    log_terms <- NULL
  }
  log_terms
}

.has_poly_term <- function(x) {
  any(grepl("poly\\(([^,)]*)", x))
}

.uses_all_tag <- function(terms) {
  tags <- unlist(regmatches(
    terms,
    gregexpr(
      pattern = "\\[(.*)\\]",
      text = terms,
      perl = TRUE
    )
  ))
  "[all]" %in% tags
}
ggeffects/R/predict_zero_inflation.R0000644000176200001440000003625014046746430017305 0ustar liggesusers.join_simulations <- function(prediction_data, newdata, prdat, sims, ci, clean_terms) {
  # after "bootstrapping" confidence intervals by simulating from the
  # multivariate normal distribution, we need to prepare the data and
  # calculate "bootstrapped" estimates and CIs.
  prediction_data$sort__id <- seq_len(nrow(prediction_data))
  column_matches <- sapply(colnames(prediction_data), function(.x) any(unique(prediction_data[[.x]]) %in% newdata[[.x]]))

  # we need two data grids here: one for all combinations of levels from the
  # model predictors ("newdata"), and one with the current combinations only
  # for the terms in question ("prediction_data"). "sims" has always the same
  # number of rows as "newdata", but "prediction_data" might be shorter.
# So we merge "prediction_data" and "newdata", add mean and quantiles from "sims"
  # as new variables, and then later only keep the original observations
  # from "prediction_data" - by this, we avoid unequal row-lengths.
  join_by <- colnames(prediction_data)[column_matches]
  prediction_data <- merge(newdata, prediction_data, by = join_by, all = TRUE, sort = FALSE)

  prediction_data$predicted <- apply(sims, 1, mean)
  prediction_data$conf.low <- apply(sims, 1, stats::quantile, probs = 1 - ci)
  prediction_data$conf.high <- apply(sims, 1, stats::quantile, probs = ci)
  prediction_data$std.error <- apply(sims, 1, stats::sd)

  # group_by() changes the order of rows / variables in "prediction_data", however
  # we later add back the original predictions "prdat" (see below), which
  # correspond to the *current* sorting of prediction_data. So we add a dummy-ID,
  # which we use to restore the original sorting of prediction_data later...
  # The following code is a replacement for the dplyr code previously used for
  # grouping and summarizing the data. The previous code was:
  #
  # grp <- rlang::syms(clean_terms)
  # prediction_data <- prediction_data %>%
  #   dplyr::filter(!is.na(.data$sort__id)) %>%
  #   dplyr::group_by(!!!
# grp) %>%
  #   dplyr::summarize(
  #     predicted = mean(.data$predicted),
  #     conf.low = mean(.data$conf.low),
  #     conf.high = mean(.data$conf.high),
  #     std.error = mean(.data$std.error),
  #     id = .data$sort__id
  #   ) %>%
  #   dplyr::ungroup()
  prediction_data <- prediction_data[!is.na(prediction_data$sort__id), , drop = FALSE]
  prediction_data <- cbind(
    stats::aggregate(
      prediction_data[c("predicted", "conf.low", "conf.high", "std.error")],
      by = prediction_data[clean_terms],
      FUN = mean,
      na.rm = TRUE
    ),
    id = prediction_data$sort__id
  )
  rownames(prediction_data) <- NULL

  if (length(clean_terms) == 1) {
    prediction_data <- prediction_data[order(prediction_data[[1]]), , drop = FALSE]
  } else if (length(clean_terms) == 2) {
    prediction_data <- prediction_data[order(prediction_data[[1]], prediction_data[[2]]), , drop = FALSE]
  } else if (length(clean_terms) == 3) {
    prediction_data <- prediction_data[order(prediction_data[[1]], prediction_data[[2]], prediction_data[[3]]), , drop = FALSE]
  } else if (length(clean_terms) == 4) {
    prediction_data <- prediction_data[order(prediction_data[[1]], prediction_data[[2]], prediction_data[[3]], prediction_data[[4]]), , drop = FALSE]
  }

  # we use the predicted values from "predict(type = "response")", but the
  # bootstrapped CI - so we need a small fix here. "predict(type = "response")"
  # works as intended, but the related standard errors are not reliable (due
  # to the combination of the conditional and zero-inflated model), so we need
  # the simulated standard errors and CIs - but we can use the "correct" predictions.
  # in order to make CI and predictions match, we take the simulated CI-range
  # and use the original predicted values as "center" for those CI-ranges.
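The re-centering described in the comments above can be illustrated with a few made-up numbers (this is a sketch, not package code; all values are invented):

```r
predicted <- c(1.2, 3.4)   # from predict(type = "response")
conf.low  <- c(0.8, 2.9)   # lower quantile of the simulated draws
conf.high <- c(1.6, 4.1)   # upper quantile of the simulated draws

ci.range  <- (conf.high - conf.low) / 2  # half-width of the simulated interval
conf.low  <- predicted - ci.range        # re-centered lower bound
conf.high <- predicted + ci.range        # re-centered upper bound
```

The interval width still comes from the simulation; only its midpoint is shifted to the predictions returned by `predict()`.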
if (length(prdat) == nrow(prediction_data)) { prediction_data <- prediction_data[order(prediction_data$id), , drop = FALSE] ci.range <- (prediction_data$conf.high - prediction_data$conf.low) / 2 prediction_data$predicted <- prdat # fix negative CI ci.low <- prediction_data$predicted - ci.range neg.ci <- ci.low < 0 if (any(neg.ci)) { ci.range[neg.ci] <- ci.range[neg.ci] - abs(ci.low[neg.ci]) - 1e-05 prediction_data$std.error[neg.ci] <- prediction_data$std.error[neg.ci] - ((abs(ci.low[neg.ci]) + 1e-05) / stats::qnorm(ci)) } prediction_data$conf.low <- prediction_data$predicted - ci.range prediction_data$conf.high <- prediction_data$predicted + ci.range prediction_data <- .remove_column(prediction_data, "id") } prediction_data } .simulate_zi_predictions <- function(model, newdata, nsim = 1000, terms = NULL, value_adjustment = NULL, condition = NULL) { # Since the zero inflation and the conditional model are working in "opposite # directions", confidence intervals can not be derived directly from the # "predict()"-function. Thus, confidence intervals for type = "fe.zi" are # based on quantiles of simulated draws from a multivariate normal distribution # (see also _Brooks et al. 2017, pp.391-392_ for details). if (inherits(model, "glmmTMB")) { .simulate_predictions_glmmTMB(model, newdata, nsim, terms, value_adjustment, condition) } else if (inherits(model, "MixMod")) { .simulate_predictions_MixMod(model, newdata, nsim, terms, value_adjustment, condition) } else { .simulate_predictions_zeroinfl(model, newdata, nsim, terms, value_adjustment, condition) } } .simulate_predictions_glmmTMB <- function(model, newdata, nsim, terms = NULL, value_adjustment = NULL, condition = NULL) { if (!requireNamespace("lme4", quietly = TRUE)) { stop("You need to install package `lme4` first to compute marginal effects.", call. 
= FALSE) } # Since the zero inflation and the conditional model are working in "opposite # directions", confidence intervals can not be derived directly from the # "predict()"-function. Thus, confidence intervals for type = "fe.zi" are # based on quantiles of simulated draws from a multivariate normal distribution # (see also _Brooks et al. 2017, pp.391-392_ for details). tryCatch( { condformula <- lme4::nobars(stats::formula(model)[-2]) ziformula <- lme4::nobars(stats::formula(model$modelInfo$allForm$ziformula)) # if formula has a polynomial term, and this term is one that is held # constant, model.matrix() with "newdata" will throw an error - so we # re-build the newdata-argument by including all values for poly-terms, if # these are hold constant. fixes <- .rows_to_keep(model, newdata, condformula, ziformula, terms, value_adjustment, condition) if (!is.null(fixes)) { keep <- fixes$keep newdata <- fixes$newdata } else { keep <- NULL } x.cond <- stats::model.matrix(condformula, newdata) beta.cond <- lme4::fixef(model)$cond x.zi <- stats::model.matrix(ziformula, newdata) beta.zi <- lme4::fixef(model)$zi cond.varcov <- insight::get_varcov(model, component = "conditional") zi.varcov <- insight::get_varcov(model, component = "zero_inflated") pred.condpar.psim <- MASS::mvrnorm(n = nsim, mu = beta.cond, Sigma = cond.varcov) pred.cond.psim <- x.cond %*% t(pred.condpar.psim) pred.zipar.psim <- MASS::mvrnorm(n = nsim, mu = beta.zi, Sigma = zi.varcov) pred.zi.psim <- x.zi %*% t(pred.zipar.psim) if (!.is_empty(keep)) { pred.cond.psim <- pred.cond.psim[keep, , drop = FALSE] pred.zi.psim <- pred.zi.psim[keep, , drop = FALSE] } list(cond = pred.cond.psim, zi = pred.zi.psim) }, error = function(x) { x }, warning = function(x) { NULL }, finally = function(x) { NULL } ) } .simulate_predictions_MixMod <- function(model, newdata, nsim, terms = NULL, value_adjustment = NULL, condition = NULL) { if (!requireNamespace("lme4", quietly = TRUE)) { stop("You need to install package `lme4` 
first to compute marginal effects.", call. = FALSE) } tryCatch( { condformula <- stats::formula(model, type = "fixed") ziformula <- stats::formula(model, type = "zi_fixed") # if formula has a polynomial term, and this term is one that is held # constant, model.matrix() with "newdata" will throw an error - so we # re-build the newdata-argument by including all values for poly-terms, if # these are hold constant. fixes <- .rows_to_keep(model, newdata, condformula, ziformula, terms, value_adjustment, condition) if (!is.null(fixes)) { keep <- fixes$keep newdata <- fixes$newdata } else { keep <- NULL } x.cond <- stats::model.matrix(condformula, newdata) beta.cond <- lme4::fixef(model, sub_model = "main") x.zi <- stats::model.matrix(ziformula, newdata) beta.zi <- lme4::fixef(model, sub_model = "zero_part") cond.varcov <- insight::get_varcov(model, component = "conditional") zi.varcov <- insight::get_varcov(model, component = "zero_inflated") pred.condpar.psim <- MASS::mvrnorm(n = nsim, mu = beta.cond, Sigma = cond.varcov) pred.cond.psim <- x.cond %*% t(pred.condpar.psim) pred.zipar.psim <- MASS::mvrnorm(n = nsim, mu = beta.zi, Sigma = zi.varcov) pred.zi.psim <- x.zi %*% t(pred.zipar.psim) if (!.is_empty(keep)) { pred.cond.psim <- pred.cond.psim[keep, , drop = FALSE] pred.zi.psim <- pred.zi.psim[keep, , drop = FALSE] } list(cond = pred.cond.psim, zi = pred.zi.psim) }, error = function(x) { x }, warning = function(x) { NULL }, finally = function(x) { NULL } ) } .simulate_predictions_zeroinfl <- function(model, newdata, nsim = 1000, terms = NULL, value_adjustment = NULL, condition = NULL) { tryCatch( { condformula <- stats::as.formula(paste0("~", .safe_deparse(stats::formula(model)[[3]][[2]]))) ziformula <- stats::as.formula(paste0("~", .safe_deparse(stats::formula(model)[[3]][[3]]))) # if formula has a polynomial term, and this term is one that is held # constant, model.matrix() with "newdata" will throw an error - so we # re-build the newdata-argument by including all 
values for poly-terms, if # these are hold constant. fixes <- .rows_to_keep(model, newdata, condformula, ziformula, terms, value_adjustment, condition) if (!is.null(fixes)) { keep <- fixes$keep newdata <- fixes$newdata } else { keep <- NULL } x.cond <- stats::model.matrix(condformula, model = "count", data = newdata) beta.cond <- stats::coef(model, model = "count") x.zi <- stats::model.matrix(ziformula, model = "zero", data = newdata) beta.zi <- stats::coef(model, model = "zero") cond.varcov <- insight::get_varcov(model, component = "conditional") zi.varcov <- insight::get_varcov(model, component = "zero_inflated") pred.condpar.psim <- MASS::mvrnorm(nsim, mu = beta.cond, Sigma = cond.varcov) pred.cond.psim <- x.cond %*% t(pred.condpar.psim) pred.zipar.psim <- MASS::mvrnorm(nsim, mu = beta.zi, Sigma = zi.varcov) pred.zi.psim <- x.zi %*% t(pred.zipar.psim) if (!.is_empty(keep)) { pred.cond.psim <- pred.cond.psim[keep, , drop = FALSE] pred.zi.psim <- pred.zi.psim[keep, , drop = FALSE] } list(cond = pred.cond.psim, zi = pred.zi.psim) }, error = function(x) { x }, warning = function(x) { NULL }, finally = function(x) { NULL } ) } .rows_to_keep <- function(model, newdata, condformula, ziformula, terms, value_adjustment, condition) { # if formula has a polynomial term, and this term is one that is held # constant, model.matrix() with "newdata" will throw an error - so we # re-build the newdata-argument by including all values for poly-terms, if # these are hold constant. 
const.values <- attr(newdata, "constant.values") condformula_string <- .safe_deparse(condformula) ziformula_string <- .safe_deparse(ziformula) keep <- NULL if (.has_poly_term(condformula_string) || .has_poly_term(ziformula_string)) { model_frame <- insight::get_data(model) polycondcheck <- NULL polyzicheck <- NULL if (.has_poly_term(condformula_string)) { polyterm <- .get_poly_term(condformula_string) if (polyterm %in% names(const.values)) { polycondcheck <- polyterm polydg <- .get_poly_degree(condformula_string) polyvals <- paste0( stats::quantile(model_frame[[polyterm]], probs = seq_len(polydg + 1) / (polydg + 2)), collapse = "," ) terms <- c(terms, sprintf("%s [%s]", polyterm, polyvals)) } } if (.has_poly_term(ziformula_string)) { polyterm <- .get_poly_term(ziformula_string) if (polyterm %in% names(const.values)) { polyzicheck <- polyterm polydg <- .get_poly_degree(ziformula_string) polyvals <- paste0( stats::quantile(model_frame[[polyterm]], probs = seq_len(polydg + 1) / (polydg + 2)), collapse = "," ) terms <- c(terms, sprintf("%s [%s]", polyterm, polyvals)) } } newdata <- .data_grid( model = model, model_frame = model_frame, terms = terms, value_adjustment = value_adjustment, factor_adjustment = FALSE, show_pretty_message = FALSE, condition = condition ) keep.cond <- vector("numeric") keep.zi <- vector("numeric") if (!is.null(polycondcheck)) { keep.cond <- unlist(lapply(polycondcheck, function(.x) { wm <- newdata[[.x]][which.min(abs(newdata[[.x]] - .typical_value(newdata[[.x]], fun = value_adjustment)))] as.vector(which(newdata[[.x]] == wm)) })) } if (!is.null(polyzicheck)) { keep.zi <- unlist(lapply(polyzicheck, function(.x) { wm <- newdata[[.x]][which.min(abs(newdata[[.x]] - .typical_value(newdata[[.x]], fun = value_adjustment)))] as.vector(which(newdata[[.x]] == wm)) })) } keep <- intersect(keep.cond, keep.zi) if (.is_empty(keep)) keep <- unique(c(keep.cond, keep.zi)) } if (.is_empty(keep)) return(NULL) list(keep = keep, newdata = newdata) } 
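The simulation approach shared by the `.simulate_predictions_*()` functions above can be sketched in a few lines: draw coefficient vectors from a multivariate normal distribution (mean = estimated coefficients, Sigma = their variance-covariance matrix), multiply the model matrix of the data grid by the draws, and take quantiles of the resulting linear predictors. All objects below are made up for illustration; only the `MASS::mvrnorm()` call mirrors the actual code:

```r
beta <- c(0.5, -0.2)                                   # estimated coefficients
vc   <- matrix(c(0.04, 0.001, 0.001, 0.09), nrow = 2)  # their variance-covariance
x    <- cbind(1, c(0, 1, 2))                           # model matrix of the data grid

psim <- MASS::mvrnorm(n = 1000, mu = beta, Sigma = vc) # 1000 x 2 coefficient draws
eta  <- x %*% t(psim)                                  # 3 x 1000 simulated linear predictors

# row-wise quantiles give simulation-based confidence intervals
ci <- apply(eta, 1, stats::quantile, probs = c(0.025, 0.975))
```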
.get_zeroinfl_gam_predictions <- function(model, newdata, nsim = 1000) { tryCatch( { mm <- stats::model.matrix(model, data = newdata) linpred <- attr(mm, "lpi", exact = TRUE) cond <- linpred[[1]] zi <- linpred[[2]] x.cond <- mm[, cond] x.zi <- mm[, zi] beta.cond <- stats::coef(model)[cond] beta.zi <- stats::coef(model)[zi] varcov.cond <- stats::vcov(model)[cond, cond] varcov.zi <- stats::vcov(model)[zi, zi] psim.cond <- MASS::mvrnorm(nsim, mu = beta.cond, Sigma = varcov.cond) pred.cond <- x.cond %*% t(psim.cond) psim.zi <- MASS::mvrnorm(nsim, mu = beta.zi, Sigma = varcov.zi) pred.zi <- x.zi %*% t(psim.zi) list(cond = pred.cond, zi = pred.zi) }, error = function(x) { x }, warning = function(x) { NULL }, finally = function(x) { NULL } ) } ggeffects/R/get_predictions_clm2.R0000644000176200001440000000325714046746430016651 0ustar liggesusersget_predictions_clm2 <- function(model, fitfram, ci.lvl, linv, ...) { stop("`ggpredict()` does currently not support clm2-models.", call. = FALSE) # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 fitfram <- cbind(data.frame(as.factor(insight::get_response(model))), fitfram) colnames(fitfram)[1] <- insight::find_response(model) # prediction, with CI prdat <- stats::predict( model, newdata = fitfram, type = "prob", interval = se, level = ci, ... ) # convert to data frame. prdat <- as.data.frame(prdat) # bind predictions to model frame fitfram <- cbind(prdat, fitfram) # get levels of response lv <- levels(insight::get_response(model)) # for proportional ordinal logistic regression (see ordinal::clm), # we have predicted values for each response category. Hence, # gather columns. Since we also have conf. int. 
for each response # category, we need to gather multiple columns at once if (isTRUE(se)) { # length of each variable block l <- seq_len(ncol(prdat) / 3) colnames(fitfram)[l] <- lv fitfram <- .multiple_gather( fitfram, names_to = "response.level", values_to = c("predicted", "conf.low", "conf.high"), columns = list(l, l + length(l), l + 2 * length(l)) ) } else { fitfram <- .gather(fitfram, names_to = "response.level", values_to = "predicted", colnames(prdat)) # No CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/utils_set_attr.R0000644000176200001440000000364414046746430015617 0ustar liggesusers.set_attributes_and_class <- function(data, model, t.title, x.title, y.title, l.title, legend.labels, x.axis.labels, model_info, constant.values = NULL, terms = NULL, original_terms = NULL, at_list = NULL, n.trials = NULL, prediction.interval = NULL, condition = NULL, ci.lvl = .95, type = NULL) { # check correct labels if (!is.null(x.axis.labels) && length(x.axis.labels) != length(stats::na.omit(unique(data$x)))) x.axis.labels <- as.vector(sort(stats::na.omit(unique(data$x)))) rownames(data) <- NULL if (!is.null(at_list) && !is.null(terms)) at_list <- at_list[names(at_list) %in% terms] # add attributes attr(data, "title") <- t.title attr(data, "x.title") <- x.title attr(data, "y.title") <- y.title attr(data, "legend.title") <- l.title attr(data, "legend.labels") <- legend.labels attr(data, "x.axis.labels") <- x.axis.labels attr(data, "constant.values") <- constant.values attr(data, "terms") <- terms attr(data, "original.terms") <- original_terms attr(data, "at.list") <- at_list attr(data, "prediction.interval") <- prediction.interval attr(data, "condition") <- condition attr(data, "ci.lvl") <- ci.lvl attr(data, "type") <- type attr(data, "response.name") <- insight::find_response(model) # remember fit family attr(data, "family") <- model_info$family attr(data, "link") <- model_info$link_function attr(data, "logistic") <- ifelse(model_info$is_binomial 
|| model_info$is_ordinal || model_info$is_multinomial, "1", "0") attr(data, "link_inverse") <- insight::link_inverse(model) attr(data, "link_function") <- insight::link_function(model) attr(data, "is.trial") <- ifelse(model_info$is_trial && inherits(model, "brmsfit"), "1", "0") attr(data, "n.trials") <- n.trials # and model-function attr(data, "fitfun") <- .get_model_function(model) # add class attribute class(data) <- c("ggeffects", class(data)) data } ggeffects/R/get_predictions_generic.R0000644000176200001440000000077214036250437017423 0ustar liggesusersget_predictions_generic <- function(model, fitfram, linv, ...) { if (!requireNamespace("prediction", quietly = TRUE)) { stop("You need to install package `prediction` first to compute adjusted predictions.", call. = FALSE) } prdat <- prediction::prediction( model, data = fitfram, type = "response", ... ) # copy predictions fitfram$predicted <- prdat$fitted # No CI fitfram$conf.low <- NA fitfram$conf.high <- NA fitfram } ggeffects/R/get_predictions_mlogit.R0000644000176200001440000000100414046746430017273 0ustar liggesusersget_predictions_mlogit <- function(model, fitfram, ...) { # bind IDX to new data dat <- insight::get_data(model) fitfram <- do.call(rbind, lapply(1:length(levels(dat$idx$id2)), function(i) { fitfram$idx <- sprintf("%g:%s", i, levels(dat$idx$id2)[i]) fitfram })) prdat <- stats::predict( model, newdata = fitfram, ... ) # stack columns prdat <- utils::stack(as.data.frame(prdat)) colnames(prdat) <- c("predicted", "response.level") cbind(fitfram, prdat) } ggeffects/R/new_data.R0000644000176200001440000000274014046746430014330 0ustar liggesusers#' @title Create a data frame from all combinations of predictor values #' @name new_data #' #' @description Create a data frame for the "newdata"-argument that contains #' all combinations of values from the terms in questions. Similar to #' \code{expand.grid()}. 
The \code{terms}-argument accepts all shortcuts #' for representative values as in \code{ggpredict()}. #' #' @param model A fitted model object. #' @param terms Character vector with the names of those terms from #' \code{model} for which all combinations of values should be created. #' #' @inheritParams ggpredict #' #' @return A data frame containing one row for each combination of values of the #' supplied variables. #' #' @examples #' data(efc) #' fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc) #' new_data(fit, c("c12hour [meansd]", "c161sex")) #' #' nd <- new_data(fit, c("c12hour [meansd]", "c161sex")) #' pr <- predict(fit, type = "response", newdata = nd) #' nd$predicted <- pr #' nd #' #' # compare to #' ggpredict(fit, c("c12hour [meansd]", "c161sex")) #' #' @export new_data <- function(model, terms, typical = "mean", condition = NULL) { .data_grid( model = model, model_frame = insight::get_data(model), terms = terms, value_adjustment = typical, factor_adjustment = TRUE, show_pretty_message = TRUE, condition = condition, emmeans.only = FALSE ) } #' @rdname new_data #' @export data_grid <- new_data ggeffects/R/get_predictions_glmRob.R0000644000176200001440000000127513565167350017237 0ustar liggesusersget_predictions_glmRob <- function(model, fitfram, ci.lvl, linv, value_adjustment, model_class, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) && is.null(vcov.fun) # for models from "robust"-pkg (glmRob) we need to # suppress warnings about fake models prdat <- suppressWarnings(stats::predict( model, newdata = fitfram, type = "link", se.fit = se, ... 
)) # copy predictions .generic_prediction_data(model, fitfram, linv, prdat, se, ci.lvl, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval) } ggeffects/R/post_processing_labels.R0000644000176200001440000000226314030655632017305 0ustar liggesusers.post_processing_labels <- function(model, result, original_model_frame, data_grid, cleaned_terms, original_terms, model_info, type, prediction.interval, at_list, condition = NULL, ci.lvl = .95) { # get axis titles and labels all.labels <- .get_axis_titles_and_labels( original_model_frame = original_model_frame, terms = cleaned_terms, fun = .get_model_function(model), model_info = model_info, no.transform = FALSE, type = type ) # set attributes with necessary information .set_attributes_and_class( data = result, model = model, t.title = all.labels$t.title, x.title = all.labels$x.title, y.title = all.labels$y.title, l.title = all.labels$l.title, legend.labels = attr(result, "legend.labels"), x.axis.labels = all.labels$axis.labels, model_info = model_info, constant.values = attr(data_grid, "constant.values", exact = TRUE), terms = cleaned_terms, original_terms = original_terms, at_list = at_list, n.trials = attr(data_grid, "n.trials", exact = TRUE), prediction.interval = prediction.interval, condition = condition, ci.lvl = ci.lvl, type = type ) } ggeffects/R/get_predictions_MixMod.R0000644000176200001440000001241614046746430017206 0ustar liggesusersget_predictions_MixMod <- function(model, data_grid, ci.lvl, linv, type, terms, value_adjustment, condition, ...) { # does user want standard errors? 
se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # get info about model model_info <- insight::model_info(model) # copy object predicted_data <- data_grid if (!model_info$is_zero_inflated && type %in% c("fe.zi", "re.zi", "zi.prob")) { if (type == "zi.prob") stop("Model has no zero-inflation part.") else if (type == "fe.zi") type <- "fe" else type <- "re" message(sprintf("Model has no zero-inflation part. Changing prediction-type to \"%s\".", type)) } if (model_info$is_zero_inflated && type %in% c("fe", "re")) { if (type == "fe") type <- "fe.zi" else type <- "re.zi" message(sprintf("Model has zero-inflation part, predicted values can only be conditioned on zero-inflation part. Changing prediction-type to \"%s\".", type)) } if (type == "sim") { predicted_data <- .do_simulate(model, terms, ci, ...) } else { response_name <- insight::find_response(model) if (is.null(condition) || !(response_name %in% names(condition))) { warning(sprintf("Results for MixMod-objects may vary depending on which value the response is conditioned on. Make sure to choose a sensible value for '%s' using the 'condition'-argument.", response_name), call. = FALSE) } prtype <- switch( type, "fe" = , "fe.zi" = "mean_subject", "re" = , "re.zi" = "subject_specific", "zi.prob" = "zero_part", "mean_subject" ) prdat <- stats::predict( model, newdata = data_grid, type = prtype, type_pred = "response", se.fit = se, level = ci.lvl, ... 
) if (!is.list(prdat)) { prdat <- list(pred = prdat) } predicted_data$predicted <- prdat$pred if (model_info$is_zero_inflated && prtype == "mean_subject") { add.args <- lapply(match.call(expand.dots = FALSE)$`...`, function(x) x) if ("nsim" %in% names(add.args)) nsim <- eval(add.args[["nsim"]]) else nsim <- 1000 model_frame <- insight::get_data(model) clean_terms <- .clean_terms(terms) newdata <- .data_grid( model = model, model_frame = model_frame, terms = terms, value_adjustment = value_adjustment, factor_adjustment = FALSE, show_pretty_message = FALSE, condition = condition ) # Since the zero inflation and the conditional model are working in "opposite # directions", confidence intervals can not be derived directly from the # "predict()"-function. Thus, confidence intervals for type = "fe.zi" are # based on quantiles of simulated draws from a multivariate normal distribution # (see also _Brooks et al. 2017, pp.391-392_ for details). prdat.sim <- .simulate_zi_predictions(model, newdata, nsim, terms, value_adjustment, condition) if (is.null(prdat.sim) || inherits(prdat.sim, c("error", "simpleError"))) { insight::print_color("Error: Confidence intervals could not be computed.\n", "red") if (inherits(prdat.sim, c("error", "simpleError"))) { cat(sprintf("* Reason: %s\n", .safe_deparse(prdat.sim[[1]]))) cat(sprintf("* Source: %s\n", .safe_deparse(prdat.sim[[2]]))) } predicted_data$conf.low <- NA predicted_data$conf.high <- NA } else { # we need two data grids here: one for all combination of levels from the # model predictors ("newdata"), and one with the current combinations only # for the terms in question ("data_grid"). "sims" has always the same # number of rows as "newdata", but "data_grid" might be shorter. So we # merge "data_grid" and "newdata", add mean and quantiles from "sims" # as new variables, and then later only keep the original observations # from "data_grid" - by this, we avoid unequal row-lengths. 
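As a sketch of how the two simulated parts are combined on the response scale in the line that follows (assuming a log link in the conditional part and a logit link in the zero-inflation part; the numbers are invented):

```r
eta_cond <- 1.1    # simulated linear predictor, conditional (count) part
eta_zi   <- -0.4   # simulated linear predictor, zero-inflation part

mu     <- exp(eta_cond)           # conditional mean on the response scale
p_zero <- stats::plogis(eta_zi)   # probability of a structural zero

mu * (1 - p_zero)                 # expected response of the zero-inflated model
```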
sims <- exp(prdat.sim$cond) * (1 - stats::plogis(prdat.sim$zi)) predicted_data <- .join_simulations(data_grid, newdata, prdat, sims, ci, clean_terms) } } else { if (.obj_has_name(prdat, "upp")) { predicted_data$conf.low <- prdat$low predicted_data$conf.high <- prdat$upp } else if (!is.null(prdat$se.fit)) { if (type == "zi.prob") { lf <- stats::qlogis linv <- stats::plogis } else { lf <- insight::link_function(model) if (is.null(lf)) lf <- function(x) x } predicted_data$conf.low <- linv(lf(predicted_data$predicted) - stats::qnorm(ci) * prdat$se.fit) predicted_data$conf.high <- linv(lf(predicted_data$predicted) + stats::qnorm(ci) * prdat$se.fit) } else { predicted_data$conf.low <- NA predicted_data$conf.high <- NA } } # copy standard errors attr(predicted_data, "std.error") <- prdat$se.fit } attr(predicted_data, "prediction.interval") <- type %in% c("re", "re.zi", "sim") predicted_data } ggeffects/R/utils_get_cleaned_terms.R0000644000176200001440000000106613710753570017432 0ustar liggesusers.clean_terms <- function(x) { # get positions of variable names and see if we have # a suffix for certain values cleaned.pos <- regexpr(pattern = "\\[", x) # cleaned.pos <- regexpr(pattern = "(\\s|\\[)", x) # position "-1" means we only had variable name, no suffix replacers <- which(cleaned.pos == -1) # replace -1 with number of chars cleaned.pos[replacers] <- nchar(x)[replacers] # get variable names only x <- trimws(substr(x, 0, cleaned.pos)) # be sure to remove any brackets trimws(sub("[", "", x, fixed = TRUE)) } ggeffects/R/predictions.R0000644000176200001440000002404114046746430015067 0ustar liggesusers# select prediction method, based on model-object select_prediction_method <- function(model_class, model, data_grid, ci.lvl, type, model_info, ppd, terms, value_adjustment, vcov.fun, vcov.type, vcov.args, condition, interval, ...) 
{ # get link-inverse-function linv <- insight::link_inverse(model) if (is.null(linv)) linv <- function(x) x if (model_class == "svyglm") { prediction_data <- get_predictions_svyglm(model, data_grid, ci.lvl, linv, ...) } else if (model_class == "svyglm.nb") { prediction_data <- get_predictions_svyglmnb(model, data_grid, ci.lvl, linv, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) } else if (model_class == "stanreg") { prediction_data <- get_predictions_stan(model, data_grid, ci.lvl, type, model_info, ppd, terms, ...) } else if (model_class == "brmsfit") { prediction_data <- get_predictions_stan(model, data_grid, ci.lvl, type, model_info, ppd, terms, ...) } else if (model_class == "coxph" && type != "surv" && type != "cumhaz") { prediction_data <- get_predictions_coxph(model, data_grid, ci.lvl, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) } else if (model_class == "coxph" && type %in% c("surv", "cumhaz")) { prediction_data <- get_predictions_survival(model, data_grid, ci.lvl, type, terms, ...) } else if (model_class == "ols") { prediction_data <- get_predictions_ols(model, data_grid, ci.lvl, ...) } else if (model_class %in% c("lrm", "orm")) { prediction_data <- get_predictions_lrm(model, data_grid, ci.lvl, linv, ...) } else if (model_class == "glimML") { prediction_data <- get_predictions_glimML(model, data_grid, ci.lvl, linv, ...) } else if (model_class == "glmmTMB") { prediction_data <- get_predictions_glmmTMB(model, data_grid, ci.lvl, linv, type, terms, value_adjustment, condition, ...) } else if (model_class == "wbm") { prediction_data <- get_predictions_wbm(model, data_grid, ci.lvl, linv, type, terms, condition, ...) } else if (model_class %in% c("lmer", "nlmer", "glmer")) { prediction_data <- get_predictions_merMod(model, data_grid, ci.lvl, linv, type, terms, value_adjustment, condition, ...) 
} else if (model_class == "geeglm") { prediction_data <- get_predictions_geeglm(model, data_grid, ci.lvl, linv, type, model_class, value_adjustment, terms, condition, ...) } else if (model_class == "gamlss") { prediction_data <- get_predictions_gamlss(model, data_grid, ci.lvl, terms, model_class, value_adjustment, condition, ...) } else if (model_class == "bamlss") { prediction_data <- get_predictions_bamlss(model, data_grid, linv, ...) } else if (model_class == "bayesx") { prediction_data <- get_predictions_bayesx(model, data_grid, ...) } else if (model_class == "cgam") { prediction_data <- get_predictions_cgam(model, data_grid, ci.lvl, linv, value_adjustment, model_class, terms, condition, ...) } else if (model_class == "gam") { prediction_data <- get_predictions_gam(model, data_grid, ci.lvl, linv, type, ...) } else if (model_class == "Gam") { prediction_data <- get_predictions_Gam(model, data_grid, ci.lvl, linv, value_adjustment, terms, model_class, condition, ...) # } else if (model_class == "vgam") { # prediction_data <- get_predictions_vgam(model, data_grid, ci.lvl, linv, ...) } else if (model_class == "vglm") { prediction_data <- get_predictions_vglm(model, data_grid, ci.lvl, linv, ...) } else if (model_class == "tobit") { prediction_data <- get_predictions_tobit(model, data_grid, ci.lvl, linv, ...) } else if (model_class %in% c("lme", "gls", "plm")) { prediction_data <- get_predictions_lme(model, data_grid, ci.lvl, linv, type, terms, value_adjustment, model_class, vcov.fun, vcov.type, vcov.args, condition, ...) } else if (model_class == "gee") { prediction_data <- get_predictions_gee(model, terms, ...) } else if (model_class %in% c("multinom", "bracl", "brmultinom")) { prediction_data <- get_predictions_multinom(model, data_grid, ci.lvl, linv, value_adjustment, terms, model_class, ...) } else if (model_class == "clmm") { prediction_data <- get_predictions_clmm(model, terms, value_adjustment, condition, ci.lvl, linv, ...) 
} else if (model_class == "clm") { prediction_data <- get_predictions_clm(model, data_grid, ci.lvl, linv, ...) } else if (model_class == "clm2") { prediction_data <- get_predictions_clm2(model, data_grid, ci.lvl, linv, ...) } else if (model_class == "Zelig-relogit") { prediction_data <- get_predictions_zelig(model, data_grid, ci.lvl, linv, ...) } else if (model_class == "mixor") { prediction_data <- get_predictions_mixor(model, data_grid, ci.lvl, linv, value_adjustment, terms, model_class, condition, ...) } else if (model_class == "polr") { prediction_data <- get_predictions_polr(model, data_grid, ci.lvl, linv, value_adjustment, terms, model_class, vcov.fun, vcov.type, vcov.args, condition, interval, ...) } else if (model_class %in% c("averaging", "betareg", "truncreg", "ivreg", "vgam", "fixest", "feglm", "glmx")) { prediction_data <- get_predictions_generic2(model, data_grid, ci.lvl, linv, type, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) } else if (model_class %in% c("zeroinfl", "hurdle", "zerotrunc")) { prediction_data <- get_predictions_zeroinfl(model, data_grid, ci.lvl, linv, type, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) } else if (model_class %in% c("glm", "glm.nb")) { prediction_data <- get_predictions_glm(model, data_grid, ci.lvl, linv, value_adjustment, model_class, terms, vcov.fun, vcov.type, vcov.args, condition, interval, type, ...) } else if (model_class %in% c("rq")) { prediction_data <- get_predictions_rq(model, data_grid, ci.lvl, ...) } else if (model_class %in% c("lmrob")) { prediction_data <- get_predictions_lmrob_base(model, data_grid, ci.lvl, ...) } else if (model_class %in% c("glmrob")) { prediction_data <- get_predictions_glmrob_base(model, data_grid, ci.lvl, linv, ...) 
} else if (model_class %in% c("glmRob")) { prediction_data <- get_predictions_glmRob(model, data_grid, ci.lvl, linv, value_adjustment, model_class, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) } else if (model_class == "logistf") { prediction_data <- get_predictions_logistf(model, data_grid, terms, ...) } else if (model_class == "mclogit") { prediction_data <- get_predictions_mclogit(model, data_grid, ci.lvl, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, ...) } else if (model_class == "mlogit") { prediction_data <- get_predictions_mlogit(model, data_grid, ...) } else if (model_class == "lm") { prediction_data <- get_predictions_lm(model, data_grid, ci.lvl, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval, type, ...) } else if (model_class == "MixMod") { prediction_data <- get_predictions_MixMod(model, data_grid, ci.lvl, linv, type, terms, value_adjustment, condition, ...) } else if (model_class == "MCMCglmm") { prediction_data <- get_predictions_MCMCglmm(model, data_grid, ci.lvl, interval, terms, value_adjustment, condition, ...) } else { prediction_data <- get_predictions_generic(model, data_grid, linv, ...) } prediction_data } .generic_prediction_data <- function(model, fitfram, linv, prdat, se, ci.lvl, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition = NULL, interval = NULL) { # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # copy predictions if (typeof(prdat) == "double") .predicted <- prdat else .predicted <- prdat$fit # get standard errors, if computed if (.obj_has_name(prdat, "se.fit")) { se.fit <- prdat$se.fit # reset interval, since we have normal confidence intervals already here if (interval == "confidence") interval <- NULL } else { se.fit <- NULL } # get predicted values, on link-scale fitfram$predicted <- .predicted # did user request robust standard errors? 
if (!is.null(vcov.fun) || (!is.null(interval) && se)) {
    se.pred <-
      .standard_error_predictions(
        model = model,
        prediction_data = fitfram,
        value_adjustment = value_adjustment,
        terms = terms,
        model_class = model_class,
        vcov.fun = vcov.fun,
        vcov.type = vcov.type,
        vcov.args = vcov.args,
        condition = condition,
        interval = interval
      )

    if (.check_returned_se(se.pred)) {
      fitfram <- se.pred$prediction_data
      se.fit <- se.pred$se.fit
      se <- TRUE
    } else {
      se.fit <- NULL
      se <- FALSE
    }
  } else {
    se.pred <- NULL
  }

  # did user request standard errors? if yes, compute CI
  if (se && !is.null(se.fit)) {
    fitfram$conf.low <- linv(fitfram$predicted - stats::qnorm(ci) * se.fit)
    fitfram$conf.high <- linv(fitfram$predicted + stats::qnorm(ci) * se.fit)

    # copy standard errors
    attr(fitfram, "std.error") <- se.fit

    if (!is.null(se.pred) && length(se.pred) > 0)
      attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval")
  } else {
    # No CI
    fitfram$conf.low <- NA
    fitfram$conf.high <- NA
  }

  # transform predicted values
  fitfram$predicted <- linv(fitfram$predicted)

  fitfram
}
ggeffects/R/moderator_pattern.R0000644000176200001440000001160314046746430016275 0ustar liggesusers#' @title Calculate representative values of a vector
#' @name values_at
#'
#' @description This function calculates representative values of a vector,
#' like minimum/maximum values or lower, median and upper quartile etc.,
#' which can be used for numeric vectors to plot marginal effects at these
#' representative values.
#'
#' @param x A numeric vector.
#' @param values Character vector, naming a pattern for which representative values
#' should be calculated.
#' \describe{ #' \item{\code{"minmax"}}{(default) minimum and maximum values (lower and upper bounds) of the moderator are used to plot the interaction between independent variable and moderator.} #' \item{\code{"meansd"}}{uses the mean value of the moderator as well as one standard deviation below and above mean value to plot the effect of the moderator on the independent variable.} #' \item{\code{"zeromax"}}{is similar to the \code{"minmax"} option, however, \code{0} is always used as minimum value for the moderator. This may be useful for predictors that don't have an empirical zero-value, but absence of moderation should be simulated by using 0 as minimum.} #' \item{\code{"quart"}}{calculates and uses the quartiles (lower, median and upper) of the moderator value, \emph{including} minimum and maximum value.} #' \item{\code{"quart2"}}{calculates and uses the quartiles (lower, median and upper) of the moderator value, \emph{excluding} minimum and maximum value.} #' \item{\code{"all"}}{uses all values of the moderator variable. Note that this option only applies to \code{type = "eff"}, for numeric moderator values.} #' } #' #' @return A numeric vector of length two or three, representing the required #' values from \code{x}, like minimum/maximum value or mean and +/- 1 SD. If #' \code{x} is missing, a function, pre-programmed with \code{n} and #' \code{length} is returned. See examples. #' #' @examples #' data(efc) #' values_at(efc$c12hour) #' values_at(efc$c12hour, "quart2") #' #' mean_sd <- values_at(values = "meansd") #' mean_sd(efc$c12hour) #' @export values_at <- function(x, values = "meansd") { force(values) .values_at <- function(x) { # check if representative value is possible to compute # e.g. for quantiles, if we have at least three values values <- check_rv(values, x) # we have more than two values, so re-calculate effects, just using # min and max value of moderator. 
if (values == "minmax") { # retrieve min and max values mv.min <- min(x, na.rm = TRUE) mv.max <- max(x, na.rm = TRUE) # re-compute effects, prepare xlevels xl <- c(mv.min, mv.max) # we have more than two values, so re-calculate effects, just using # 0 and max value of moderator. } else if (values == "zeromax") { # retrieve max values mv.max <- max(x, na.rm = TRUE) # re-compute effects, prepare xlevels xl <- c(0, mv.max) # compute mean +/- sd } else if (values == "meansd") { # retrieve mean and sd mv.mean <- mean(x, na.rm = TRUE) mv.sd <- stats::sd(x, na.rm = TRUE) # re-compute effects, prepare xlevels xl <- c(mv.mean - mv.sd, mv.mean, mv.mean + mv.sd) } else if (values == "all") { # re-compute effects, prepare xlevels xl <- as.vector(unique(sort(x, na.last = NA))) } else if (values == "quart") { # re-compute effects, prepare xlevels xl <- as.vector(stats::quantile(x, na.rm = TRUE)) } else if (values == "quart2") { # re-compute effects, prepare xlevels xl <- as.vector(stats::quantile(x, na.rm = TRUE))[2:4] } if (is.numeric(x)) { if (is.whole(x)) { rv <- round(xl, 1) if (length(unique(rv)) < length(rv)) rv <- unique(round(xl, 2)) } else rv <- round(xl, 2) if (length(unique(rv)) < length(rv)) { rv <- unique(round(xl, 3)) if (length(unique(rv)) < length(rv)) rv <- unique(round(xl, 4)) } } else { rv <- xl } rv } if (missing(x)) { .values_at } else { .values_at(x) } } check_rv <- function(values, x) { if ((is.factor(x) || is.character(x)) && values != "all") { # tell user that quart won't work message(paste0("Cannot use '", values, "' for factors or character vectors. Defaulting `values` to `all`.")) values <- "all" } if (is.numeric(x)) { mvc <- length(unique(as.vector(stats::quantile(x, na.rm = TRUE)))) if (values %in% c("quart", "quart2") && mvc < 3) { # tell user that quart won't work message("Could not compute quartiles, too small range of variable. 
Defaulting `values` to `minmax`.") values <- "minmax" } } values } #' @rdname values_at #' @export representative_values <- values_at ggeffects/R/efc.R0000644000176200001440000000056213735301353013276 0ustar liggesusers#' @docType data #' @title Sample dataset from the EUROFAMCARE project #' @name efc #' @aliases efc_test #' @keywords data #' #' @description A SPSS sample data set, imported with the \code{sjlabelled::read_spss()} function. #' #' @examples #' # Attach EFC-data #' data(efc) #' #' # Show structure #' str(efc) #' #' # show first rows #' head(efc) NULL ggeffects/R/get_predictions_mclogit.R0000644000176200001440000000446514036250437017450 0ustar liggesusersget_predictions_mclogit <- function(model, fitfram, ci.lvl, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) && is.null(vcov.fun) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # add response to new data resp <- insight::find_response(model, combine = FALSE) cn <- c(colnames(fitfram), resp) for (r in resp) { fitfram <- cbind(fitfram, 1) } colnames(fitfram) <- cn prdat <- stats::predict( model, newdata = fitfram, type = "response", se.fit = se, ... ) # did user request standard errors? 
# if yes, compute CI
  if (!is.null(vcov.fun)) {
    # copy predictions
    if ("fit" %in% names(prdat))
      fitfram$predicted <- as.vector(prdat$fit)
    else
      fitfram$predicted <- as.vector(prdat)

    se.pred <-
      .standard_error_predictions(
        model = model,
        prediction_data = fitfram,
        value_adjustment = value_adjustment,
        terms = terms,
        model_class = model_class,
        vcov.fun = vcov.fun,
        vcov.type = vcov.type,
        vcov.args = vcov.args,
        condition = condition
      )

    if (.check_returned_se(se.pred)) {
      se.fit <- se.pred$se.fit
      fitfram <- se.pred$prediction_data

      # CI
      fitfram$conf.low <- fitfram$predicted - stats::qnorm(ci) * se.fit
      fitfram$conf.high <- fitfram$predicted + stats::qnorm(ci) * se.fit

      # copy standard errors
      attr(fitfram, "std.error") <- se.fit
      attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval")
    } else {
      # CI
      fitfram$conf.low <- NA
      fitfram$conf.high <- NA
    }
  } else if (se) {
    # copy predictions
    fitfram$predicted <- prdat$fit

    # calculate CI
    fitfram$conf.low <- prdat$fit - stats::qnorm(ci) * prdat$se.fit
    fitfram$conf.high <- prdat$fit + stats::qnorm(ci) * prdat$se.fit

    # copy standard errors
    attr(fitfram, "std.error") <- prdat$se.fit
  } else {
    # copy predictions
    fitfram$predicted <- as.vector(prdat)

    # no CI
    fitfram$conf.low <- NA
    fitfram$conf.high <- NA
  }

  fitfram
}
ggeffects/R/get_predictions_coxph.R0000644000176200001440000000412414046746430017127 0ustar liggesusersget_predictions_coxph <- function(model, fitfram, ci.lvl, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) {
  # does user want standard errors?
  se <- !is.null(ci.lvl) && !is.na(ci.lvl)

  # compute ci, two-ways
  if (!is.null(ci.lvl) && !is.na(ci.lvl))
    ci <- (1 + ci.lvl) / 2
  else
    ci <- .975

  prdat <- stats::predict(
    model,
    newdata = fitfram,
    type = "lp",
    se.fit = se,
    ...
  )

  # did user request standard errors?
# if yes, compute CI
  if (!is.null(vcov.fun) || (!is.null(interval) && interval == "prediction")) {
    # copy predictions
    fitfram$predicted <- exp(prdat$fit)

    se.pred <-
      .standard_error_predictions(
        model = model,
        prediction_data = fitfram,
        value_adjustment = value_adjustment,
        terms = terms,
        model_class = model_class,
        vcov.fun = vcov.fun,
        vcov.type = vcov.type,
        vcov.args = vcov.args,
        condition = condition,
        interval = interval
      )

    if (.check_returned_se(se.pred)) {
      se.fit <- se.pred$se.fit
      fitfram <- se.pred$prediction_data

      # CI
      fitfram$conf.low <- fitfram$predicted - stats::qnorm(ci) * se.fit
      fitfram$conf.high <- fitfram$predicted + stats::qnorm(ci) * se.fit

      # copy standard errors
      attr(fitfram, "std.error") <- se.fit
      attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval")
    } else {
      # CI
      fitfram$conf.low <- NA
      fitfram$conf.high <- NA
    }
  } else if (se) {
    # copy predictions
    fitfram$predicted <- exp(prdat$fit)

    # calculate CI
    fitfram$conf.low <- exp(prdat$fit - stats::qnorm(ci) * prdat$se.fit)
    fitfram$conf.high <- exp(prdat$fit + stats::qnorm(ci) * prdat$se.fit)

    # copy standard errors
    attr(fitfram, "std.error") <- prdat$se.fit
  } else {
    # copy predictions
    fitfram$predicted <- exp(as.vector(prdat))

    # no CI
    fitfram$conf.low <- NA
    fitfram$conf.high <- NA
  }

  fitfram
}
ggeffects/R/get_predictions_zelig.R0000644000176200001440000000104513451124203017102 0ustar liggesusersget_predictions_zelig <- function(model, fitfram, ci.lvl, linv, ...) {
  stop("`ggpredict()` does not currently support Zelig-models.", call. = FALSE)

  # does user want standard errors?
  se <- !is.null(ci.lvl) && !is.na(ci.lvl)

  # compute ci, two-ways
  if (!is.null(ci.lvl) && !is.na(ci.lvl))
    ci <- (1 + ci.lvl) / 2
  else
    ci <- .975

  # prediction, with CI
  # prdat <-
  #   Zelig::predict(
  #     model,
  #     newdata = fitfram,
  #     interval = se,
  #     level = ci,
  #     ...
# ) NULL } ggeffects/R/get_predictions_cgam.R0000644000176200001440000000367514036250437016723 0ustar liggesusersget_predictions_cgam <- function(model, fitfram, ci.lvl, linv, value_adjustment, model_class, terms, condition, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # does user want standard errors? if (se) interval <- "confidence" else interval <- "none" # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- stats::predict( model, newData = fitfram, type = "link", interval = "none", ... ) # copy predictions if (typeof(prdat) == "double") .predicted <- prdat else .predicted <- prdat$fit # get standard errors, if computed # get predicted values, on link-scale fitfram$predicted <- .predicted if (se) { se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, terms = terms, model_class = model_class, vcov.fun = NULL, vcov.type = NULL, vcov.args = NULL, condition = condition, interval = interval ) if (.check_returned_se(se.pred)) { fitfram <- se.pred$prediction_data se.fit <- se.pred$se.fit se <- TRUE } else { se.fit <- NULL se <- FALSE } } else { se.pred <- NULL } if (se) { fitfram$conf.low <- linv(fitfram$predicted - stats::qnorm(ci) * se.fit) fitfram$conf.high <- linv(fitfram$predicted + stats::qnorm(ci) * se.fit) # copy standard errors attr(fitfram, "std.error") <- se.fit if (!is.null(se.pred) && length(se.pred) > 0) attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval") } else { # No CI fitfram$conf.low <- NA fitfram$conf.high <- NA } # transform predicted values fitfram$predicted <- linv(fitfram$predicted) fitfram } ggeffects/R/get_predictions_vgam.R0000644000176200001440000000041313451124203016720 0ustar liggesusersget_predictions_vgam <- function(model, fitfram, ci.lvl, linv, ...) 
{ prdat <- stats::predict( model, newdata = fitfram, type = "link", se.fit = FALSE ) # copy predictions fitfram$predicted <- linv(as.vector(prdat)) fitfram } ggeffects/R/get_predictions_lm.R0000644000176200001440000000525714036250437016422 0ustar liggesusersget_predictions_lm <- function(model, fitfram, ci.lvl, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval, type, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) && is.null(vcov.fun) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- stats::predict( model, newdata = fitfram, type = "response", se.fit = se, ... ) if (type == "sim") { # simulate predictions fitfram <- .do_simulate(model, terms, ci, ...) } else if (!is.null(vcov.fun) || (!is.null(interval) && interval == "prediction")) { # did user request standard errors? if yes, compute CI # copy predictions if ("fit" %in% names(prdat)) fitfram$predicted <- as.vector(prdat$fit) else fitfram$predicted <- as.vector(prdat) se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, terms = terms, model_class = model_class, vcov.fun = vcov.fun, vcov.type = vcov.type, vcov.args = vcov.args, condition = condition, interval = interval ) if (.check_returned_se(se.pred)) { se.fit <- se.pred$se.fit fitfram <- se.pred$prediction_data # CI fitfram$conf.low <- fitfram$predicted - stats::qnorm(ci) * se.fit fitfram$conf.high <- fitfram$predicted + stats::qnorm(ci) * se.fit # copy standard errors attr(fitfram, "std.error") <- se.fit attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval") } else { # CI fitfram$conf.low <- NA fitfram$conf.high <- NA } } else if (se) { # copy predictions fitfram$predicted <- prdat$fit # calculate CI fitfram$conf.low <- prdat$fit - stats::qnorm(ci) * prdat$se.fit fitfram$conf.high <- prdat$fit + stats::qnorm(ci) * prdat$se.fit # copy 
# standard errors
    attr(fitfram, "std.error") <- prdat$se.fit
  } else {
    # check if we have a multivariate response model
    pdim <- dim(prdat)

    if (!is.null(pdim) && pdim[2] > 1) {
      tmp <- cbind(fitfram, as.data.frame(prdat))
      gather.vars <- (ncol(fitfram) + 1):ncol(tmp)
      fitfram <- .gather(
        tmp,
        names_to = "response.level",
        values_to = "predicted",
        colnames(tmp)[gather.vars]
      )
    } else {
      # copy predictions
      fitfram$predicted <- as.vector(prdat)
    }

    # no CI
    fitfram$conf.low <- NA
    fitfram$conf.high <- NA
  }

  fitfram
}
ggeffects/R/get_predictions_svyglm.R0000644000176200001440000000260513451124203017314 0ustar liggesusersget_predictions_svyglm <- function(model, fitfram, ci.lvl, linv, ...) {
  # does user want standard errors?
  se <- !is.null(ci.lvl) && !is.na(ci.lvl)

  # compute ci, two-ways
  if (!is.null(ci.lvl) && !is.na(ci.lvl))
    ci <- (1 + ci.lvl) / 2
  else
    ci <- .975

  # get predictions
  prdat <- stats::predict(
    model,
    newdata = fitfram,
    type = "link",
    se.fit = se,
    ...
  )

  # check if user wants standard errors
  if (se) {
    # get variance matrix for standard errors.
# "survey" stores the information
    # somewhat different from classical predict function
    vv <- attr(prdat, "var")

    # compute standard errors
    if (is.matrix(vv))
      prdat <- as.data.frame(cbind(prdat, sqrt(diag(vv))))
    else
      prdat <- as.data.frame(cbind(prdat, sqrt(vv)))

    # consistent column names
    colnames(prdat) <- c("fit", "se.fit")

    # copy predictions
    fitfram$predicted <- linv(prdat$fit)

    # calculate CI
    fitfram$conf.low <- linv(prdat$fit - stats::qnorm(ci) * prdat$se.fit)
    fitfram$conf.high <- linv(prdat$fit + stats::qnorm(ci) * prdat$se.fit)

    # copy standard errors
    attr(fitfram, "std.error") <- prdat$se.fit
  } else {
    # copy predictions
    fitfram$predicted <- linv(as.vector(prdat))

    # no CI
    fitfram$conf.low <- NA
    fitfram$conf.high <- NA
  }

  fitfram
}
ggeffects/R/get_predictions_polr.R0000644000176200001440000000411714036250437016760 0ustar liggesusersget_predictions_polr <- function(model, fitfram, ci.lvl, linv, value_adjustment, terms, model_class, vcov.fun, vcov.type, vcov.args, condition, interval, ...) {
  se <- (!is.null(ci.lvl) && !is.na(ci.lvl)) || !is.null(vcov.fun)

  # compute ci, two-ways
  if (!is.null(ci.lvl) && !is.na(ci.lvl))
    ci <- (1 + ci.lvl) / 2
  else
    ci <- .975

  prdat <- stats::predict(
    model,
    newdata = fitfram,
    type = "probs",
    ...
  )

  prdat <- as.data.frame(prdat)

  # usually, we have same numbers of rows for predictions and model frame.
  # this is, however, not true when calling the "emm()" function. in this
  # case, just return predictions
  if (nrow(prdat) > nrow(fitfram) && ncol(prdat) == 1) {
    colnames(prdat)[1] <- "predicted"
    return(.rownames_as_column(prdat, var = "response.level"))
  }

  # bind predictions to model frame
  fitfram <- cbind(prdat, fitfram)

  # for proportional ordinal logistic regression (see MASS::polr),
  # we have predicted values for each response category.
# Hence, gather columns
  fitfram <- .gather(fitfram, names_to = "response.level", values_to = "predicted", colnames(prdat))

  se.pred <-
    .standard_error_predictions(
      model = model,
      prediction_data = fitfram,
      value_adjustment = value_adjustment,
      terms = terms,
      model_class = model_class,
      vcov.fun = vcov.fun,
      vcov.type = vcov.type,
      vcov.args = vcov.args,
      condition = condition,
      interval = interval
    )

  if (.check_returned_se(se.pred) && isTRUE(se)) {
    se.fit <- se.pred$se.fit
    fitfram <- se.pred$prediction_data

    # CI
    fitfram$conf.low <- linv(stats::qlogis(fitfram$predicted) - stats::qnorm(ci) * se.fit)
    fitfram$conf.high <- linv(stats::qlogis(fitfram$predicted) + stats::qnorm(ci) * se.fit)

    # copy standard errors
    attr(fitfram, "std.error") <- se.fit
    attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval")
  } else {
    # CI
    fitfram$conf.low <- NA
    fitfram$conf.high <- NA
  }

  fitfram
}
ggeffects/R/get_predictions_geeglm.R0000644000176200001440000000270014046746430017244 0ustar liggesusersget_predictions_geeglm <- function(model, fitfram, ci.lvl, linv, type, model_class, value_adjustment, terms, condition, ...) {
  se <- (!is.null(ci.lvl) && !is.na(ci.lvl))

  # compute ci, two-ways
  if (!is.null(ci.lvl) && !is.na(ci.lvl))
    ci <- (1 + ci.lvl) / 2
  else
    ci <- .975

  # get predictions
  prdat <- stats::predict(
    model,
    newdata = fitfram,
    ...
) fitfram$predicted <- as.vector(prdat) # get standard errors from variance-covariance matrix se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, type = type, terms = terms, model_class = model_class, vcov.fun = NULL, vcov.type = NULL, vcov.args = NULL, condition = condition, interval = NULL ) if (.check_returned_se(se.pred) && isTRUE(se)) { se.fit <- se.pred$se.fit fitfram <- se.pred$prediction_data # CI fitfram$conf.low <- linv(fitfram$predicted - stats::qnorm(ci) * se.fit) fitfram$conf.high <- linv(fitfram$predicted + stats::qnorm(ci) * se.fit) # copy standard errors attr(fitfram, "std.error") <- se.fit attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval") } else { # CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram$predicted <- linv(fitfram$predicted) fitfram } ggeffects/R/utils_is_empty.R0000644000176200001440000000120213567464647015625 0ustar liggesusers.is_empty <- function(x) { # do we have a valid vector? if (!is.null(x)) { # if it's a character, check if we have only one element in that vector if (is.character(x)) { # characters may also be of length 0 if (length(x) == 0) return(TRUE) # else, check first elements of x zero_len <- isTRUE((nchar(x) == 0)[1]) if (length(x) > 0) x <- x[1] # we have a non-character vector here. check for length } else if (is.list(x)) { x <- .compact_list(x) zero_len <- length(x) == 0 } else { zero_len <- length(x) == 0 } } any(is.null(x) || zero_len || all(is.na(x))) } ggeffects/R/get_predictions_ols.R0000644000176200001440000000166513541155270016605 0ustar liggesusersget_predictions_ols <- function(model, fitfram, ci.lvl, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- stats::predict( model, newdata = fitfram, type = "lp", se.fit = se, ... 
) if (se) { # copy predictions fitfram$predicted <- prdat$linear.predictors # calculate CI fitfram$conf.low <- prdat$linear.predictors - stats::qnorm(ci) * prdat$se.fit fitfram$conf.high <- prdat$linear.predictors + stats::qnorm(ci) * prdat$se.fit # copy standard errors attr(fitfram, "std.error") <- prdat$se.fit } else { # copy predictions fitfram$predicted <- as.vector(prdat$linear.predictors) # no CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/get_predictions_merMod.R0000644000176200001440000000437014036250437017230 0ustar liggesusersget_predictions_merMod <- function(model, fitfram, ci.lvl, linv, type, terms, value_adjustment, condition, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # check whether predictions should be conditioned # on random effects (grouping level) or not. if (type == "fe") ref <- NA else ref <- NULL if (type %in% c("sim", "sim_re")) { # simulate predictions fitfram <- .do_simulate(model, terms, ci, type, ...) } else { fitfram$predicted <- suppressWarnings(stats::predict( model, newdata = fitfram, type = "response", re.form = ref, allow.new.levels = TRUE, ... 
)) if (se) { # get standard errors from variance-covariance matrix se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, terms = terms, type = type, condition = condition ) if (.check_returned_se(se.pred)) { se.fit <- se.pred$se.fit fitfram <- se.pred$prediction_data if (is.null(linv)) { # calculate CI for linear mixed models fitfram$conf.low <- fitfram$predicted - stats::qnorm(ci) * se.fit fitfram$conf.high <- fitfram$predicted + stats::qnorm(ci) * se.fit } else { # get link-function and back-transform fitted values # to original scale, so we compute proper CI lf <- insight::link_function(model) # calculate CI for glmm fitfram$conf.low <- linv(lf(fitfram$predicted) - stats::qnorm(ci) * se.fit) fitfram$conf.high <- linv(lf(fitfram$predicted) + stats::qnorm(ci) * se.fit) } # copy standard errors attr(fitfram, "std.error") <- se.fit attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval") } else { fitfram$conf.low <- NA fitfram$conf.high <- NA } } else { fitfram$conf.low <- NA fitfram$conf.high <- NA } } fitfram } ggeffects/R/fish.R0000644000176200001440000000073514030655632013475 0ustar liggesusers#' @docType data #' @title Sample data set #' @name fish #' @keywords data #' #' @description A sample data set, used in tests and some examples. NULL #' @docType data #' @title Sample data set #' @name lung2 #' @keywords data #' #' @description A sample data set, used in tests and examples for survival models. #' This dataset is originally included in the \pkg{survival} package, but for #' convenience reasons it is also available in this package. 
NULL ggeffects/R/post_processing_predictions.R0000644000176200001440000000616213714713374020376 0ustar liggesusers.post_processing_predictions <- function(model, prediction_data, original_model_frame, cleaned_terms) { # check for correct terms specification if (!all(cleaned_terms %in% colnames(prediction_data))) { stop("At least one term specified in `terms` is no valid model term.", call. = FALSE) } # copy standard errors if (!.obj_has_name(prediction_data, "std.error")) { prediction_data$std.error <- attr(prediction_data, "std.error") } else { attr(prediction_data, "std.error") <- prediction_data$std.error } # now select only relevant variables: the predictors on the x-axis, # the predictions and the original response vector (needed for scatter plot) columns_to_keep <- c(cleaned_terms, "predicted", "std.error", "conf.low", "conf.high", "response.level") result <- prediction_data[, intersect(columns_to_keep, colnames(prediction_data))] # name and sort columns, depending on groups, facet and panel result <- .prepare_columns(result, cleaned_terms) # grouping variable may not be labelled # do this here, so we convert to labelled factor later result <- .add_labels_to_groupvariable(result, original_model_frame, cleaned_terms) # convert grouping variable to factor, for proper legend result <- .groupvariable_to_labelled_factor(result) # check if we have legend labels legend.labels <- sjlabelled::get_labels(result$group) # if we had numeric variable w/o labels, these still might be numeric # make sure we have factors here for our grouping and facet variables if (is.numeric(result$group)) { result$group <- as.factor(result$group) } # remember if x was a factor x.is.factor <- ifelse(is.factor(result$x), "1", "0") # sort values result <- result[order(result$x, result$group), , drop = FALSE] empty_columns <- which(colSums(is.na(result)) == nrow(result)) if (length(empty_columns)) result <- result[, -empty_columns] if (.obj_has_name(result, "facet") && 
is.numeric(result$facet)) {
    result$facet <- as.factor(result$facet)
    attr(result, "numeric.facet") <- TRUE
  }

  attr(result, "legend.labels") <- legend.labels
  attr(result, "x.is.factor") <- x.is.factor
  attr(result, "continuous.group") <- attr(prediction_data, "continuous.group") & is.null(attr(original_model_frame[[cleaned_terms[2]]], "labels"))

  result
}


# name and sort columns, depending on groups, facet and panel
.prepare_columns <- function(result, cleaned_terms) {
  columns <- c("x", "predicted", "std.error", "conf.low", "conf.high", "response.level", "group", "facet", "panel")

  # with or w/o grouping factor?
  if (length(cleaned_terms) == 1) {
    colnames(result)[1] <- "x"
    # convert to factor for proper legend
    result$group <- as.factor(1)
  } else if (length(cleaned_terms) == 2) {
    colnames(result)[1:2] <- c("x", "group")
  } else if (length(cleaned_terms) == 3) {
    colnames(result)[1:3] <- c("x", "group", "facet")
  } else if (length(cleaned_terms) == 4) {
    colnames(result)[1:4] <- c("x", "group", "facet", "panel")
  }

  # sort columns
  result[, columns[columns %in% colnames(result)]]
}
ggeffects/R/ggeffect.R0000644000176200001440000002511114046746430014315 0ustar liggesusers#' @rdname ggpredict
#' @export
ggeffect <- function(model, terms, ci.lvl = .95, ...) {
  if (!requireNamespace("effects", quietly = TRUE)) {
    # note: unlike warning() and stop(), message() has no `call.` argument
    message("Package `effects` is not available, but needed for `ggeffect()`. Either install package `effects`, or use `ggpredict()`. Calling `ggpredict()` now.")
    return(ggpredict(model = model, terms = terms, ci.lvl = ci.lvl))
  }

  model_name <- deparse(substitute(model))

  # check if terms are a formula
  if (!missing(terms) && !is.null(terms) && inherits(terms, "formula")) {
    terms <- all.vars(terms)
  }

  # tidymodels?
if (inherits(model, "model_fit")) { model <- model$fit } if (inherits(model, "list") && !inherits(model, c("bamlss", "maxLik"))) { res <- lapply(model, function(.x) ggeffect_helper(.x, terms, ci.lvl, ...)) } else { if (missing(terms) || is.null(terms)) { predictors <- insight::find_predictors(model, effects = "fixed", component = "conditional", flatten = TRUE) res <- lapply( predictors, function(.x) { tmp <- ggeffect_helper(model, terms = .x, ci.lvl, ...) if (!is.null(tmp)) tmp$group <- .x tmp } ) no_results <- sapply(res, is.null) res <- .compact_list(res) if (!is.null(res) && !.is_empty(res)) { names(res) <- predictors[!no_results] class(res) <- c("ggalleffects", class(res)) } else { res <- NULL } } else { res <- ggeffect_helper(model, terms, ci.lvl, ...) } } if (!is.null(res)) { attr(res, "model.name") <- model_name } res } ggeffect_helper <- function(model, terms, ci.lvl, ...) { # check terms argument original_terms <- terms <- .check_vars(terms, model) cleaned_terms <- .clean_terms(terms) # get model frame original_model_frame <- insight::get_data(model) # get model family model_info <- .get_model_info(model) # check whether we have an argument "transformation" for effects()-function # in this case, we need another default title, since we have # non-transformed effects additional_dot_args <- lapply(match.call(expand.dots = FALSE)$`...`, function(x) x) # check whether we have a "transformation" argument t.add <- which(names(additional_dot_args) == "transformation") # if we have a "transformation" argument, and it's NULL, # no transformation of scale no.transform <- !.is_empty(t.add) && is.null(eval(additional_dot_args[[t.add]])) # check if we have specific levels in square brackets at_values <- .get_representative_values(terms, original_model_frame) # clear argument from brackets terms <- .clean_terms(terms) # check for character vectors, transform to factor is_char <- sapply(terms, function(.i) is.character(original_model_frame[[.i]])) if (any(is_char)) { for 
(.i in terms[is_char]) { original_model_frame[[.i]] <- as.factor(original_model_frame[[.i]]) } } # fix remaining x-levels conditional_terms <- which(!(terms %in% names(at_values))) if (!.is_empty(conditional_terms)) { xl <- .prettify_data(conditional_terms, original_model_frame, terms) names(xl) <- terms[conditional_terms] at_values <- c(at_values, xl) } # restore inital order of focal predictors at_values <- at_values[match(terms, names(at_values))] # compute marginal effects for each model term eff <- tryCatch( { suppressWarnings( effects::Effect( focal.predictors = terms, mod = model, xlevels = at_values, confidence.level = ci.lvl, ... ) ) }, error = function(e) { insight::print_color("Can't compute marginal effects, 'effects::Effect()' returned an error.\n\n", "red") cat(sprintf("Reason: %s\n", e$message)) cat("You may try 'ggpredict()' or 'ggemmeans()'.\n\n") NULL } ) # return NULL on error if (is.null(eff)) return(NULL) # build data frame, with raw values # predicted response and lower/upper ci if (inherits(model, c("polr", "clm", "clm2", "clmm", "clmm2", "multinom"))) { # for categorical outcomes, we need to gather the data # from effects to get a single data frame eff.logits <- as.data.frame(eff$logit, stringsAsFactors = FALSE) tmp <- cbind(eff$x, eff.logits) ft <- (ncol(tmp) - ncol(eff.logits) + 1):ncol(tmp) tmp <- .gather(tmp, names_to = "response.level", values_to = "predicted", colnames(tmp)[ft]) fx.term <- eff$term colnames(tmp)[1] <- "x" if (length(terms) > 1) colnames(tmp)[2] <- "group" if (length(terms) > 2) colnames(tmp)[3] <- "facet" if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- 1 - ((1 - ci.lvl) / 2) else ci <- .975 # same for standard errors. we need to gather all data frames together, # compute CI manually and then also fix column names. 
    eff.se.logits <- as.data.frame(eff$se.logit)
    tmp2 <- .gather(eff.se.logits, names_to = "response.level", values_to = "se", colnames(eff.se.logits))
    tmp2$conf.low <- tmp$predicted - stats::qnorm(ci) * tmp2$se
    tmp2$conf.high <- tmp$predicted + stats::qnorm(ci) * tmp2$se
    tmp2$std.error <- tmp2$se
    tmp <- cbind(tmp, tmp2[, c("std.error", "conf.low", "conf.high")])
    tmp$response.level <- substr(tmp$response.level, 7, max(nchar(tmp$response.level)))
  } else {
    # check for multi response
    .ne <- names(eff)
    .mv <- insight::find_response(model, combine = FALSE)
    # all.equal() returns a character vector when the objects differ, so it
    # must be wrapped in isTRUE() when used as a logical condition
    if (length(.ne) == length(.mv) && isTRUE(all.equal(.ne, .mv))) {
      l <- lapply(names(eff), function(.x) {
        tmpl <- data.frame(
          x = eff[[.x]]$x[[terms[1]]],
          predicted = eff[[.x]]$fit,
          std.error = eff[[.x]]$se,
          conf.low = eff[[.x]]$lower,
          conf.high = eff[[.x]]$upper,
          response.level = .x,
          stringsAsFactors = FALSE
        )
        .create_eff_group(tmpl, terms, eff, sub = .x)
      })
      tmp <- do.call(rbind, l)
      fx.term <- eff[[1]]$term
    } else {
      tmp <- data.frame(
        x = eff$x[[terms[1]]],
        predicted = eff$fit,
        std.error = eff$se,
        conf.low = eff$lower,
        conf.high = eff$upper,
        stringsAsFactors = FALSE
      )
      tmp <- .create_eff_group(tmp, terms, eff, sub = NULL)
      # the effects-package keeps numeric values in the order returned
      # by "unique()", so we sort the data frame by ascending values
      if (is.numeric(eff$data[[terms[1]]])) tmp <- tmp[order(tmp$x), , drop = FALSE]
      fx.term <- eff$term
    }
  }
  if (!no.transform) {
    linv <- insight::link_inverse(model)
    tmp$predicted <- linv(tmp$predicted)
    tmp$conf.low <- linv(tmp$conf.low)
    tmp$conf.high <- linv(tmp$conf.high)
  }
  # init legend labels
  legend.labels <- NULL
  # get axis titles and labels
  all.labels <- .get_axis_titles_and_labels(
    original_model_frame,
    terms,
    .get_model_function(model),
    model_info = model_info,
    no.transform,
    type = NULL
  )
  # slice data, only select observations that have specified
  # levels for the grouping variables
  # for numeric values with many decimal places, we need to round
  if
(.frac_length(tmp$x) > 5) filter.keep <- round(tmp$x, 5) %in% round(at_values[[1]], 5) else filter.keep <- tmp$x %in% at_values[[1]] tmp <- tmp[filter.keep, , drop = FALSE] # slice data, only select observations that have specified # levels for the facet variables if (length(at_values) > 1) { filter.keep <- tmp$group %in% at_values[[2]] tmp <- tmp[filter.keep, , drop = FALSE] } # slice data, only select observations that have specified # levels for the facet variables if (length(at_values) > 2) { filter.keep <- tmp$facet %in% at_values[[3]] tmp <- tmp[filter.keep, , drop = FALSE] } # label grouping variables, for axis and legend labels in plot if (length(terms) > 1) { # grouping variable may not be labelled # do this here, so we convert to labelled factor later tmp <- .add_labels_to_groupvariable(tmp, original_model_frame, terms) # convert to factor for proper legend tmp <- .groupvariable_to_labelled_factor(tmp) # check if we have legend labels legend.labels <- sjlabelled::get_labels(tmp$group, attr.only = FALSE, drop.unused = TRUE) } # convert to data frame result <- as.data.frame(tmp, stringsAsFactors = FALSE) if (is.null(result$group)) { result$group <- as.factor(1) } if (length(terms) > 1) { attr(result, "continuous.group") <- is.numeric(original_model_frame[[terms[2]]]) & is.null(attr(original_model_frame[[terms[2]]], "labels")) } else { attr(result, "continuous.group") <- FALSE } # add raw data as well attr(result, "rawdata") <- .get_raw_data(model, original_model_frame, terms) x_v <- original_model_frame[[fx.term]] if (is.null(x_v)) xif <- ifelse(is.factor(tmp$x), "1", "0") else xif <- ifelse(is.factor(x_v), "1", "0") attr(result, "x.is.factor") <- xif # set attributes with necessary information .set_attributes_and_class( data = result, model = model, t.title = all.labels$t.title, x.title = all.labels$x.title, y.title = all.labels$y.title, l.title = all.labels$l.title, legend.labels = legend.labels, x.axis.labels = all.labels$axis.labels, model_info = 
model_info, terms = cleaned_terms, original_terms = original_terms, ci.lvl = ci.lvl ) } .create_eff_group <- function(tmp, terms, eff, sub) { if (!is.null(sub)) { fx <- eff[[sub]] } else { fx <- eff } if (isTRUE(fx$variables[[1]]$is.factor)) { tmp$x <- factor(tmp$x, levels = fx$variables[[1]]$levels) } # with or w/o grouping factor? if (length(terms) == 1) { # convert to factor for proper legend tmp$group <- as.factor(1) } else if (length(terms) == 2) { tmp$group <- if (isTRUE(fx$variables[[2]]$is.factor)) { factor(fx$x[[terms[2]]], levels = fx$variables[[2]]$levels) } else { as.factor(fx$x[[terms[2]]]) } } else { tmp$group <- if (isTRUE(fx$variables[[2]]$is.factor)) { factor(fx$x[[terms[2]]], levels = fx$variables[[2]]$levels) } else { as.factor(fx$x[[terms[2]]]) } tmp$facet <- if (isTRUE(fx$variables[[3]]$is.factor)) { factor(fx$x[[terms[3]]], levels = fx$variables[[3]]$levels) } else { as.factor(fx$x[[terms[3]]]) } } tmp } ggeffects/R/utils_reshape.R0000644000176200001440000000364514046746430015422 0ustar liggesusers#' @keywords internal .gather <- function(x, names_to = "key", values_to = "value", columns = colnames(x)) { if (is.numeric(columns)) columns <- colnames(x)[columns] dat <- stats::reshape( x, idvar = "id", ids = row.names(x), times = columns, timevar = names_to, v.names = values_to, varying = list(columns), direction = "long" ) if (is.factor(dat[[values_to]])) dat[[values_to]] <- as.character(dat[[values_to]]) dat[, 1:(ncol(dat) - 1), drop = FALSE] } #' @keywords internal .multiple_gather <- function(x, names_to = "key", values_to = "value", columns = colnames(x), numeric_timvar = FALSE, id = "id") { ## TODO make timevar numeric? 
variable_attr <- lapply(x, attributes) if (is.numeric(columns)) columns <- colnames(x)[columns] if (!is.list(columns)) columns <- list(columns) dat <- stats::reshape( x, idvar = id, times = columns[[1]], timevar = names_to, v.names = values_to, varying = columns, direction = "long" ) if (numeric_timvar) { f <- as.factor(dat[[names_to]]) levels(f) <- 1:nlevels(f) dat[[names_to]] <- as.numeric(as.character(f)) } for (i in colnames(dat)) { attributes(dat[[i]]) <- variable_attr[[i]] } dat[[id]] <- NULL rownames(dat) <- NULL dat } .var_rename <- function(x, ...) { .dots <- unlist(match.call(expand.dots = FALSE)$...) old_names <- names(.dots) new_names <- unname(.dots) non.match <- which(!(old_names %in% colnames(x))) if (length(non.match)) { # remove invalid names old_names <- old_names[-non.match] new_names <- new_names[-non.match] } name_pos <- match(old_names, colnames(x)) colnames(x)[name_pos] <- new_names x } .round_numeric <- function(x, digits = 2) { x[] <- lapply(x, function(.i) { if (is.numeric(.i)) round(.i, digits = digits) else .i }) x } ggeffects/R/get_predictions_logistf.R0000644000176200001440000000127314046746430017457 0ustar liggesusersget_predictions_logistf <- function(model, fitfram, terms, ...) { prdat <- data.frame( predictions = model$predict, insight::get_data(model) ) grp_means <- tapply( prdat$predictions, lapply(terms, function(i) prdat[[i]]), function(j) mean(j, na.rm = TRUE), simplify = FALSE ) terms_df <- data.frame(expand.grid(attributes(grp_means)$dimnames), stringsAsFactors = FALSE) colnames(terms_df) <- terms terms_df <- .convert_numeric_factors(terms_df) pv <- cbind(terms_df, predicted = unlist(grp_means)) rownames(pv) <- NULL fitfram <- merge(fitfram, pv) # CI fitfram$conf.low <- NA fitfram$conf.high <- NA fitfram } ggeffects/R/emmeans_prediction_data.R0000644000176200001440000001173414046746430017407 0ustar liggesusers.emmeans_mixed_zi <- function(model, data_grid, cleaned_terms, ...) 
{
  if (inherits(model, "glmmTMB")) {
    .ggemmeans_glmmTMB(model, data_grid, cleaned_terms, ...)
  } else {
    .ggemmeans_MixMod(model, data_grid, cleaned_terms, ...)
  }
}

.emmeans_prediction_data <- function(model, data_grid, cleaned_terms, ci.lvl, pmode, type, model_info, interval = NULL, model_data = NULL, ...) {
  if (inherits(model, "MCMCglmm")) {
    prediction_data <- .ggemmeans_predict_MCMCglmm(model, data_grid, cleaned_terms, ci.lvl, pmode, type, interval = interval, model_data = model_data, ...)
  } else if (model_info$is_ordinal || model_info$is_multinomial || model_info$is_categorical) {
    prediction_data <- .ggemmeans_predict_ordinal(model, data_grid, cleaned_terms, ci.lvl, type, interval = interval, model_data = model_data, ...)
  } else if (inherits(model, c("gls", "lme"))) {
    prediction_data <- .ggemmeans_predict_nlme(model, data_grid, cleaned_terms, ci.lvl, type, interval = interval, model_data = model_data, ...)
  } else {
    prediction_data <- .ggemmeans_predict_generic(model, data_grid, cleaned_terms, ci.lvl, pmode, type, interval = interval, model_data = model_data, ...)
  }
  prediction_data
}

.ggemmeans_MixMod <- function(model, data_grid, cleaned_terms, ...) {
  if (!requireNamespace("emmeans", quietly = TRUE)) {
    stop("Package `emmeans` is required to compute estimated marginal means for MixMod-models.", call. = FALSE)
  }
  x1 <- as.data.frame(suppressWarnings(emmeans::emmeans(
    model,
    specs = cleaned_terms,
    at = data_grid,
    ...
  )))
  x2 <- as.data.frame(suppressWarnings(emmeans::emmeans(
    model,
    specs = all.vars(stats::formula(model, type = "zi_fixed")),
    at = data_grid,
    mode = "zero_part",
    ...
  )))
  list(x1 = x1, x2 = x2)
}

.ggemmeans_glmmTMB <- function(model, data_grid, cleaned_terms, ...) {
  if (!requireNamespace("emmeans", quietly = TRUE)) {
    stop("Package `emmeans` is required to compute estimated marginal means for glmmTMB-models.", call. = FALSE)
  }
  x1 <- as.data.frame(suppressWarnings(emmeans::emmeans(
    model,
    specs = cleaned_terms,
    at = data_grid,
    component = "cond",
    ...
))) x2 <- as.data.frame(suppressWarnings(emmeans::emmeans( model, specs = cleaned_terms, at = data_grid, component = "zi", ... ))) list(x1 = x1, x2 = x2) } .ggemmeans_predict_ordinal <- function(model, data_grid, cleaned_terms, ci.lvl, type, interval = NULL, model_data = NULL, ...) { tmp <- emmeans::emmeans( model, specs = c(insight::find_response(model, combine = FALSE), cleaned_terms), at = data_grid, mode = "prob", ... ) .ggemmeans_add_confint(model, tmp, ci.lvl, type, pmode = "prob", interval) } .ggemmeans_predict_MCMCglmm <- function(model, data_grid, cleaned_terms, ci.lvl, pmode, type, interval = NULL, model_data = NULL, ...) { tmp <- emmeans::emmeans( model, specs = cleaned_terms, at = data_grid, pmode = pmode, data = insight::get_data(model), ... ) .ggemmeans_add_confint(model, tmp, ci.lvl, type, pmode, interval) } .ggemmeans_predict_generic <- function(model, data_grid, cleaned_terms, ci.lvl, pmode, type, interval = NULL, model_data = NULL, ...) { tmp <- tryCatch( { suppressWarnings( emmeans::emmeans( model, specs = cleaned_terms, at = data_grid, mode = pmode, ... ) ) }, error = function(e) { NULL } ) if (is.null(tmp)) { tmp <- tryCatch( { suppressWarnings( emmeans::emmeans( model, specs = cleaned_terms, at = data_grid, mode = pmode, data = insight::get_data(model), ... ) ) }, error = function(e) { insight::print_color("Can't compute estimated marginal means, 'emmeans::emmeans()' returned an error.\n\n", "red") cat(sprintf("Reason: %s\n", e$message)) cat("You may try 'ggpredict()' or 'ggeffect()'.\n\n") NULL } ) } if (!is.null(tmp)) .ggemmeans_add_confint(model, tmp, ci.lvl, type, pmode, interval) else NULL } .ggemmeans_predict_nlme <- function(model, data_grid, cleaned_terms, ci.lvl, type, interval = NULL, model_data = NULL, ...) { tmp <- tryCatch( { suppressWarnings( emmeans::emmeans( model, specs = cleaned_terms, at = data_grid, ... 
) ) }, error = function(e) { insight::print_color("Can't compute estimated marginal means, 'emmeans::emmeans()' returned an error.\n\n", "red") cat(sprintf("Reason: %s\n", e$message)) cat("You may try 'ggpredict()' or 'ggeffect()'.\n\n") NULL } ) if (!is.null(tmp)) .ggemmeans_add_confint(model, tmp, ci.lvl, type) else NULL } ggeffects/R/utils_select.R0000644000176200001440000000156613567446145015261 0ustar liggesusers#' @keywords internal string_contains <- function(pattern, x) { pattern <- paste0("\\Q", pattern, "\\E") grep(pattern, x, perl = TRUE) } #' @keywords internal string_ends_with <- function(pattern, x) { pattern <- paste0("\\Q", pattern, "\\E$") grep(pattern, x, perl = TRUE) } #' @keywords internal string_one_of <- function(pattern, x) { m <- unlist(lapply(pattern, function(.x) grep(.x, x, fixed = TRUE, useBytes = TRUE))) x[m] } #' @keywords internal .rownames_as_column <- function(x, var = "rowname") { rn <- data.frame(rn = rownames(x), stringsAsFactors = FALSE) x <- cbind(rn, x) colnames(x)[1] <- var rownames(x) <- NULL x } #' @keywords internal .obj_has_name <- function(x, name) { name %in% names(x) } #' @keywords internal obj_has_rownames <- function(x) { !identical(as.character(1:nrow(x)), rownames(x)) } ggeffects/R/get_predictions_MCMCglmm.R0000644000176200001440000000166014046746430017404 0ustar liggesusersget_predictions_MCMCglmm <- function(model, fitfram, ci.lvl, interval, terms, value_adjustment, condition, ...) { if (!(interval %in% c("confidence", "prediction"))) { interval <- "confidence" } new_grid <- .data_grid( model, model_frame = insight::get_data(model), terms = terms, value_adjustment = value_adjustment, factor_adjustment = FALSE, show_pretty_message = FALSE, condition = condition, emmeans.only = FALSE ) prdat <- stats::predict( model, newdata = new_grid, type = "response", interval = interval, level = ci.lvl, ... 
) new_grid$predicted <- prdat[, 1] new_grid$conf.low <- prdat[, 2] new_grid$conf.high <- prdat[, 3] fitfram <- merge(new_grid, fitfram, sort = FALSE) # copy standard errors attr(fitfram, "std.error") <- NULL attr(fitfram, "prediction.interval") <- interval == "prediction" fitfram } ggeffects/R/simulate_predictions.R0000644000176200001440000000567514046746430017006 0ustar liggesuserssimulate_predictions <- function(model, nsim, clean_terms, ci, type) { fitfram <- insight::get_data(model) fam <- insight::model_info(model) if (fam$is_binomial || fam$is_multinomial || fam$is_ordinal || fam$is_categorical) stop("Can't simulate predictions from models with binary, categorical or ordinal outcome. Please use another option for argument `type`.", call. = FALSE) if (type == "sim") { ref <- NULL } else { ref <- NA } sims <- stats::simulate(model, nsim = nsim, re.form = ref) fitfram$predicted <- apply(sims, 1, mean) fitfram$conf.low <- apply(sims, 1, stats::quantile, probs = 1 - ci) fitfram$conf.high <- apply(sims, 1, stats::quantile, probs = ci) fitfram$std.error <- apply(sims, 1, stats::sd) means_predicted <- tapply( fitfram$predicted, lapply(clean_terms, function(i) fitfram[[i]]), function(j) mean(j, na.rm = TRUE), simplify = FALSE ) means_conf_low <- tapply( fitfram$conf.low, lapply(clean_terms, function(i) fitfram[[i]]), function(j) mean(j, na.rm = TRUE), simplify = FALSE ) means_conf_high <- tapply( fitfram$conf.high, lapply(clean_terms, function(i) fitfram[[i]]), function(j) mean(j, na.rm = TRUE), simplify = FALSE ) means_se <- tapply( fitfram$std.error, lapply(clean_terms, function(i) fitfram[[i]]), function(j) mean(j, na.rm = TRUE), simplify = FALSE ) terms_df <- data.frame(expand.grid(attributes(means_predicted)$dimnames), stringsAsFactors = FALSE) colnames(terms_df) <- clean_terms terms_df <- .convert_numeric_factors(terms_df) fitfram <- cbind( terms_df, predicted = unlist(lapply(means_predicted, function(i) if (is.null(i)) NA else i)), conf.low = 
unlist(lapply(means_conf_low, function(i) if (is.null(i)) NA else i)), conf.high = unlist(lapply(means_conf_high, function(i) if (is.null(i)) NA else i)), std.error = unlist(lapply(means_se, function(i) if (is.null(i)) NA else i)) ) rownames(fitfram) <- NULL fitfram <- fitfram[stats::complete.cases(fitfram), , drop = FALSE] if (length(clean_terms) == 1) { fitfram <- fitfram[order(fitfram[[1]]), , drop = FALSE] } else if (length(clean_terms) == 2) { fitfram <- fitfram[order(fitfram[[1]], fitfram[[2]]), , drop = FALSE] } else if (length(clean_terms) == 3) { fitfram <- fitfram[order(fitfram[[1]], fitfram[[2]], fitfram[[3]]), , drop = FALSE] } else if (length(clean_terms) == 4) { fitfram <- fitfram[order(fitfram[[1]], fitfram[[2]], fitfram[[3]], fitfram[[4]]), , drop = FALSE] } fitfram } .do_simulate <- function(model, terms, ci, type = "sim", ...) { clean_terms <- .clean_terms(terms) add.args <- lapply(match.call(expand.dots = FALSE)$`...`, function(x) x) if ("nsim" %in% names(add.args)) nsim <- eval(add.args[["nsim"]]) else nsim <- 1000 simulate_predictions(model, nsim, clean_terms, ci, type) } ggeffects/R/utils_handle_labels.R0000644000176200001440000001361314046746430016544 0ustar liggesusers# add labels to grouping and facet variables, if these # variables come from labelled data .add_labels_to_groupvariable <- function(mydf, original_model_frame, terms) { grp.lbl <- sjlabelled::get_labels( original_model_frame[[terms[2]]], non.labelled = TRUE, values = "n", drop.unused = TRUE ) # no new labels for labelled factors if (is.factor(mydf$group) && !.is_numeric_factor(mydf$group)) grp.lbl <- NULL # drop levels, if necessary if (is.factor(mydf$group) && .n_distinct(mydf$group) < nlevels(mydf$group)) mydf$group <- droplevels(mydf$group) # check if vector has any labels if (!is.null(grp.lbl) && !is.null(names(grp.lbl))) { # get unique levels, and match levels with group labels # might be necessary, if user only wants to calculate effects # for specific factor levels - 
      # unused labels must be removed then
      values <- as.numeric(as.vector(unique(stats::na.omit(mydf$group))))
      if (min(values) < 1) values <- round(.recode_to(values, lowest = 1))
      grp.lbl <- grp.lbl[values]
      mydf$group <- sjlabelled::set_labels(mydf$group, labels = grp.lbl)
      # make sure values of labels match actual values in vector
      if (!all(mydf$group %in% sjlabelled::get_values(mydf$group)))
        attr(mydf$group, "labels") <- NULL
  }

  if (.obj_has_name(mydf, "facet")) {
    facet.lbl <- sjlabelled::get_labels(
      original_model_frame[[terms[3]]],
      non.labelled = TRUE,
      values = "n",
      drop.unused = TRUE
    )
    # no new labels for labelled factors
    if (is.factor(mydf$facet) && !.is_numeric_factor(mydf$facet))
      facet.lbl <- NULL
    # drop levels, if necessary
    if (is.factor(mydf$facet) && .n_distinct(mydf$facet) < nlevels(mydf$facet))
      mydf$facet <- droplevels(mydf$facet)
    # check if vector has any labels
    if (!is.null(facet.lbl) && !is.null(names(facet.lbl))) {
      # get unique levels, and match levels with group labels
      # might be necessary, if the user only wants to calculate effects
      # for specific factor levels - unused labels must be removed then
      values <- as.numeric(as.vector(unique(stats::na.omit(mydf$facet))))
      if (min(values) < 1) values <- .recode_to(values, lowest = 1)
      facet.lbl <- facet.lbl[values]
      mydf$facet <- sjlabelled::set_labels(mydf$facet, labels = facet.lbl)
      # make sure values of labels match actual values in vector
      if (!all(mydf$facet %in% sjlabelled::get_values(mydf$facet)))
        attr(mydf$facet, "labels") <- NULL
    }
  }
  mydf
}

# this method converts labelled group variables
# into factors with labelled levels
.groupvariable_to_labelled_factor <- function(mydf) {
  mydf$group <- sjlabelled::as_label(
    mydf$group,
    prefix = FALSE,
    drop.na = TRUE,
    drop.levels = !is.numeric(mydf$group)
  )
  # only convert the facet column if it exists at all
  if (.obj_has_name(mydf, "facet")) {
    # convert to factor
    mydf$facet <- sjlabelled::as_label(
      mydf$facet,
      prefix = TRUE,
      drop.na = TRUE,
      drop.levels = !is.numeric(mydf$facet)
    )
  }
  mydf
}
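The helpers above carry value labels from sjlabelled-style labelled data through to the factor levels (and hence the plot legend) of the ggeffects output. A minimal sketch of this behaviour from the user's perspective — illustrative only, not part of the package source; it assumes the `ggeffects` and `sjlabelled` packages are installed, and the variable names are made up:

```r
# Illustrative sketch: value labels attached with sjlabelled should show up
# as readable group levels in ggpredict() output instead of raw values.
library(ggeffects)
library(sjlabelled)

set.seed(1)
d <- data.frame(
  outcome = rnorm(100),
  x = rnorm(100),
  grp = sample(0:1, 100, replace = TRUE)
)
# attach human-readable labels to the numeric grouping variable
d$grp <- set_labels(d$grp, labels = c("control", "treatment"))

m <- lm(outcome ~ x * grp, data = d)
pr <- ggpredict(m, terms = c("x", "grp"))
# the "group" column is a factor; with labelled data, its levels
# carry the value labels rather than the bare values 0/1
levels(pr$group)
```

Without labels, the raw values of the grouping variable are used as factor levels instead.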
# get labels from labelled data for axis titles and labels .get_axis_titles_and_labels <- function(original_model_frame, terms, fun, model_info, no.transform, type) { # Retrieve response for automatic title resp.col <- colnames(original_model_frame)[1] # check for family, and set appropriate scale-title # if we have transformation through effects-package, # check if data is on original or transformed scale ysc <- .get_title_labels(fun, model_info, no.transform, type) # set plot-title t.title <- paste(sprintf("Predicted %s of", ysc), sjlabelled::get_label(original_model_frame[[1]], def.value = resp.col)) # axis titles x.title <- sjlabelled::get_label(original_model_frame[[terms[1]]], def.value = terms[1]) y.title <- sjlabelled::get_label(original_model_frame[[1]], def.value = resp.col) if (fun == "coxph") { if (!is.null(type) && type == "surv") { t.title <- y.title <- "Probability of Survival" } else if (!is.null(type) && type == "cumhaz") { t.title <- y.title <- "Cumulative Hazard" } else { t.title <- "Predicted risk scores" y.title <- "Risk Score" } } # legend title l.title <- sjlabelled::get_label(original_model_frame[[terms[2]]], def.value = terms[2]) # check if we have a categorical variable with value # labels at the x-axis. 
axis.labels <- sjlabelled::get_labels( original_model_frame[[terms[1]]], non.labelled = TRUE, drop.unused = TRUE ) list( t.title = t.title, x.title = x.title, y.title = y.title, l.title = l.title, axis.labels = axis.labels ) } .get_title_labels <- function(fun, model_info, no.transform, type) { ysc <- "values" if (!is.null(type) && type == "zi.prob") { ysc <- "zero-inflation probabilities" } else if (fun == "glm") { if (model_info$is_brms_trial) ysc <- "successes" else if (model_info$is_binomial || model_info$is_ordinal || model_info$is_multinomial) ysc <- ifelse(isTRUE(no.transform), "log-odds", "probabilities") else if (model_info$is_count) ysc <- ifelse(isTRUE(no.transform), "log-mean", "counts") } else if (model_info$is_beta) { ysc <- "proportion" } else if (fun == "coxph") { if (!is.null(type) && type == "surv") ysc <- "survival probabilities" else if (!is.null(type) && type == "cumhaz") ysc <- "cumulative hazard" else ysc <- "risk scores" } ysc } .recode_to <- function(x, lowest, highest = -1) { if (is.factor(x)) { x <- as.numeric(as.character(x)) } minval <- min(x, na.rm = TRUE) downsize <- minval - lowest x <- sapply(x, function(y) y - downsize) if (highest > lowest) x[x > highest] <- NA x } ggeffects/R/get_predictions_vglm.R0000644000176200001440000000627014046746430016757 0ustar liggesusersget_predictions_vglm <- function(model, fitfram, ci.lvl, linv, ...) { if (!requireNamespace("VGAM", quietly = TRUE)) { stop("Package `VGAM` needed to calculate adjusted predictions for a vector generalized linear model.", call. = FALSE) } se <- !is.null(ci.lvl) && !is.na(ci.lvl) mi <- insight::model_info(model) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 if ((mi$is_ordinal || mi$is_multinomial) && !isTRUE(se)) { type <- "response" } else { type <- "link" } prdat <- VGAM::predictvglm( model, newdata = fitfram, type = type, se.fit = se, ... 
) if (mi$is_ordinal || mi$is_multinomial) { # start here with cumulative link models resp <- insight::get_response(model) if (is.data.frame(resp)) resp.names <- colnames(resp) else resp.names <- levels(resp) if (se) { dat <- data.frame(predicted = prdat$fitted.values) resp.names <- resp.names[-1] } else { dat <- data.frame(predicted = prdat) linv <- function(mu) mu } colnames(dat) <- resp.names fitfram <- cbind(dat, fitfram) # for cumulative link models, we have predicted values for each response # category. Hence, gather columns fitfram <- .gather(fitfram, names_to = "response.level", values_to = "predicted", resp.names) fitfram$predicted <- linv(fitfram$predicted) if (is.matrix(fitfram$predicted)) fitfram$predicted <- as.vector(fitfram$predicted[, 2]) if (se) { d1 <- data.frame(ci.low = prdat$fitted.values - stats::qnorm(ci) * prdat$se.fit) d2 <- data.frame(ci.high = prdat$fitted.values + stats::qnorm(ci) * prdat$se.fit) d3 <- data.frame(se = prdat$se.fit) colnames(d1) <- sprintf("ci_low_%s", resp.names) colnames(d2) <- sprintf("ci_high_%s", resp.names) colnames(d3) <- sprintf("se_%s", resp.names) dat1 <- .gather(d1, names_to = "response.level", values_to = "conf.low") dat2 <- .gather(d2, names_to = "response.level", values_to = "conf.high") dat3 <- .gather(d3, names_to = "response.level", values_to = "se") fitfram$conf.low <- linv(dat1$conf.low) fitfram$conf.high <- linv(dat2$conf.high) if (is.matrix(fitfram$conf.low)) fitfram$conf.low <- as.vector(fitfram$conf.low[, 2]) if (is.matrix(fitfram$conf.high)) fitfram$conf.high <- as.vector(fitfram$conf.high[, 2]) attr(fitfram, "std.error") <- dat3$se fitfram$response.level <- sprintf("P[Y >= %s]", fitfram$response.level) } } else { # start here for other models prdat$fitted.values <- as.vector(prdat$fitted.values) fitfram$predicted <- suppressWarnings(linv(prdat$fitted.values)) # did user request standard errors? 
if yes, compute CI if (se) { # calculate CI fitfram$conf.low <- suppressWarnings(linv(prdat$fitted.values - stats::qnorm(ci) * prdat$se.fit)) fitfram$conf.high <- suppressWarnings(linv(prdat$fitted.values + stats::qnorm(ci) * prdat$se.fit)) } else { # no CI fitfram$conf.low <- NA fitfram$conf.high <- NA } } fitfram } ggeffects/R/get_predictions_svyglmnb.R0000644000176200001440000000107113565167350017650 0ustar liggesusersget_predictions_svyglmnb <- function(model, fitfram, ci.lvl, linv, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) prdat <- stats::predict( model, newdata = fitfram, type = "link", se.fit = se, ... ) # copy predictions .generic_prediction_data(model, fitfram, linv, prdat, se, ci.lvl, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval) } ggeffects/R/plot.R0000644000176200001440000011443314062321037013515 0ustar liggesusers#' @title Plot ggeffects-objects #' @name plot #' #' @description A generic plot-method for \code{ggeffects}-objects. #' #' @param x An object of class \code{ggeffects}, as returned by the functions #' from this package. #' @param ci Logical, if \code{TRUE}, confidence bands (for continuous variables #' at x-axis) resp. error bars (for factors at x-axis) are plotted. #' @param ci.style Character vector, indicating the style of the confidence #' bands. May be either \code{"ribbon"}, \code{"errorbar"}, \code{"dash"} or #' \code{"dot"}, to plot a ribbon, error bars, or dashed or dotted lines as #' confidence bands. #' @param facets,grid Logical, defaults to \code{TRUE}, if \code{x} has a column named #' \code{facet}, and defaults to \code{FALSE}, if \code{x} has no such #' column. Set \code{facets = TRUE} to wrap the plot into facets even #' for grouping variables (see 'Examples'). \code{grid} is an alias for #' \code{facets}. 
#' @param add.data,rawdata Logical, if \code{TRUE}, a layer with raw data from response by #' predictor on the x-axis, plotted as point-geoms, is added to the plot. #' @param limit.range Logical, if \code{TRUE}, limits the range of the prediction #' bands to the range of the data. #' @param residuals Logical, if \code{TRUE}, a layer with partial residuals is #' added to the plot. See vignette \href{https://cran.r-project.org/package=effects}{"Effect Displays with Partial Residuals"} #' from \pkg{effects} for more details on partial residual plots. #' @param residuals.line Logical, if \code{TRUE}, a loess-fit line is added to the #' partial residuals plot. Only applies if \code{residuals} is \code{TRUE}. #' @param collapse.group For mixed effects models, name of the grouping variable #' of random effects. If \code{collapse.group = TRUE}, data points "collapsed" #' by the first random effect groups are added to the plot. Else, if #' \code{collapse.group} is a name of a group factor, data is collapsed by #' that specific random effect. See \code{\link{collapse_by_group}} for #' further details. #' @param colors Character vector with color values in hex-format, valid #' color value names (see \code{demo("colors")}) or a name of a #' ggeffects-color-palette. #' Following options are valid for \code{colors}: #' \itemize{ #' \item If not specified, the color brewer palette "Set1" will be used. #' \item If \code{"gs"}, a greyscale will be used. #' \item If \code{"bw"}, the plot is black/white and uses different line types to distinguish groups. #' \item There are some pre-defined color-palettes in this package that can be used, e.g. \code{colors = "metro"}. See \code{\link[=show_pals]{show_pals()}} to show all available palettes. #' \item Else specify own color values or names as vector (e.g. \code{colors = c("#f00000", "#00ff00")}). #' } #' @param alpha Alpha value for the confidence bands. #' @param line.size Numeric, size of the line geoms. 
#' @param dot.size Numeric, size of the point geoms.
#' @param dodge Value for offsetting or shifting error bars, to avoid overlapping.
#'   Only applies if a factor is plotted at the x-axis (in such cases, the
#'   confidence bands are replaced by error bars automatically), or if
#'   \code{ci.style = "errorbar"}.
#' @param use.theme Logical, if \code{TRUE}, a slightly tweaked version of ggplot's
#'   minimal-theme, \code{theme_ggeffects()}, is applied to the plot. If
#'   \code{FALSE}, no theme-modifications are applied.
#' @param dot.alpha Alpha value for data points, when \code{add.data = TRUE}.
#' @param jitter Numeric, between 0 and 1. If not \code{NULL} and
#'   \code{add.data = TRUE}, adds a small amount of random variation to
#'   the location of data points, to avoid overplotting. Hence the
#'   points don't reflect exact values in the data. May also be a numeric
#'   vector of length two, to add different horizontal and vertical jittering.
#'   For binary outcomes, raw data is not jittered by default, to prevent data
#'   points from exceeding the axis limits.
#' @param log.y Logical, if \code{TRUE}, the y-axis scale is log-transformed.
#'   This might be useful for binomial models with predicted probabilities on
#'   the y-axis.
#' @param show.legend Logical, shows or hides the plot legend.
#' @param show.title Logical, shows or hides the plot title.
#' @param show.x.title Logical, shows or hides the plot title for the x-axis.
#' @param show.y.title Logical, shows or hides the plot title for the y-axis.
#' @param connect.lines Logical, if \code{TRUE} and plot has point-geoms with
#'   error bars (this is usually the case when the x-axis is discrete), points
#'   of the same group will be connected with a line.
#' @param one.plot Logical, if \code{TRUE} and \code{x} has a \code{panel} column
#'   (i.e. when four \code{terms} were used), a single, integrated plot is produced.
#' @param base_size Base font size.
#' @param base_family Base font family.
#' @param ...
Further arguments passed down to \code{ggplot2::scale_y*()}, to
#'   control the appearance of the y-axis.
#' @param residuals.type Deprecated. Formerly specified the residual type; now always \code{"working"}.
#'
#' @inheritParams get_title
#'
#' @inheritSection residualize_over_grid Partial Residuals
#'
#' @return A ggplot2-object.
#'
#' @note Load \code{library(ggplot2)} and use \code{theme_set(theme_ggeffects())}
#'   to set the \pkg{ggeffects}-theme as default plotting theme. You can then use
#'   further plot-modifiers from \pkg{sjPlot}, like \code{legend_style()} or
#'   \code{font_size()}, without losing the theme-modifications.
#'   \cr \cr
#'   There are pre-defined colour palettes in this package. Use
#'   \code{show_pals()} to show all available colour palettes.
#'
#' @details For proportional odds logistic regression (see \code{?MASS::polr})
#'   or cumulative link models in general, plots are automatically facetted
#'   by \code{response.level}, which indicates the grouping of predictions
#'   based on the level of the model's response.
#' #' @examples #' library(sjlabelled) #' data(efc) #' efc$c172code <- as_label(efc$c172code) #' fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc) #' #' dat <- ggpredict(fit, terms = "c12hour") #' plot(dat) #' #' \donttest{ #' # facet by group, use pre-defined color palette #' dat <- ggpredict(fit, terms = c("c12hour", "c172code")) #' plot(dat, facet = TRUE, colors = "hero") #' #' # don't use facets, b/w figure, w/o confidence bands #' dat <- ggpredict(fit, terms = c("c12hour", "c172code")) #' plot(dat, colors = "bw", ci = FALSE) #' #' # factor at x axis, plot exact data points and error bars #' dat <- ggpredict(fit, terms = c("c172code", "c161sex")) #' plot(dat) #' #' # for three variables, automatic facetting #' dat <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex")) #' plot(dat)} #' #' # show all color palettes #' show_pals() #' @export plot.ggeffects <- function(x, ci = TRUE, ci.style = c("ribbon", "errorbar", "dash", "dot"), facets, add.data = FALSE, limit.range = FALSE, residuals = FALSE, residuals.line = FALSE, collapse.group = FALSE, colors = "Set1", alpha = .15, dodge = .25, use.theme = TRUE, dot.alpha = .35, jitter = .2, log.y = FALSE, case = NULL, show.legend = TRUE, show.title = TRUE, show.x.title = TRUE, show.y.title = TRUE, dot.size = NULL, line.size = NULL, connect.lines = FALSE, grid, one.plot = TRUE, rawdata, residuals.type, ...) { if (!requireNamespace("ggplot2", quietly = FALSE)) { stop("Package `ggplot2` needed to produce marginal effects plots. Please install it by typing `install.packages(\"ggplot2\", dependencies = TRUE)` into the console.", call. = FALSE) } if (!missing(residuals.type)) warning("'residuals.type' is deprecated. 
Using 'working' residuals.") # check alias if (missing(rawdata)) rawdata <- add.data # set some defaults jitter.miss <- missing(jitter) if (isTRUE(jitter)) jitter <- .2 else if (is.logical(jitter) && length(jitter) == 1L && !is.na(jitter) && !jitter) jitter <- NULL # make sure we have two values, one for horizontal and one for vertical jittering if (!is.null(jitter) && length(jitter) == 1 && is.numeric(jitter)) { jitter <- c(jitter, jitter) } y.breaks <- NULL y.limits <- NULL # is x a factor? xif <- attr(x, "x.is.factor", exact = TRUE) x_is_factor <- !is.null(xif) && xif == "1" if (is.null(dot.size)) dot.size <- 2 if (is.null(line.size)) line.size <- .7 if (!missing(grid)) facets <- grid if (missing(ci.style) && x_is_factor) ci.style <- "errorbar" ci.style <- match.arg(ci.style) add.args <- lapply(match.call(expand.dots = FALSE)$`...`, function(x) x) if (!("breaks" %in% names(add.args)) && isTRUE(log.y)) { y.breaks <- unique(round(log2(pretty(c(min(x$conf.low), max(x$conf.high)))))) y.breaks[is.nan(y.breaks)] <- NA y.breaks[is.infinite(y.breaks)] <- NA y.breaks <- 2^y.breaks[!is.na(y.breaks)] y.limits <- c(min(y.breaks), max(y.breaks)) # this is a REALLY sloppy hack to avoid that axis limits are not 0 for # log-scale, and that axis limits cover the range of the plotted geoms # I think there's a more elegant solution, so please let me know... if (y.limits[1] > min(x$conf.low)) y.limits[1] <- y.limits[1] / 2 if (y.limits[2] < max(x$conf.high)) y.limits[2] <- y.limits[2] * 2 if (y.limits[1] > min(x$conf.low)) y.limits[1] <- y.limits[1] / 2 if (y.limits[2] < max(x$conf.high)) y.limits[2] <- y.limits[2] * 2 } # do we have groups and facets? 
has_groups <- .obj_has_name(x, "group") && length(unique(x$group)) > 1
  has_facets <- .obj_has_name(x, "facet") && length(unique(x$facet)) > 1
  has_panel <- .obj_has_name(x, "panel") && length(unique(x$panel)) > 1

  # if we add data points, limit to range
  if (isTRUE(limit.range)) {
    raw_data <- attr(x, "rawdata", exact = TRUE)
    if (!is.null(raw_data)) {
      if (has_groups && has_facets) {
        ranges <- lapply(split(raw_data, list(raw_data$group, raw_data$facet)), function(i) range(i$x, na.rm = TRUE))
        for (i in unique(raw_data$group)) {
          for (j in unique(raw_data$facet)) {
            if (any(is.infinite(ranges[[paste0(i, ".", j)]]))) {
              remove <- x$group == i & x$facet == j
              x$x[remove] <- NA
            } else {
              remove <- x$group == i & x$facet == j & x$x < ranges[[paste0(i, ".", j)]][1]
              x$x[remove] <- NA
              remove <- x$group == i & x$facet == j & x$x > ranges[[paste0(i, ".", j)]][2]
              x$x[remove] <- NA
            }
          }
        }
      } else if (has_groups) {
        ranges <- lapply(split(raw_data, raw_data$group), function(i) range(i$x, na.rm = TRUE))
        for (i in names(ranges)) {
          remove <- x$group == i & x$x < ranges[[i]][1]
          x$x[remove] <- NA
          remove <- x$group == i & x$x > ranges[[i]][2]
          x$x[remove] <- NA
        }
      } else {
        remove <- x$x < min(raw_data$x, na.rm = TRUE) | x$x > max(raw_data$x, na.rm = TRUE)
        x$x[remove] <- NA
      }
    }
  }

  # partial residuals?
  if (residuals) {
    model <- .get_model_object(x)
    if (!is.null(model)) {
      residual_data <- residualize_over_grid(grid = x, model = model)
      attr(x, "residual_data") <- residual_data
      ## TODO for now, we allow no continuous grouping variables for partial residuals
      # it is difficult to match "raw data" values with the specific at-values
      # for continuous variables
      attr(x, "continuous.group") <- FALSE
    } else {
      warning("Could not find model object to extract residuals.", call. = FALSE)
      residuals <- FALSE
    }
  }

  # collapse data by random effects?
if (isTRUE(collapse.group) || (!is.null(collapse.group) && !isFALSE(collapse.group))) { if (isTRUE(collapse.group)) { # use first random effect collapse.group <- NULL } re_data <- collapse_by_group(x, model = .get_model_object(x), collapse.by = collapse.group, residuals = residuals) attr(x, "random_effects_data") <- re_data attr(x, "continuous.group") <- FALSE # no additional residuals or raw data rawdata <- add.data <- FALSE residuals <- FALSE attr(x, "residual_data") <- NULL } # convert x back to numeric if (!is.numeric(x$x)) { if (x_is_factor && !.is_numeric_factor(x$x)) { levels(x$x) <- seq_len(nlevels(x$x)) } x$x <- .factor_to_numeric(x$x) } # special solution for polr facet_polr <- FALSE if (.obj_has_name(x, "response.level") && length(unique(x$response.level)) > 1) { has_facets <- TRUE facet_polr <- TRUE } # remember if we have a b/w plot is_black_white <- colors[1] == "bw" # set default, if argument not specified if (has_facets) facets <- TRUE else if (missing(facets) || is.null(facets)) facets <- has_facets # facets, but only groups? here the user wants to # plot facets for the grouping variable facets_grp <- facets && !has_facets # set CI to false if we don't have SE and CI if ("conf.low" %in% names(which(colSums(is.na(x)) == nrow(x))) || !.obj_has_name(x, "conf.low")) ci <- FALSE # if we have a numeric variable as facet, also add variable name for more # intuitive labelling if (facets) { if (is.numeric(x$facet) || isTRUE(attr(x, "numeric.facet", exact = TRUE))) { x$facet <- sprintf( "%s = %g", attr(x, "terms", exact = TRUE)[3], .factor_to_numeric(x$facet) ) } } if (!has_panel) one.plot <- FALSE if (one.plot && !requireNamespace("see", quietly = TRUE)) { warning("Package `see` needed to plot multiple panels in one integrated figure. Please install it by typing `install.packages(\"see\", dependencies = TRUE)` into the console.", call. 
= FALSE) one.plot <- FALSE } if (has_panel) { panels <- unique(x$panel) p <- lapply(1:length(panels), function(.i) { .p <- panels[.i] attr(x, "panel.title") <- sprintf( "%s = %s", attr(x, "terms", exact = TRUE)[4], as.character(.p) ) if (one.plot && .i < length(panels)) { show_l <- FALSE } else { show_l <- show.legend } pl <- plot_panel( x = x[x$panel == .p, , drop = FALSE], colors = colors, has_groups = has_groups, facets_grp = facets_grp, facets = facets, facet_polr = facet_polr, is_black_white = is_black_white, x_is_factor = x_is_factor, alpha = alpha, dot.alpha = dot.alpha, dodge = dodge, ci = ci, ci.style = ci.style, dot.size = dot.size, line.size = line.size, connect.lines = connect.lines, case = case, jitter = jitter, jitter.miss = jitter.miss, rawdata = rawdata, residuals = residuals, residuals.line = residuals.line, show.title = show.title, show.x.title = show.x.title, show.y.title = show.y.title, show.legend = show_l, log.y = log.y, y.breaks = y.breaks, y.limits = y.limits, use.theme = use.theme, ... ) if (one.plot) { if (.i < length(panels)) { pl <- pl + ggplot2::labs(x = NULL) } if (.i > 1) { pl <- pl + ggplot2::labs(title = NULL) } } pl }) } else { p <- plot_panel( x = x, colors = colors, has_groups = has_groups, facets_grp = facets_grp, facets = facets, facet_polr = facet_polr, is_black_white = is_black_white, x_is_factor = x_is_factor, alpha = alpha, dot.alpha = dot.alpha, dodge = dodge, ci = ci, ci.style = ci.style, dot.size = dot.size, line.size = line.size, connect.lines = connect.lines, case = case, jitter = jitter, jitter.miss = jitter.miss, rawdata = rawdata, residuals = residuals, residuals.line = residuals.line, show.title = show.title, show.x.title = show.x.title, show.y.title = show.y.title, show.legend = show.legend, log.y = log.y, y.breaks = y.breaks, y.limits = y.limits, use.theme = use.theme, ... 
) } if (has_panel && one.plot && requireNamespace("see", quietly = TRUE)) { do.call(see::plots, p) } else { p } } plot_panel <- function(x, colors, has_groups, facets_grp, facets, facet_polr, is_black_white, x_is_factor, alpha, dot.alpha, dodge, ci, ci.style, dot.size, line.size, connect.lines, case, jitter, jitter.miss, rawdata, residuals, residuals.line, show.title, show.x.title, show.y.title, show.legend, log.y, y.breaks, y.limits, use.theme, ...) { if (.obj_has_name(x, "group") && is.character(x$group)) x$group <- factor(x$group, levels = unique(x$group)) if (.obj_has_name(x, "facet") && is.character(x$facet)) x$facet <- factor(x$facet, levels = unique(x$facet)) if (.obj_has_name(x, "response.level") && is.character(x$response.level)) x$response.level <- ordered(x$response.level, levels = unique(x$response.level)) if (rawdata & isTRUE(attr(x, "continuous.group"))) { x$group_col <- as.numeric(as.character(x$group)) } else { x$group_col <- x$group } # base plot, set mappings ----- plot_data <- x[!is.na(x$x), ] if (has_groups && !facets_grp && is_black_white && x_is_factor) p <- ggplot2::ggplot(plot_data, ggplot2::aes_string(x = "x", y = "predicted", colour = "group_col", fill = "group_col", shape = "group")) else if (has_groups && !facets_grp && is_black_white && !x_is_factor) p <- ggplot2::ggplot(plot_data, ggplot2::aes_string(x = "x", y = "predicted", colour = "group_col", fill = "group_col", linetype = "group")) else if (has_groups && !facets_grp && colors[1] == "gs" && x_is_factor) p <- ggplot2::ggplot(plot_data, ggplot2::aes_string(x = "x", y = "predicted", colour = "group_col", fill = "group_col", shape = "group")) else if (has_groups && colors[1] != "bw") p <- ggplot2::ggplot(plot_data, ggplot2::aes_string(x = "x", y = "predicted", colour = "group_col", fill = "group_col")) else p <- ggplot2::ggplot(plot_data, ggplot2::aes_string(x = "x", y = "predicted")) # get color values ----- colors <- .get_colors(colors, length(unique(stats::na.omit(x$group))), 
isTRUE(attr(x, "continuous.group"))) # plot raw data points ----- # get raw data rawdat <- attr(x, "rawdata", exact = TRUE) if (rawdata) { p <- .add_raw_data_to_plot(p, x, rawdat, ci.style, dot.alpha, dot.size, dodge, jitter, jitter.miss, colors) } # plot partial residuals ----- # get residual data residual_data <- attr(x, "residual_data", exact = TRUE) if (isTRUE(residuals)) { p <- .add_residuals_to_plot(p, x, residual_data, residuals.line, ci.style, line.size, dot.alpha, dot.size, dodge, jitter, colors) } # plot random effects group data ----- # get re-group data random_effects_data <- attr(x, "random_effects_data", exact = TRUE) if (!is.null(random_effects_data)) { p <- .add_re_data_to_plot(p, x, random_effects_data, dot.alpha, dot.size, dodge, jitter) } # now plot the geom. we use a smoother for a continuous x, and # a point-geom, if x was a factor. In this case, the x-value is still # numeric, but we need to plot exact data points between categories # and no smoothing across all x-values if (x_is_factor) { # for x as factor p <- p + ggplot2::geom_point( position = ggplot2::position_dodge(width = dodge), size = dot.size ) } else { # classical line p <- p + ggplot2::geom_line(size = line.size, ggplot2::aes_string(group = "group")) } # connect dots with lines... 
if (x_is_factor && connect.lines) { p <- p + ggplot2::geom_line( size = line.size, position = ggplot2::position_dodge(width = dodge) ) } # CI ---- if (ci) { # for a factor on x-axis, use error bars if (x_is_factor) { if (ci.style == "errorbar") { p <- p + ggplot2::geom_errorbar( ggplot2::aes_string(ymin = "conf.low", ymax = "conf.high"), position = ggplot2::position_dodge(width = dodge), width = .1, size = line.size ) } else { lt <- switch( ci.style, dash = 2, dot = 3, 2 ) p <- p + ggplot2::geom_errorbar( ggplot2::aes_string(ymin = "conf.low", ymax = "conf.high", linetype = NULL), position = ggplot2::position_dodge(width = dodge), width = .1, linetype = lt, size = line.size ) } } else { if (ci.style == "ribbon") { # for continuous x, use ribbons by default p <- p + ggplot2::geom_ribbon( ggplot2::aes_string(ymin = "conf.low", ymax = "conf.high", colour = NULL, linetype = NULL, shape = NULL, group = "group"), alpha = alpha ) } else if (ci.style == "errorbar") { p <- p + ggplot2::geom_point( position = ggplot2::position_dodge(width = dodge), size = dot.size ) + ggplot2::geom_errorbar( ggplot2::aes_string(ymin = "conf.low", ymax = "conf.high", shape = NULL), position = ggplot2::position_dodge(width = dodge), size = line.size, width = 0 ) } else { lt <- switch( ci.style, dash = 2, dot = 3, 2 ) p <- p + ggplot2::geom_line( ggplot2::aes_string(y = "conf.low", linetype = NULL), linetype = lt ) + ggplot2::geom_line( ggplot2::aes_string(y = "conf.high", linetype = NULL), linetype = lt ) } } } # If we have x-axis-labels, use these to label the axis x_lab <- get_x_labels(x, case) if (!is.null(x_lab)) { p <- p + ggplot2::scale_x_continuous(breaks = unique(plot_data$x), labels = x_lab) } # facets ---- if (facets_grp) { # facet groups p <- p + ggplot2::facet_wrap(~group, scales = "free_x") # remove legends p <- p + ggplot2::guides(colour = "none", linetype = "none", shape = "none") } else if (facet_polr) { p <- p + ggplot2::facet_wrap(~response.level, scales = "free_x") } else if 
(facets) { p <- p + ggplot2::facet_wrap(~facet, scales = "free_x") } # set colors ---- if (isTRUE(rawdata) && isTRUE(attr(x, "continuous.group"))) { p <- p + ggplot2::scale_color_gradientn(colors = colors, aesthetics = c("colour", "fill"), guide = "legend", breaks = as.numeric(levels(x$group)), limits = range(c(rawdat$group_col, x$group_col))) } else { p <- p + ggplot2::scale_color_manual(values = colors, aesthetics = c("colour", "fill")) } # show/hide titles ---- if (!show.title) attr(x, "title") <- NULL if (!show.title) attr(x, "n.trials") <- NULL if (!show.x.title) attr(x, "x.title") <- NULL if (!show.y.title) attr(x, "y.title") <- NULL # set axis titles ---- p <- p + ggplot2::labs( title = get_title(x, case), x = get_x_title(x, case), y = get_y_title(x, case), fill = NULL, subtitle = get_sub_title(x) ) if (has_groups && show.legend) p <- p + ggplot2::labs( colour = get_legend_title(x, case), linetype = get_legend_title(x, case), shape = get_legend_title(x, case) ) # no legend for fill-aes ---- p <- p + ggplot2::guides(fill = "none") if (is_black_white) { p <- p + ggplot2::guides(colour = "none") + ggplot2::labs(colour = NULL) } # show or hide legend ----- if (!show.legend) { p <- p + ggplot2::labs( colour = NULL, linetype = NULL, shape = NULL ) + ggplot2::guides(colour = "none", linetype = "none", shape = "none") } # for binomial family, fix coord ---- if (attr(x, "logistic", exact = TRUE) == "1" && attr(x, "is.trial", exact = TRUE) == "0") { if (log.y) { if (is.null(y.breaks)) p <- p + ggplot2::scale_y_log10(labels = .percents, ...) else p <- p + ggplot2::scale_y_log10(labels = .percents, breaks = y.breaks, limits = y.limits, ...) } else { p <- p + ggplot2::scale_y_continuous(labels = .percents, ...) } } else if (log.y) { if (is.null(y.breaks)) p <- p + ggplot2::scale_y_log10(...) else p <- p + ggplot2::scale_y_log10(breaks = y.breaks, limits = y.limits, ...) } else { p <- p + ggplot2::scale_y_continuous(...) 
} # tweak theme if (use.theme) p <- p + theme_ggeffects() p } #' @export plot.ggalleffects <- function(x, ci = TRUE, ci.style = c("ribbon", "errorbar", "dash", "dot"), facets, add.data = FALSE, colors = "Set1", alpha = .15, dodge = .25, use.theme = TRUE, dot.alpha = .5, jitter = .2, log.y = FALSE, case = NULL, show.legend = TRUE, show.title = TRUE, show.x.title = TRUE, show.y.title = TRUE, dot.size = NULL, line.size = NULL, connect.lines = FALSE, grid, one.plot = TRUE, rawdata, ...) { if (!missing(grid)) facets <- grid if (missing(facets)) facets <- NULL # check alias if (missing(rawdata)) rawdata <- add.data if (isTRUE(facets)) { # merge all effect-data frames into one dat <- get_complete_df(x) rawdat <- suppressWarnings( do.call(rbind, lapply(x, function(d) { tmp <- attr(d, "rawdata") tmp$group <- d$group[1] tmp })) ) # copy raw data attr(dat, "rawdata") <- rawdat # set various attributes attr(dat, "x.is.factor") <- attr(x[[1]], "x.is.factor", exact = TRUE) attr(dat, "family") <- attr(x[[1]], "family", exact = TRUE) attr(dat, "link") <- attr(x[[1]], "link", exact = TRUE) attr(dat, "logistic") <- attr(x[[1]], "logistic", exact = TRUE) attr(dat, "fitfun") <- attr(x[[1]], "fitfun", exact = TRUE) graphics::plot( x = dat, ci = ci, ci.style = ci.style, facets = TRUE, add.data = rawdata, colors = colors, alpha = alpha, dodge = dodge, use.theme = use.theme, dot.alpha = dot.alpha, jitter = jitter, log.y = log.y, case = case, show.legend = show.legend, show.title = FALSE, show.x.title = show.x.title, show.y.title = FALSE, dot.size = dot.size, line.size = line.size, connect.lines = connect.lines, ... 
) } else { lapply(x, function(.x) { graphics::plot( x = .x, ci = ci, facets = facets, rawdata = rawdata, colors = colors, alpha = alpha, dodge = dodge, use.theme = use.theme, dot.alpha = dot.alpha, jitter = jitter, log.y = log.y, case = case, show.legend = show.legend, show.title = show.title, show.x.title = show.x.title, show.y.title = show.y.title, dot.size = dot.size, line.size = line.size ) }) } } .percents <- function(x) { insight::format_value(x = x, as_percent = TRUE, digits = 0) } .add_raw_data_to_plot <- function(p, x, rawdat, ci.style, dot.alpha, dot.size, dodge, jitter, jitter.miss, colors) { if (!requireNamespace("ggplot2", quietly = FALSE)) { stop("Package `ggplot2` needed to produce marginal effects plots. Please install it by typing `install.packages(\"ggplot2\", dependencies = TRUE)` into the console.", call. = FALSE) } # we need an own aes for this # we plot rawdata first, so it doesn't overlay the # dots / lines for marginal effects if (!is.null(rawdat)) { # recode binary response to numeric? 
if so, make sure it starts with 0 if (identical(attributes(x)$logistic, "1")) { lowest <- 0 } else { lowest <- NULL } # make sure response is numeric rawdat$response <- .factor_to_numeric(rawdat$response, lowest = lowest) # check if we have a group-variable with at least two groups if (.obj_has_name(rawdat, "group")) { if (isTRUE(attr(x, "continuous.group"))) { rawdat$group_col <- as.numeric(as.character(rawdat$group)) } else { rawdat$group_col <- rawdat$group } rawdat$group <- as.factor(rawdat$group) # levels(rawdat$group) <- unique(x$group) grps <- .n_distinct(rawdat$group) > 1 } else { grps <- FALSE } # check if we have only selected values for groups, in this case # filter raw data to match grouping colours if (grps && isFALSE(attr(x, "continuous.group")) && .n_distinct(rawdat$group) > .n_distinct(x$group)) { rawdat <- rawdat[which(rawdat$group %in% x$group), , drop = FALSE] } # if we have groups, add colour aes, to map raw data to # grouping variable if (grps) mp <- ggplot2::aes_string(x = "x", y = "response", colour = "group_col") else mp <- ggplot2::aes_string(x = "x", y = "response") # for binary response, no jittering by default if ((attr(x, "logistic", exact = TRUE) == "1" && jitter.miss) || is.null(jitter)) { p <- p + ggplot2::geom_point( data = rawdat, mapping = mp, alpha = dot.alpha, size = dot.size, show.legend = FALSE, inherit.aes = FALSE, shape = 16 ) } else { # no jitter if (is.null(jitter)) { jitter <- c(0, 0) } if (ci.style == "errorbar") { if (grps) { p <- p + ggplot2::geom_point( data = rawdat, mapping = ggplot2::aes_string(x = "x", y = "response", colour = "group_col"), alpha = dot.alpha, size = dot.size, position = ggplot2::position_jitterdodge( jitter.width = jitter[1], jitter.height = jitter[2], dodge.width = dodge ), show.legend = FALSE, inherit.aes = FALSE, shape = 16 ) } else { p <- p + ggplot2::geom_point( data = rawdat, mapping = ggplot2::aes_string(x = "x", y = "response", fill = "group_col"), alpha = dot.alpha, size = dot.size, 
position = ggplot2::position_jitterdodge( jitter.width = jitter[1], jitter.height = jitter[2], dodge.width = dodge ), show.legend = FALSE, inherit.aes = FALSE, shape = 16, color = colors[1] ) } } else { p <- p + ggplot2::geom_jitter( data = rawdat, mapping = mp, alpha = dot.alpha, size = dot.size, width = jitter[1], height = jitter[2], show.legend = FALSE, inherit.aes = FALSE, shape = 16 ) } } } else { message("Raw data not available.") } p } .add_residuals_to_plot <- function(p, x, residuals, residuals.line, ci.style, line.size, dot.alpha, dot.size, dodge, jitter, colors) { if (!requireNamespace("ggplot2", quietly = FALSE)) { stop("Package `ggplot2` needed to produce marginal effects plots. Please install it by typing `install.packages(\"ggplot2\", dependencies = TRUE)` into the console.", call. = FALSE) } if (!is.null(residuals)) { # make sure x on x-axis is on same scale if (is.numeric(x$x) && !is.numeric(residuals$x)) { residuals$x <- .factor_to_numeric(residuals$x) } residuals$facet <- NULL residuals$panel <- NULL # check if we have a group-variable with at least two groups if (.obj_has_name(residuals, "group")) { if (isTRUE(attr(x, "continuous.group")) && is.numeric(x$group)) { residuals$group_col <- as.numeric(as.character(residuals$group)) } else { residuals$group_col <- as.factor(residuals$group) } residuals$group <- as.factor(residuals$group) grps <- .n_distinct(residuals$group) > 1 } else { grps <- FALSE } # check if we have only selected values for groups, in this case # filter raw data to match grouping colours if (grps && isFALSE(attr(x, "continuous.group")) && .n_distinct(residuals$group) > .n_distinct(x$group)) { residuals <- residuals[which(residuals$group %in% x$group), , drop = FALSE] } # if we have groups, add colour aes, to map raw data to # grouping variable if (grps) mp <- ggplot2::aes_string(x = "x", y = "predicted", colour = "group_col") else mp <- ggplot2::aes_string(x = "x", y = "predicted") # if ("group" %in% colnames(residuals)) { # if 
(isTRUE(attr(x, "continuous.group"))) { # residuals$group_col <- as.numeric(as.character(residuals$group)) # } else { # residuals$group_col <- residuals$group # } # residuals$group <- as.factor(residuals$group) # mp <- ggplot2::aes_string(x = "x", y = "predicted", colour = "group_col") # } else { # mp <- ggplot2::aes_string(x = "x", y = "predicted") # } if (is.null(jitter)) { p <- p + ggplot2::geom_point( data = residuals, mapping = mp, alpha = dot.alpha, size = dot.size, show.legend = FALSE, inherit.aes = FALSE, shape = 16 ) } else { p <- p + ggplot2::geom_jitter( data = residuals, mapping = mp, alpha = dot.alpha, size = dot.size, width = jitter[1], height = jitter[2], show.legend = FALSE, inherit.aes = FALSE, shape = 16 ) } if (isTRUE(residuals.line)) { p <- p + ggplot2::geom_smooth( data = residuals, mapping = mp, method = "loess", inherit.aes = FALSE, size = line.size, se = FALSE ) } } else { message("Partial residuals not available.") } p } .add_re_data_to_plot <- function(p, x, random_effects_data, dot.alpha, dot.size, dodge, jitter) { if (!requireNamespace("ggplot2", quietly = FALSE)) { stop("Package `ggplot2` needed to produce marginal effects plots. Please install it by typing `install.packages(\"ggplot2\", dependencies = TRUE)` into the console.", call. 
= FALSE) } # make sure x on x-axis is on same scale if (is.numeric(x$x) && !is.numeric(random_effects_data$x)) { random_effects_data$x <- .factor_to_numeric(random_effects_data$x) } if ("response" %in% names(random_effects_data)) { mp <- ggplot2::aes_string(x = "x", y = "response", colour = "group_col") } else { mp <- ggplot2::aes_string(x = "x", y = "predicted", colour = "group_col") } if (is.null(jitter)) { p <- p + ggplot2::geom_point( data = random_effects_data, mapping = mp, alpha = dot.alpha, size = dot.size, position = ggplot2::position_dodge(width = dodge), show.legend = FALSE, inherit.aes = FALSE, shape = 16 ) } else { p <- p + ggplot2::geom_point( data = random_effects_data, mapping = mp, alpha = dot.alpha, size = dot.size, position = ggplot2::position_jitterdodge( jitter.width = jitter[1], jitter.height = jitter[2], dodge.width = dodge ), show.legend = FALSE, inherit.aes = FALSE, shape = 16 ) } p } .get_model_object <- function(x) { obj_name <- attr(x, "model.name", exact = TRUE) model <- NULL if (!is.null(obj_name)) { model <- tryCatch({ get(obj_name, envir = parent.frame()) }, error = function(e) { NULL }) if (is.null(model)) { model <- tryCatch({ get(obj_name, envir = globalenv()) }, error = function(e) { NULL }) } } model } ggeffects/R/residualize_over_grid.R0000644000176200001440000001236214046746430017127 0ustar liggesusers#' @title Compute partial residuals from a data grid #' @name residualize_over_grid #' #' @description This function computes partial residuals based on a data grid, #' where the data grid is usually a data frame from all combinations of factor #' variables or certain values of numeric vectors. This data grid is usually used #' as \code{newdata} argument in \code{predict()}, and can be created with #' \code{\link{new_data}}. #' #' @param grid A data frame representing the data grid, or an object of class \code{ggeffects}, as returned by \code{ggpredict()} and others. 
#' @param model The model for which to compute partial residuals. The data grid \code{grid} should match the predictors in the model.
#' @param pred_name The name of the focal predictor, for which partial residuals are computed.
#' @param protect_names Logical, if \code{TRUE}, preserves column names from the \code{ggeffects} object that is used as \code{grid}.
#' @param ... Currently not used.
#' @param type Deprecated. Formerly specified the residual type; now always \code{"working"}.
#'
#' @section Partial Residuals:
#' For \strong{generalized linear models} (glms), residualized scores are
#' computed as \code{inv.link(link(Y) + r)} where \code{Y} are the predicted
#' values on the response scale, and \code{r} are the \emph{working} residuals.
#' \cr\cr
#' For (generalized) linear \strong{mixed models}, the random effects are also
#' partialled out.
#'
#' @references Fox J, Weisberg S. Visualizing Fit and Lack of Fit in Complex Regression Models with Predictor Effect Plots and Partial Residuals. Journal of Statistical Software 2018;87.
#'
#' @return A data frame with residuals for the focal predictor.
#'
#' @examples
#' library(ggeffects)
#' set.seed(1234)
#' x <- rnorm(200)
#' z <- rnorm(200)
#' # quadratic relationship
#' y <- 2 * x + x^2 + 4 * z + rnorm(200)
#'
#' d <- data.frame(x, y, z)
#' model <- lm(y ~ x + z, data = d)
#'
#' pr <- ggpredict(model, c("x [all]", "z"))
#' head(residualize_over_grid(pr, model))
#' @export
residualize_over_grid <- function(grid, model, ...) {
  UseMethod("residualize_over_grid")
}

#' @rdname residualize_over_grid
#' @export
residualize_over_grid.data.frame <- function(grid, model, pred_name, type, ...) {
  if (!missing(type)) warning("'type' is deprecated.
Using 'working' residuals.") old_d <- insight::get_predictors(model) fun_link <- insight::link_function(model) inv_fun <- insight::link_inverse(model) predicted <- grid[[pred_name]] grid[[pred_name]] <- NULL is_fixed <- sapply(grid, function(x) length(unique(x))) == 1 grid <- grid[,!is_fixed, drop = FALSE] old_d <- old_d[, colnames(grid)[colnames(grid) %in% colnames(old_d)], drop = FALSE] if (!.is_grid(grid)) { stop("Grid for partial residuals must be a fully crossed grid.") } # for each var best_match <- NULL for (p in colnames(old_d)) { if (is.factor(old_d[[p]]) || is.logical(old_d[[p]]) || is.character(old_d[[p]])) { grid[[p]] <- as.character(grid[[p]]) old_d[[p]] <- as.character(old_d[[p]]) } else { grid[[p]] <- .validate_num(grid[[p]]) } # if factor / logical / char in old data, find where it is equal # if numeric in old data, find where it is closest best_match <- .closest(old_d[[p]], grid[[p]], best_match = best_match) } idx <- apply(best_match, 2, which) idx <- sapply(idx, "[", 1) res <- tryCatch( stats::residuals(model, type = "working"), error = function(e) { NULL } ) if (is.null(res)) { warning("Could not extract residuals.", call. = FALSE) return(NULL) } points <- grid[idx, , drop = FALSE] points[[pred_name]] <- inv_fun(fun_link(predicted[idx]) + res) # add errors points } #' @rdname residualize_over_grid #' @export residualize_over_grid.ggeffects <- function(grid, model, protect_names = TRUE, ...) { new_d <- as.data.frame(grid) new_d <- new_d[colnames(new_d) %in% c("x", "group", "facet", "panel", "predicted")] colnames(new_d)[colnames(new_d) %in% c("x", "group", "facet","panel")] <- attr(grid, "terms") points <- residualize_over_grid(new_d, model, pred_name = "predicted", ...) 
if (protect_names && !is.null(points)) { colnames_gge <- c("x", "group", "facet","panel") colnames_orig <- attr(grid,"terms") for (i in seq_along(colnames_orig)) { colnames(points)[colnames(points) == colnames_orig[i]] <- colnames_gge[i] } } points } .is_grid <- function(df) { unq <- lapply(df, unique) if (prod(sapply(unq, length)) != nrow(df)) { return(FALSE) } df2 <- do.call(expand.grid, args = unq) df2$..1 <- 1 res <- merge(df,df2, by = colnames(df), all = TRUE) return(sum(res$..1) == sum(df2$..1)) } .closest <- function(x, target, best_match) { if (is.numeric(x)) { # AD <- outer(x, target, FUN = function(x, y) abs(x - y)) AD <- abs(outer(x, target, FUN = `-`)) idx <- apply(AD, 1, function(x) x == min(x)) } else { idx <- t(outer(x, target, FUN = `==`)) } if (is.matrix(best_match)) { idx <- idx & best_match } idx } .validate_num <- function(x) { if (!is.numeric(x)) { x <- as.numeric(as.character(x)) } x } ggeffects/R/get_predictions_generic2.R0000644000176200001440000000333714046746430017511 0ustar liggesusersget_predictions_generic2 <- function(model, fitfram, ci.lvl, linv, type, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) { # get prediction type. pt <- switch( model_class, "betareg" = , "vgam" = , "feglm" = , "glmx" = , "fixest" = "link", "response" ) se <- (!is.null(ci.lvl) && !is.na(ci.lvl)) || !is.null(vcov.fun) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # get predictions prdat <- stats::predict( model, newdata = fitfram, type = pt, ... 
) fitfram$predicted <- as.vector(prdat) # get standard errors from variance-covariance matrix se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, type = type, terms = terms, model_class = model_class, vcov.fun = vcov.fun, vcov.type = vcov.type, vcov.args = vcov.args, condition = condition, interval = interval ) if (.check_returned_se(se.pred) && isTRUE(se)) { se.fit <- se.pred$se.fit fitfram <- se.pred$prediction_data # CI fitfram$conf.low <- linv(fitfram$predicted - stats::qnorm(ci) * se.fit) fitfram$conf.high <- linv(fitfram$predicted + stats::qnorm(ci) * se.fit) # copy standard errors attr(fitfram, "std.error") <- se.fit attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval") } else { # CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram$predicted <- linv(fitfram$predicted) fitfram } ggeffects/R/get_predictions_gamlss.R0000644000176200001440000000317614046746430017302 0ustar liggesusersget_predictions_gamlss <- function(model, fitfram, ci.lvl, terms, model_class, value_adjustment, condition, ...) { se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- suppressMessages( stats::predict( model, newdata = fitfram, type = "link", se.fit = FALSE, ... )) fitfram$predicted <- as.vector(prdat) # check whether prediction are requested for specific distribution parameter # and if so, use correct link-inverse function. add.args <- lapply(match.call(expand.dots = FALSE)$`...`, function(x) x) if ("what" %in% names(add.args)) what <- eval(add.args[["what"]]) else what <- "mu" linv <- insight::link_inverse(model, what = what) # did user request standard errors? 
if yes, compute CI se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, terms = terms, model_class = model_class, condition = condition ) if (se && .check_returned_se(se.pred)) { se.fit <- se.pred$se.fit fitfram <- se.pred$prediction_data # CI fitfram$conf.low <- linv(fitfram$predicted - stats::qnorm(ci) * se.fit) fitfram$conf.high <- linv(fitfram$predicted + stats::qnorm(ci) * se.fit) # copy standard errors attr(fitfram, "std.error") <- se.fit } else { # CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram$predicted <- linv(fitfram$predicted) fitfram } ggeffects/R/utils_typical_value.R0000644000176200001440000000572214046746430016632 0ustar liggesusers.typical_value <- function(x, fun = "mean", weights = NULL, predictor = NULL, log_terms = NULL, emmeans.only = FALSE, ...) { # check if we have named vectors and find the requested function # for special functions for factors, convert to numeric first fnames <- names(fun) if (!is.null(fnames)) { if (is.integer(x)) { fun <- fun[which(fnames %in% c("integer", "i"))] x <- as.numeric(x) } else if (is.numeric(x)) { fun <- fun[which(fnames %in% c("numeric", "n"))] } else if (is.factor(x)) { fun <- fun[which(fnames %in% c("factor", "f"))] if (fun != "mode") x <- as.numeric(x, keep.labels = FALSE) } } # for weighted mean, check that weights are of same length as x if (fun == "weighted.mean" && !is.null(weights)) { # make sure weights and x have same length if (length(weights) != length(x)) { # if not, tell user and change function to mean warning("Vector of weights is of different length than `x`. Using `mean` as function for typical value.", call. = FALSE) fun <- "mean" } # make sure weights are different from 1 if (all(weights == 1)) { # if not, tell user and change function to mean warning("All weight values are `1`. Using `mean` as function for typical value.", call. 
= FALSE)
      fun <- "mean"
    }
  }

  # no weights, then use normal mean function
  if (fun == "weighted.mean" && is.null(weights)) fun <- "mean"

  if (fun == "median")
    myfun <- get("median", asNamespace("stats"))
  else if (fun == "weighted.mean")
    myfun <- get("weighted.mean", asNamespace("stats"))
  else if (fun == "mode")
    myfun <- get(".mode_value", asNamespace("ggeffects"))
  else if (fun == "zero")
    return(0)
  else
    myfun <- get("mean", asNamespace("base"))

  if (is.integer(x)) {
    out <- stats::median(x, na.rm = TRUE)
  } else if (is.numeric(x)) {
    if (fun == "weighted.mean")
      out <- do.call(myfun, args = list(x = x, na.rm = TRUE, w = weights, ...))
    else
      out <- do.call(myfun, args = list(x = x, na.rm = TRUE, ...))
  } else if (is.factor(x)) {
    if (fun != "mode")
      out <- levels(x)[1]
    else
      out <- .mode_value(x)
  } else {
    out <- .mode_value(x)
  }

  # if a log-transformed variable is held constant, we need to check
  # that it's not negative for its typical value - else, predict()
  # might fail due to log()...
  if (!is.null(log_terms) && !is.null(predictor) && predictor %in% log_terms) {
    if (out <= 0) out <- .5
  }

  out
}


.mode_value <- function(x, ...) 
{
  # create frequency table, to find most common value
  counts <- table(x)
  modus <- names(counts)[max(counts) == counts]

  # in case all values appear equally often, use first value
  if (length(modus) > 1) modus <- modus[1]

  # check if it's numeric
  if (!is.na(suppressWarnings(as.numeric(modus))))
    as.numeric(modus)
  else
    modus
}
ggeffects/R/ggemmeans_zi_predictions.R0000644000176200001440000000534414030655632017615 0ustar liggesusers.ggemmeans_zi_predictions <- function(model, model_frame, preds, ci.lvl, terms, cleaned_terms, value_adjustment, condition, nsim = 1000, type = "fe") {
  prdat <- exp(preds$x1$emmean) * (1 - stats::plogis(preds$x2$emmean))

  # compute ci, two-ways
  if (!is.null(ci.lvl) && !is.na(ci.lvl))
    ci <- (1 + ci.lvl) / 2
  else
    ci <- .975

  # data grid
  newdata <- .data_grid(
    model = model,
    model_frame = model_frame,
    terms = terms,
    value_adjustment = value_adjustment,
    factor_adjustment = FALSE,
    show_pretty_message = FALSE,
    condition = condition
  )

  # 2nd data grid, reasons see below
  data_grid <- .data_grid(
    model = model,
    model_frame = model_frame,
    terms = terms,
    value_adjustment = value_adjustment,
    show_pretty_message = FALSE,
    condition = condition,
    emmeans.only = FALSE
  )

  # Since the zero inflation and the conditional model are working in "opposite
  # directions", confidence intervals can not be derived directly from the
  # "predict()"-function. Thus, confidence intervals for type = "fe.zi" are
  # based on quantiles of simulated draws from a multivariate normal distribution
  # (see also _Brooks et al. 2017, pp.391-392_ for details).
  prdat.sim <- .simulate_zi_predictions(model, newdata, nsim, terms, value_adjustment, condition)

  if (is.null(prdat.sim)) {
    stop("Predicted values could not be computed. Try reducing the number of simulations, using argument `nsim` (e.g. `nsim = 100`)", call. 
= FALSE) } # we need two data grids here: one for all combination of levels from the # model predictors ("newdata"), and one with the current combinations only # for the terms in question ("data_grid"). "sims" has always the same # number of rows as "newdata", but "data_grid" might be shorter. So we # merge "data_grid" and "newdata", add mean and quantiles from "sims" # as new variables, and then later only keep the original observations # from "data_grid" - by this, we avoid unequal row-lengths. sims <- exp(prdat.sim$cond) * (1 - stats::plogis(prdat.sim$zi)) prediction_data <- .join_simulations(data_grid, newdata, prdat, sims, ci, cleaned_terms) if (type == "re.zi") { revar <- .get_residual_variance(model) # get link-function and back-transform fitted values # to original scale, so we compute proper CI if (!is.null(revar)) { lf <- insight::link_function(model) prediction_data$conf.low <- exp(lf(prediction_data$conf.low) - stats::qnorm(ci) * sqrt(revar)) prediction_data$conf.high <- exp(lf(prediction_data$conf.high) + stats::qnorm(ci) * sqrt(revar)) prediction_data$std.error <- sqrt(prediction_data$std.error^2 + revar) } } prediction_data } ggeffects/R/standard_error_predictions.R0000644000176200001440000001352114046746430020161 0ustar liggesusers# get standard errors of predictions from model matrix and vcov ---- .standard_error_predictions <- function( model, prediction_data, value_adjustment, terms, model_class = NULL, type = "fe", vcov.fun = NULL, vcov.type = NULL, vcov.args = NULL, condition = NULL, interval = NULL) { se <- tryCatch( { .safe_se_from_vcov( model, prediction_data, value_adjustment, terms, model_class, type, vcov.fun, vcov.type, vcov.args, condition, interval ) }, error = function(x) { x }, warning = function(x) { NULL }, finally = function(x) { NULL } ) if (is.null(se) || inherits(se, c("error", "simpleError"))) { insight::print_color("Error: Confidence intervals could not be computed.\n", "red") if (inherits(se, c("error", "simpleError"))) { 
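    # Note (general R behavior, not specific to this package): a condition
    # object behaves like a list, so for an error `e`, e[[1]] is the message
    # and e[[2]] is the call, e.g.
    #   e <- tryCatch(stop("boom"), error = function(e) e)
    #   e[[1]]  # "boom"
    # which is what the diagnostic output below relies on.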
cat(sprintf("* Reason: %s\n", .safe_deparse(se[[1]]))) err.source <- .safe_deparse(se[[2]]) if (all(grepl("^(?!(safe_se_from_vcov))", err.source, perl = TRUE))) { cat(sprintf("* Source: %s\n", err.source)) } } se <- NULL } se } .safe_se_from_vcov <- function(model, prediction_data, value_adjustment, terms, model_class, type, vcov.fun, vcov.type, vcov.args, condition, interval) { model_frame <- insight::get_data(model) # check random effect terms. We can't compute SE if data has # factors with only one level, however, if user conditions on # random effects and only conditions on one level, it is indeed # possible to calculate SE - so, ignore random effects for the # check of one-level-factors only re.terms <- insight::find_random(model, split_nested = TRUE, flatten = TRUE) # we can't condition on categorical variables if (!is.null(condition)) { cn <- names(condition) cn.factors <- sapply(cn, function(.x) is.factor(model_frame[[.x]]) && !(.x %in% re.terms)) condition <- condition[!cn.factors] if (.is_empty(condition)) condition <- NULL } # copy data frame with predictions newdata <- .data_grid( model, model_frame, terms, value_adjustment = value_adjustment, factor_adjustment = FALSE, show_pretty_message = FALSE, condition = condition ) # make sure we have enough values to compute CI # nlevels_terms <- sapply( # colnames(newdata), # function(.x) !(.x %in% re.terms) && is.factor(newdata[[.x]]) && nlevels(newdata[[.x]]) == 1 # ) # if (any(nlevels_terms)) { # not_enough <- colnames(newdata)[which(nlevels_terms)[1]] # remove_lvl <- paste0("[", gsub(pattern = "(.*)\\[(.*)\\]", replacement = "\\2", x = terms[which(.clean_terms(terms) == not_enough)]), "]", collapse = "") # stop(sprintf("`%s` does not have enough factor levels. Try to remove `%s`.", not_enough, remove_lvl), call. = TRUE) # } # add response to newdata. For models fitted with "glmmPQL", # the response variable is renamed internally to "zz". 
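  # Why a (dummy) response is needed: rebuilding the model matrix for the new
  # data grid may evaluate the full model formula, including the left-hand
  # side. A minimal sketch (hypothetical names, assuming a response "y"):
  #   mm <- model.matrix(stats::terms(model), data = cbind(y = 0, newdata))
  # The constant 0 is only a placeholder; its value does not affect the
  # variance-covariance computation of the predictions.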
if (inherits(model, "glmmPQL")) { new.resp <- 0 names(new.resp) <- "zz" } else { fr <- insight::find_response(model, combine = FALSE) new.resp <- rep(0, length.out = length(fr)) names(new.resp) <- fr } new.resp <- new.resp[setdiff(names(new.resp), colnames(newdata))] newdata <- cbind(as.list(new.resp), newdata) # clean terms from brackets terms <- .clean_terms(terms) # sort data by grouping levels, so we have the correct order # to slice data afterwards if (length(terms) > 2) { trms <- terms[3] newdata <- newdata[order(newdata[[trms]]), , drop = FALSE] prediction_data <- prediction_data[order(prediction_data[[trms]]), , drop = FALSE] } if (length(terms) > 1) { trms <- terms[2] newdata <- newdata[order(newdata[[trms]]), , drop = FALSE] prediction_data <- prediction_data[order(prediction_data[[trms]]), , drop = FALSE] } trms <- terms[1] newdata <- newdata[order(newdata[[trms]]), , drop = FALSE] prediction_data <- prediction_data[order(prediction_data[[trms]]), , drop = FALSE] # rownames were resorted as well, which causes troubles in model.matrix rownames(newdata) <- NULL rownames(prediction_data) <- NULL vmatrix <- tryCatch( { .vcov_helper(model, model_frame, model_class, newdata, vcov.fun, vcov.type, vcov.args, terms) }, error = function(e) { NULL } ) pr_int <- FALSE if (is.null(vmatrix)) { message("Could not compute variance-covariance matrix of predictions. 
No confidence intervals are returned.")
    se.fit <- NULL
  } else {
    pvar <- diag(vmatrix)

    # condition on random effect variances
    if (type == "re" || (!is.null(interval) && interval == "prediction")) {
      sig <- .get_residual_variance(model)
      if (!is.null(sig) && sig > 0.0001) {
        pvar <- pvar + sig
        pr_int <- TRUE
      }
    }

    se.fit <- sqrt(pvar)

    n_pred <- nrow(prediction_data)
    n_se <- length(se.fit)

    # shorten to length of prediction_data
    if (!is.null(model_class) && model_class %in% c("polr", "multinom", "mixor")) {
      se.fit <- rep(se.fit, each = .n_distinct(prediction_data$response.level))
    } else if (type == "re" && n_se < n_pred && n_pred %% n_se == 0) {
      se.fit <- rep(se.fit, times = n_pred / n_se)
    } else {
      se.fit <- se.fit[1:n_pred]
    }
  }

  std_error <- list(prediction_data = prediction_data, se.fit = se.fit)

  attr(std_error, "prediction_interval") <- pr_int

  std_error
}
ggeffects/R/get_predictions_clmm.R0000644000176200001440000000311614046746430016736 0ustar liggesusersget_predictions_clmm <- function(model, terms, value_adjustment, condition, ci.lvl, linv, ...) {

  if (!requireNamespace("emmeans", quietly = TRUE)) {
    stop("Package `emmeans` required to compute estimated marginal means for clmm-models.", call. 
= FALSE) } values.at <- .data_grid( model = model, model_frame = insight::get_data(model), terms = terms, value_adjustment = value_adjustment, condition = condition, show_pretty_message = FALSE, emmeans.only = TRUE ) # no predicted values at random terms allowed re.terms <- insight::find_random(model, split_nested = TRUE, flatten = TRUE) fe.terms <- insight::find_predictors(model, flatten = TRUE) if (any(re.terms %in% names(values.at)) && !any(re.terms %in% fe.terms)) { insight::print_color("Predicted values can't be computed for levels of random effects from 'clmm' models.\n", "red") cat(sprintf("Please remove following variables from 'terms': %s\n", paste0(re.terms[which(re.terms %in% names(values.at))], collapse = ", "))) return(NULL) } emmpred <- emmeans::emmeans( object = model, spec = c(insight::find_response(model, combine = FALSE), .clean_terms(terms)), at = values.at, mode = "prob" ) fitfram <- as.data.frame(stats::confint(emmpred, level = ci.lvl)) fitfram <- .var_rename( fitfram, prob = "predicted", SE = "std.error", asymp.LCL = "conf.low", asymp.UCL = "conf.high" ) colnames(fitfram)[1] <- "response.level" # copy standard errors attr(fitfram, "std.error") <- fitfram$std.error fitfram } ggeffects/R/get_predictions_bayesx.R0000644000176200001440000000047013564761056017306 0ustar liggesusersget_predictions_bayesx <- function(model, data_grid, ...) { prdat <- suppressMessages( stats::predict( object = model, newdata = data_grid, type = "link" )) data_grid$predicted <- as.vector(prdat) data_grid$conf.low <- NA data_grid$conf.high <- NA data_grid } ggeffects/R/get_predictions_bamlss.R0000644000176200001440000000053113564761043017266 0ustar liggesusersget_predictions_bamlss <- function(model, data_grid, linv, ...) 
{ prdat <- suppressMessages( stats::predict( object = model, newdata = data_grid, model = "mu", type = "link" )) data_grid$predicted <- linv(as.vector(prdat)) data_grid$conf.low <- NA data_grid$conf.high <- NA data_grid } ggeffects/R/get_predictions_lmrob_base.R0000644000176200001440000000114013451124203020071 0ustar liggesusersget_predictions_lmrob_base <- function(model, fitfram, ci.lvl, ...) { if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- "confidence" else ci <- "none" prdat <- stats::predict( model, newdata = fitfram, type = "response", interval = ci, level = ci.lvl, ... ) # get predicted values, on link-scale fitfram$predicted <- prdat[, "fit"] if (ci == "none") { fitfram$conf.low <- NA fitfram$conf.high <- NA } else { fitfram$conf.low <- prdat[, "lwr"] fitfram$conf.high <- prdat[, "upr"] } fitfram } ggeffects/R/get_predictions_mixor.R0000644000176200001440000000305514036250437017142 0ustar liggesusersget_predictions_mixor <- function(model, fitfram, ci.lvl, linv, value_adjustment, terms, model_class, condition, ...) { se <- (!is.null(ci.lvl) && !is.na(ci.lvl)) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- stats::predict( model, newdata = fitfram, ... ) prdat <- as.data.frame(prdat$predicted) # bind predictions to model frame fitfram <- cbind(prdat, fitfram) # for proportional ordinal logistic regression (see MASS::polr), # we have predicted values for each response category. 
Hence, # gather columns fitfram <- .gather(fitfram, names_to = "response.level", values_to = "predicted", colnames(prdat)) se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, terms = terms, model_class = model_class, condition = condition ) if (.check_returned_se(se.pred) && isTRUE(se)) { se.fit <- se.pred$se.fit fitfram <- se.pred$prediction_data # CI fitfram$conf.low <- linv(stats::qlogis(fitfram$predicted) - stats::qnorm(ci) * se.fit) fitfram$conf.high <- linv(stats::qlogis(fitfram$predicted) + stats::qnorm(ci) * se.fit) # copy standard errors attr(fitfram, "std.error") <- se.fit attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval") } else { # CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/utils_get_data_grid.R0000644000176200001440000004610314053423627016542 0ustar liggesusers# "factor_adjustment" indicates if factors should be held constant or not # need to be false for computing std.error for merMod objects # value_adjustment is the function to calculate at which value non-focal # terms are held constant (mean, median, ...) .data_grid <- function(model, model_frame, terms, value_adjustment, factor_adjustment = TRUE, show_pretty_message = TRUE, condition = NULL, emmeans.only = FALSE) { # special handling for coxph if (inherits(model, c("coxph", "coxme"))) { surv.var <- which(colnames(model_frame) == insight::find_response(model)) model_frame <- .remove_column(model_frame, surv.var) } model_info <- .get_model_info(model) # make sure we don't have arrays as variables model_frame[] <- lapply(model_frame, function(i) if (is.array(i)) as.data.frame(i) else i) # check for logical variables, might not work if (any(sapply(model_frame, is.logical))) { stop("Variables of type 'logical' do not work, please coerce to factor and fit the model again.", call. = FALSE) } # any weights? 
w <- insight::get_weights(model) if (is.null(w) || all(w == 1)) w <- NULL # get random effects (grouping factor) random_effect_terms <- insight::find_random(model, split_nested = TRUE, flatten = TRUE) ## TODO check for other panelr models # clean variable names # if (!inherits(model, "wbm")) { colnames(model_frame) <- insight::clean_names(colnames(model_frame)) # } # get specific levels focal_terms <- .get_representative_values(terms, model_frame) # and all specified variables all_terms <- .clean_terms(terms) # check if user has any predictors with log-transformation inside # model formula, but *not* used back-transformation "exp". Tell user # so she's aware of the problem offset_log_term <- tryCatch( { olt <- NULL if (!inherits(model, "brmsfit") && show_pretty_message && .has_log(model)) { any_offset_log_term <- .get_offset_log_terms(model) # check if we have offset() in formula, with transformed variable if (any(any_offset_log_term)) { clean.term <- insight::find_predictors(model, effects = "all", component = "all", flatten = FALSE) clean.term <- unlist(clean.term[c("conditional", "random", "instruments")])[any_offset_log_term] # try to back-transform offset_function <- .get_offset_transformation(model) if (identical(offset_function, "log")) { warning(sprintf("Model uses a transformed offset term. Predictions may not be correct. Please apply transformation of offset term to the data before fitting the model and use 'offset(%s)' in the model formula.\n", clean.term), call. = FALSE) olt <- clean.term } } } olt }, error = function(x) { NULL }, warning = function(x) { NULL }, finally = function(x) { NULL } ) # Check if model has splines, and if so, tell user that he may use # all values - except for gam and vgam models. "predict()" seems # stable even for large data frame for gam/vgam. Especially for # mixed models, computing SE and CI is very memory consuming, leading # to memory allocation errors. 
That's why by default values for continuous # variables are "prettified" to a smaller set of unique values. use_all_values <- FALSE # for these models, always all values are used all_values_models <- c("Gam", "gam", "vgam", "glm", "lm", "brmsfit", "bamlss", "gamlss", "glmx", "feglm") if (.has_splines(model) && !.uses_all_tag(terms)) { if (inherits(model, all_values_models)) { use_all_values <- TRUE } else if (show_pretty_message) { message(sprintf("Model contains splines or polynomial terms. Consider using `terms=\"%s [all]\"` to get smooth plots. See also package-vignette 'Marginal Effects at Specific Values'.", all_terms[1])) show_pretty_message <- FALSE } } if (.has_poly(model) && !.uses_all_tag(terms) && !use_all_values) { if (inherits(model, all_values_models)) { use_all_values <- TRUE } else if (show_pretty_message) { message(sprintf("Model contains polynomial or cubic / quadratic terms. Consider using `terms=\"%s [all]\"` to get smooth plots. See also package-vignette 'Marginal Effects at Specific Values'.", all_terms[1])) show_pretty_message <- FALSE } } if (.has_trigonometry(model) && !.uses_all_tag(terms) && !use_all_values) { if (inherits(model, all_values_models)) { use_all_values <- TRUE } else if (show_pretty_message) { message(sprintf("Model contains trigonometric terms (sinus, cosinus, ...). Consider using `terms=\"%s [all]\"` to get smooth plots. 
See also package-vignette 'Marginal Effects at Specific Values'.", all_terms[1])) show_pretty_message <- FALSE } } # find terms for which no specific values are given conditional_terms <- which(!(all_terms %in% names(focal_terms))) # prettify numeric vectors, get representative values constant_levels <- .prettify_data( conditional_terms = conditional_terms, original_model_frame = model_frame, terms = all_terms, use_all_values = use_all_values, show_pretty_message = show_pretty_message && model_info$is_binomial ) names(constant_levels) <- all_terms[conditional_terms] focal_terms <- c(focal_terms, constant_levels) # check for monotonic terms and valid values. In case 'mo()' is used, # and predictor is numeric, prettyfied values in the data grid are based # on the range of the numeric variable, although only those values are allowed # in the data grid that actually appear in the data if (inherits(model, "brmsfit")) { model_terms <- insight::find_terms(model, flatten = TRUE) monotonics <- grepl("mo\\((.*)\\)", model_terms) if (any(monotonics)) { mo_terms <- gsub("mo\\((.*)\\)", "\\1", model_terms[monotonics]) invalid_levels <- unlist(lapply(mo_terms, function(mt) { if (mt %in% names(focal_terms) && mt %in% colnames(model_frame)) { !all(model_frame[[mt]] %in% focal_terms[[mt]]) } else { FALSE } })) if (any(invalid_levels)) { stop(insight::format_message(sprintf("Variable(s) '%s' are used as monotonic effects, however, only values that are also present in the data are allowed for predictions. Consider converting variables used in 'mo()' into (ordered) factors before fitting the model.", paste0(mo_terms, collapse = ", "))), call. 
= FALSE) } } } ## TODO check for other panelr models # get names of all predictor variable # if (inherits(model, "wbm")) { # model_predictors <- colnames(model_frame) # } else { # model_predictors <- insight::find_predictors(model, effects = "all", component = "all", flatten = TRUE) # } offset_term <- .offset_term(model, show_pretty_message) model_predictors <- c(insight::find_predictors(model, effects = "all", component = "all", flatten = TRUE), offset_term) if (inherits(model, "wbm")) { model_predictors <- unique(c(insight::find_response(model), model_predictors, model@call_info$id, model@call_info$wave)) } # check if offset term is in model frame if (!is.null(offset_term) && !(offset_term %in% colnames(model_frame))) { model_frame <- .add_offset_to_mf(model, model_frame, offset_term) } # get count of terms, and number of columns n_predictors <- length(model_predictors) # remove NA from values, so we don't have expanded data grid # with missing values. this causes an error with predict() if (any(sapply(focal_terms, anyNA))) { focal_terms <- lapply(focal_terms, function(.x) as.vector(stats::na.omit(.x))) } ## TODO check, it should actually no longer happen that # the values of "model_predictors" are not in the column names of # the model frame "model_frame" # names of predictor variables may vary, e.g. if log(x) # or poly(x) etc. is used. 
so check if we have correct # predictor names that also appear in model frame ## TODO brms does currently not support "terms()" generic if (!inherits(model, "wbm")) { if (sum(!(model_predictors %in% colnames(model_frame))) > 0 && !inherits(model, c("brmsfit", "MCMCglmm"))) { # get terms from model directly model_predictors <- tryCatch( { attr(stats::terms(model), "term.labels", exact = TRUE) }, error = function(e) { NULL } ) } # 2nd check if (is.null(model_predictors) || sum(!(model_predictors %in% colnames(model_frame))) > 0) { # get terms from model frame column names model_predictors <- colnames(model_frame) # we may have more terms now, e.g. intercept. remove those now if (length(model_predictors) > n_predictors) model_predictors <- model_predictors[2:(n_predictors + 1)] } } else { model_predictors <- model_predictors[model_predictors %in% colnames(model_frame)] } # keep those, which we did not process yet model_predictors <- model_predictors[!(model_predictors %in% names(focal_terms))] # if we have weights, and typical value is mean, use weighted means # as function for the typical values if (!.is_empty(w) && length(w) == nrow(model_frame) && value_adjustment == "mean") { value_adjustment <- "weighted.mean" } if (value_adjustment == "weighted.mean" && .is_empty(w)) { value_adjustment <- "mean" } # do we have variables that should be held constant at a # specific value? if (!is.null(condition) && !is.null(names(condition))) { focal_terms <- c(focal_terms, as.list(condition)) model_predictors <- model_predictors[!(model_predictors %in% names(condition))] } # add all constant values to list. 
For those predictors that have to be # held constant, use "typical" values - mean/median for numeric values, # reference level for factors and most common element for character vectors if (isTRUE(emmeans.only)) { # adjust constant values, special handling for emmeans only constant_values <- lapply(model_predictors, function(x) { pred <- model_frame[[x]] if (!is.factor(pred) && !x %in% random_effect_terms) { .typical_value(pred, fun = value_adjustment, weights = w, predictor = x, log_terms = .which_log_terms(model), emmeans.only = emmeans.only) } }) names(constant_values) <- model_predictors constant_values <- .compact_list(constant_values) } else if (factor_adjustment) { # adjust constant values, factors set to reference level constant_values <- lapply(model_predictors, function(x) { pred <- model_frame[[x]] if (is.factor(pred)) pred <- droplevels(pred) .typical_value(pred, fun = value_adjustment, weights = w, predictor = x, log_terms = .which_log_terms(model)) }) names(constant_values) <- model_predictors } else { # adjust constant values, use all factor levels re.grp <- insight::find_random(model, split_nested = TRUE, flatten = TRUE) # if factors should not be held constant (needed when computing # std.error for merMod objects), we need all factor levels, # and not just the typical value constant_values <- lapply(model_predictors, function(.x) { # get group factors from random effects is.re.grp <- !is.null(re.grp) && .x %in% re.grp x <- model_frame[[.x]] # only get levels if not random effect if (is.factor(x) && !is.re.grp) { levels(droplevels(x)) } else { if (is.factor(x)) x <- droplevels(x) .typical_value(x, fun = value_adjustment, weights = w) } }) names(constant_values) <- model_predictors } # for brms-models with additional response information, we need # also the number of trials to calculate predictions n.trials <- NULL if (!is.null(model_info) && model_info$is_trial && inherits(model, "brmsfit")) { tryCatch( { rv <- insight::find_response(model, combine = 
FALSE) # check if trials-variable is held constant at another value already if (!(rv[2] %in% names(condition))) { n.trials <- as.integer(stats::median(model_frame[[rv[2]]])) if (!.is_empty(n.trials)) { constant_values <- c(constant_values, list(n.trials)) names(constant_values)[length(constant_values)] <- rv[2] } } }, error = function(x) { NULL } ) } # for MixMod, we need mean value of response as well... if (inherits(model, c("MixMod", "MCMCglmm"))) { constant_values <- c(constant_values, .typical_value(insight::get_response(model))) names(constant_values)[length(constant_values)] <- insight::find_response(model, combine = FALSE) } # add constant values. focal_terms <- c(focal_terms, constant_values) # stop here for emmeans-objects if (isTRUE(emmeans.only)) { # remove grouping factor of RE from constant values # only applicable for MixMod objects if (inherits(model, "MixMod") && !is.null(random_effect_terms) && !.is_empty(constant_values) && any(random_effect_terms %in% names(constant_values))) { constant_values <- constant_values[!(names(constant_values) %in% random_effect_terms)] } # save names focal_term_names <- names(focal_terms) # restore original type focal_terms <- lapply(focal_term_names, function(x) { # check for consistent vector type: numeric if (is.numeric(model_frame[[x]]) && !is.numeric(focal_terms[[x]])) return(.factor_to_numeric(focal_terms[[x]])) # check for consistent vector type: factor if (is.factor(model_frame[[x]]) && !is.factor(focal_terms[[x]])) return(as.character(focal_terms[[x]])) # else return original vector return(focal_terms[[x]]) }) # add back names names(focal_terms) <- focal_term_names # save constant values as attribute attr(focal_terms, "constant.values") <- constant_values attr(focal_terms, "n.trials") <- n.trials # remember if grouping "factor" is numeric. this is possibly required # later when plotting data points for continuous predictors that are # held constant at their mean/sd values or similar. 
if (length(terms) > 1) {
      attr(focal_terms, "continuous.group") <- is.numeric(focal_terms[[2]])
    } else {
      attr(focal_terms, "continuous.group") <- FALSE
    }

    return(focal_terms)
  }

  # create data frame with all unique combinations
  dat <- as.data.frame(expand.grid(focal_terms, stringsAsFactors = TRUE))

  # we have to check type consistency. If the user specified certain values
  # (e.g. "education [1,3]"), these are returned as string and coerced
  # to factor, even if the original vector was numeric. In this case, we have
  # to coerce back these variables. Else, predict() complains that the model
  # was fitted with numeric, but newdata has factor (or vice versa).
  datlist <- lapply(colnames(dat), function(x) {
    # check for consistent vector type: numeric
    if (is.numeric(model_frame[[x]]) && !is.numeric(dat[[x]]))
      return(.factor_to_numeric(dat[[x]]))

    # check for consistent vector type: factor
    if (is.factor(model_frame[[x]]) && !is.factor(dat[[x]]))
      return(as.factor(dat[[x]]))

    # else return original vector
    return(dat[[x]])
  })

  # get list names. we need to remove patterns like "log()" etc.
  names(datlist) <- names(focal_terms)
  datlist <- as.data.frame(datlist)

  # in case we have variable names with white space, fix here
  if (any(names(focal_terms) != colnames(datlist))) {
    colnames(datlist) <- names(focal_terms)
  }

  if (inherits(model, "wbm")) {
    colnames(datlist) <- names(focal_terms)
  }

  # check if predictions should be conditioned on random effects,
  # but not on each group level. If so, set random effect to NA
  # which will return predictions on a population level.
  # See ?glmmTMB::predict
  if (inherits(model, c("glmmTMB", "merMod", "rlmerMod", "MixMod", "brmsfit", "lme"))) {
    cleaned_terms <- .clean_terms(terms)

    # check if we have fixed effects as grouping factor in random effects as well... 
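    # Illustrative sketch (assumed behavior, see ?glmmTMB::predict): setting a
    # grouping variable to NA in the data grid requests population-level
    # predictions for glmmTMB/brms/MixMod models, e.g.
    #   newdata$subject <- NA
    #   predict(model, newdata = newdata)
    # while for merMod/rlmerMod/lme models the code below uses 0 instead of NA.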
cleaned_terms <- unique(c(cleaned_terms, insight::find_predictors(model, effects = "fixed", flatten = TRUE))) # if so, remove from random-effects here random_effect_terms <- random_effect_terms[!(random_effect_terms %in% cleaned_terms)] if (!.is_empty(random_effect_terms) && !.is_empty(constant_values)) { # need to check if predictions are conditioned on a specific # value of the random effect if (inherits(model, c("glmmTMB", "brmsfit", "MixMod"))) { for (i in random_effect_terms) { if (i %in% names(constant_values)) { datlist[[i]] <- NA constant_values[i] <- "NA (population-level)" } } } else if (inherits(model, c("merMod", "rlmerMod", "lme"))) { for (i in random_effect_terms) { if (i %in% names(constant_values)) { datlist[[i]] <- 0 constant_values[i] <- "0 (population-level)" } } } if (inherits(model, "rlmerMod")) { datlist[] <- lapply(colnames(datlist), function(x) { if (x %in% names(constant_values) && !(x %in% random_effect_terms) && is.factor(datlist[[x]])) { levels(datlist[[x]]) <- levels(model_frame[[x]]) } datlist[[x]] }) } } } # save constant values as attribute attr(datlist, "constant.values") <- constant_values attr(datlist, "n.trials") <- n.trials # remember if grouping "factor" is numeric. this is possibly required # later when plotting data points for continuous predictors that are # held constant at their mean/sd values or similar.
if (length(terms) > 1) { attr(datlist, "continuous.group") <- is.numeric(datlist[[2]]) } else { attr(datlist, "continuous.group") <- FALSE } w <- insight::find_weights(model) if (!is.null(w) && !inherits(model, "brmsfit")) { datlist$.w <- as.numeric(NA) colnames(datlist)[ncol(datlist)] <- w } datlist } .add_offset_to_mf <- function(x, model_frame, offset_term) { # first try, parent frame dat <- tryCatch( { eval(x$call$data, envir = parent.frame()) }, error = function(e) { NULL } ) if (is.null(dat)) { # second try, global env dat <- tryCatch( { eval(x$call$data, envir = globalenv()) }, error = function(e) { NULL } ) } if (!is.null(dat) && .obj_has_name(x$call, "subset")) { dat <- subset(dat, subset = eval(x$call$subset)) } tryCatch( { dat <- stats::na.omit(dat[c(intersect(colnames(model_frame), colnames(dat)), offset_term)]) cbind(model_frame, dat[offset_term]) }, error = function(e) { model_frame } ) } ggeffects/R/get_predictions_tobit.R0000644000176200001440000000172313451124203017114 0ustar liggesusersget_predictions_tobit <- function(model, fitfram, ci.lvl, linv, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- stats::predict( model, newdata = fitfram, type = "lp", se.fit = se, ... ) # did user request standard errors? 
if yes, compute CI if (se) { # copy predictions fitfram$predicted <- linv(prdat$fit) # calculate CI fitfram$conf.low <- linv(prdat$fit - stats::qnorm(ci) * prdat$se.fit) fitfram$conf.high <- linv(prdat$fit + stats::qnorm(ci) * prdat$se.fit) # copy standard errors attr(fitfram, "std.error") <- prdat$se.fit } else { # copy predictions fitfram$predicted <- linv(as.vector(prdat)) # no CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/get_predictions_glm.R0000644000176200001440000000171514030655632016564 0ustar liggesusersget_predictions_glm <- function(model, fitfram, ci.lvl, linv, value_adjustment, model_class, terms, vcov.fun, vcov.type, vcov.args, condition, interval, type, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) && is.null(vcov.fun) if (type == "sim") { # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # simulate predictions .do_simulate(model, terms, ci, ...) } else { # for models from "robust"-pkg (glmRob) we need to # suppress warnings about fake models prdat <- suppressWarnings(stats::predict.glm( model, newdata = fitfram, type = "link", se.fit = se, ... )) # copy predictions .generic_prediction_data(model, fitfram, linv, prdat, se, ci.lvl, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval) } } ggeffects/R/get_predictions_glmmTMB.R0000644000176200001440000001500314046746430017303 0ustar liggesusersget_predictions_glmmTMB <- function(model, data_grid, ci.lvl, linv, type, terms, value_adjustment, condition, ...) { # does user want standard errors? 
se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # copy object predicted_data <- data_grid model_info <- insight::model_info(model) clean_terms <- .clean_terms(terms) # check if we have zero-inflated model part if (!model_info$is_zero_inflated && type %in% c("fe.zi", "re.zi", "zi.prob")) { if (type == "zi.prob") stop("Model has no zero-inflation part.") else if (type == "fe.zi") type <- "fe" else type <- "re" message(sprintf("Model has no zero-inflation part. Changing prediction-type to \"%s\".", type)) } # check whether predictions should be conditioned # on random effects (grouping level) or not. if (type %in% c("fe", "fe.zi")) ref <- NA else ref <- NULL additional_dot_args <- lapply(match.call(expand.dots = FALSE)$`...`, function(x) x) if ("nsim" %in% names(additional_dot_args)) nsim <- eval(additional_dot_args[["nsim"]]) else nsim <- 1000 # predictions conditioned on zero-inflation component if (type %in% c("fe.zi", "re.zi")) { prdat <- as.vector(stats::predict( model, newdata = data_grid, type = "response", se.fit = FALSE, re.form = ref, ... )) if (!se) { predicted_data$predicted <- prdat predicted_data$conf.low <- NA predicted_data$conf.high <- NA } else { model_frame <- insight::get_data(model) # we need a data grid with combination from *all* levels for # all model predictors, so the data grid has the same number # of rows as our simulated data from ".simulate_zi_predictions" newdata <- .data_grid( model = model, model_frame = model_frame, terms = terms, value_adjustment = value_adjustment, factor_adjustment = FALSE, show_pretty_message = FALSE, condition = condition ) # Since the zero inflation and the conditional model are working in "opposite # directions", confidence intervals can not be derived directly from the # "predict()"-function. 
Thus, confidence intervals for type = "fe.zi" are # based on quantiles of simulated draws from a multivariate normal distribution # (see also _Brooks et al. 2017, pp.391-392_ for details). prdat.sim <- .simulate_zi_predictions(model, newdata, nsim, terms, value_adjustment, condition) if (any(sapply(prdat.sim, nrow) == 0)) { stop("Could not simulate predictions. Maybe you have used 'scale()' in the formula? If so, please standardize your data before fitting the model.", call. = FALSE) } if (is.null(prdat.sim) || inherits(prdat.sim, c("error", "simpleError"))) { insight::print_color("Error: Confidence intervals could not be computed.\n", "red") if (inherits(prdat.sim, c("error", "simpleError"))) { cat(sprintf("* Reason: %s\n", .safe_deparse(prdat.sim[[1]]))) cat(sprintf("* Source: %s\n", .safe_deparse(prdat.sim[[2]]))) } predicted_data$predicted <- prdat predicted_data$conf.low <- NA predicted_data$conf.high <- NA } else { # we need two data grids here: one for all combination of levels from the # model predictors ("newdata"), and one with the current combinations only # for the terms in question ("data_grid"). "sims" has always the same # number of rows as "newdata", but "data_grid" might be shorter. So we # merge "data_grid" and "newdata", add mean and quantiles from "sims" # as new variables, and then later only keep the original observations # from "data_grid" - by this, we avoid unequal row-lengths. 
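# Side note (illustration only, not executed code): for zero-inflated models
# the expected response per observation combines both model parts,
#   E(y) = exp(eta_cond) * (1 - plogis(eta_zi)),
# i.e. the conditional (count) mean multiplied by the probability of *not*
# being an excess zero. The same combination is applied to each simulated
# draw when computing the confidence intervals.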
sims <- exp(prdat.sim$cond) * (1 - stats::plogis(prdat.sim$zi)) predicted_data <- .join_simulations(data_grid, newdata, prdat, sims, ci, clean_terms) if (type == "re.zi") { revar <- .get_residual_variance(model) # get link-function and back-transform fitted values # to original scale, so we compute proper CI if (!is.null(revar)) { lf <- insight::link_function(model) predicted_data$conf.low <- exp(lf(predicted_data$conf.low) - stats::qnorm(ci) * sqrt(revar)) predicted_data$conf.high <- exp(lf(predicted_data$conf.high) + stats::qnorm(ci) * sqrt(revar)) predicted_data$std.error <- sqrt(predicted_data$std.error^2 + revar) } } } } } else if (type == "sim") { # predictions conditioned on zero-inflation component and random # effects, based on simulations predicted_data <- simulate_predictions(model, nsim, clean_terms, ci, type) } else { # predictions conditioned on count or zi-component only if (type == "zi.prob") { prdat <- stats::predict( model, newdata = data_grid, type = "zlink", se.fit = se, re.form = ref, ... ) linv <- stats::plogis } else { prdat <- stats::predict( model, newdata = data_grid, type = "link", se.fit = se, re.form = ref, ... ) } # did user request standard errors? if yes, compute CI if (se) { predicted_data$predicted <- linv(prdat$fit) # add random effect uncertainty to s.e. 
if (type %in% c("re", "re.zi")) { pvar <- prdat$se.fit^2 prdat$se.fit <- sqrt(pvar + .get_residual_variance(model)) } # calculate CI predicted_data$conf.low <- linv(prdat$fit - stats::qnorm(ci) * prdat$se.fit) predicted_data$conf.high <- linv(prdat$fit + stats::qnorm(ci) * prdat$se.fit) predicted_data$std.error <- prdat$se.fit } else { # copy predictions predicted_data$predicted <- linv(as.vector(prdat)) # no CI predicted_data$conf.low <- NA predicted_data$conf.high <- NA } } if (.obj_has_name(predicted_data, "std.error")) { # copy standard errors attr(predicted_data, "std.error") <- predicted_data$std.error predicted_data <- .remove_column(predicted_data, "std.error") } attr(predicted_data, "prediction.interval") <- type %in% c("re", "re.zi") predicted_data } ggeffects/R/collapse_re_data.R0000644000176200001440000000542314046746430016030 0ustar liggesusers#' @title Collapse raw data by random effect groups #' @name collapse_by_group #' #' @description This function extracts the raw data points (i.e. the data #' that was used to fit the model) and "averages" (i.e. "collapses") the #' response variable over the levels of the grouping factor given in #' \code{collapse.by}. Only works with mixed models. #' #' @param collapse.by Name of the (random effects) grouping factor. Data is #' collapsed by the levels of this factor. #' @param residuals Logical, if \code{TRUE}, partial residuals are collapsed instead #' of raw data by the levels of the grouping factor. #' @inheritParams residualize_over_grid #' #' @return A data frame with raw data points, averaged over the levels of #' the given grouping factor from the random effects. The group level of #' the random effect is saved in the column \code{"random"}.
#' #' @examples #' library(ggeffects) #' if (require("lme4", quietly = TRUE)) { #' data(efc) #' efc$e15relat <- as.factor(efc$e15relat) #' efc$c161sex <- as.factor(efc$c161sex) #' levels(efc$c161sex) <- c("male", "female") #' model <- lmer(neg_c_7 ~ c161sex + (1 | e15relat), data = efc) #' me <- ggpredict(model, terms = "c161sex") #' head(attributes(me)$rawdata) #' collapse_by_group(me, model, "e15relat") #' } #' @export collapse_by_group <- function(grid, model, collapse.by = NULL, residuals = FALSE) { if (!insight::is_mixed_model(model)) { stop("This function only works with mixed effects models.", call. = FALSE) } data <- insight::get_data(model) if (is.null(collapse.by)) { collapse.by <- insight::find_random(model, flatten = TRUE) } if (length(collapse.by) > 1) { collapse.by <- collapse.by[1] warning("More than one random grouping variable found.", "\n Using `", collapse.by, "`.", call. = FALSE) } if (!collapse.by %in% colnames(data)) { stop("Could not find `", collapse.by, "` column.", call. = FALSE) } if (residuals) { rawdata <- residualize_over_grid(grid, model, protect_names = TRUE) y_name <- "predicted" } else { rawdata <- attr(grid, "rawdata", exact = TRUE) y_name <- "response" if (any(sapply(rawdata[-(1:2)], Negate(is.factor))) || attr(grid, "x.is.factor", exact = TRUE) == "0") { warning("Collapsing usually not informative across a continuous variable.", call. = FALSE) } } rawdata$random <- factor(data[[collapse.by]]) agg_data <- stats::aggregate(rawdata[[y_name]], by = rawdata[colnames(rawdata) != y_name], FUN = mean) colnames(agg_data)[ncol(agg_data)] <- y_name colnames(agg_data)[colnames(agg_data) == "group"] <- "group_col" agg_data } ggeffects/R/get_predictions_lrm.R0000644000176200001440000000171114046746430016577 0ustar liggesusersget_predictions_lrm <- function(model, fitfram, ci.lvl, linv, ...) { # does user want standard errors? 
se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- stats::predict( model, newdata = fitfram, type = "lp", se.fit = se, ... ) # copy predictions fitfram$predicted <- stats::plogis(prdat$linear.predictors) # did user request standard errors? if yes, compute CI if (se) { # calculate CI fitfram$conf.low <- stats::plogis(prdat$linear.predictors - stats::qnorm(ci) * prdat$se.fit) fitfram$conf.high <- stats::plogis(prdat$linear.predictors + stats::qnorm(ci) * prdat$se.fit) # copy standard errors attr(fitfram, "std.error") <- prdat$se.fit } else { # No CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/get_predictions_multinom.R0000644000176200001440000000247513577107552017665 0ustar liggesusersget_predictions_multinom <- function(model, fitfram, ci.lvl, linv, value_adjustment, terms, model_class, ...) { # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- stats::predict( model, newdata = fitfram, type = "probs", ... 
) if (is.data.frame(prdat) || is.matrix(prdat)) nc <- 1:ncol(prdat) else nc <- 1 # Matrix to vector tmp <- cbind(as.data.frame(prdat), fitfram) fitfram <- .gather(tmp, names_to = "response.level", values_to = "predicted", colnames(tmp)[nc]) # se.pred <- # .standard_error_predictions( # model = model, # prediction_data = fitfram, # value_adjustment = value_adjustment, # terms = terms, # model_class = model_class # ) # # if (!is.null(se.pred)) { # se.fit <- se.pred$se.fit # fitfram <- se.pred$prediction_data # # CI # fitfram$conf.low <- linv(stats::qlogis(fitfram$predicted) - stats::qnorm(ci) * se.fit) # fitfram$conf.high <- linv(stats::qlogis(fitfram$predicted) + stats::qnorm(ci) * se.fit) # } else { # # No CI # fitfram$conf.low <- NA # fitfram$conf.high <- NA # } # No CI fitfram$conf.low <- NA fitfram$conf.high <- NA fitfram } ggeffects/R/get_predictions_clm.R0000644000176200001440000000274514046746430016570 0ustar liggesusersget_predictions_clm <- function(model, data_grid, ci.lvl, linv, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # prediction, with CI prdat <- stats::predict( model, newdata = data_grid, type = "prob", interval = se, level = ci, ... ) # convert to data frame. prdat <- as.data.frame(prdat) # bind predictions to model frame data_grid <- cbind(prdat, data_grid) # get levels of response lv <- levels(insight::get_response(model)) # for proportional ordinal logistic regression (see ordinal::clm), # we have predicted values for each response category. Hence, # gather columns. Since we also have conf. int. 
for each response # category, we need to gather multiple columns at once if (isTRUE(se)) { # length of each variable block l <- seq_len(ncol(prdat) / 3) colnames(data_grid)[l] <- lv data_grid <- .multiple_gather( data_grid, names_to = "response.level", values_to = c("predicted", "conf.low", "conf.high"), columns = list(l, l + length(l), l + 2 * length(l)) ) } else { data_grid <- .gather(data_grid, names_to = "response.level", values_to = "predicted", colnames(prdat)) # No CI data_grid$conf.low <- NA data_grid$conf.high <- NA } data_grid } ggeffects/R/get_predictions_wbm.R0000644000176200001440000000236014030655632016567 0ustar liggesusersget_predictions_wbm <- function(model, fitfram, ci.lvl, linv, type, terms, condition, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # check whether predictions should be conditioned # on random effects (grouping level) or not. if (type == "fe") ref <- NA else ref <- NULL if (type == "sim") { # simulate predictions fitfram <- .do_simulate(model, terms, ci, ...) } else { pred <- suppressWarnings(stats::predict( model, newdata = fitfram, type = "link", re.form = ref, allow.new.levels = TRUE, use.re.var = type == "re", se.fit = se, ... )) if (se) { fitfram$predicted <- linv(pred$fit) fitfram$conf.low <- linv(pred$fit - stats::qnorm(ci) * pred$se.fit) fitfram$conf.high <- linv(pred$fit + stats::qnorm(ci) * pred$se.fit) # copy standard errors attr(fitfram, "std.error") <- pred$se.fit } else { fitfram$predicted <- linv(as.vector(pred)) fitfram$conf.low <- NA fitfram$conf.high <- NA } } fitfram } ggeffects/R/getter.R0000644000176200001440000001131114036250437014026 0ustar liggesusers#' @title Get titles and labels from data #' @name get_title #' #' @description Get variable and value labels from \code{ggeffects}-objects. 
Functions #' like \code{ggpredict()} or \code{ggeffect()} save #' information on variable names and value labels as additional attributes #' in the returned data frame. This is especially helpful for labelled #' data (see \CRANpkg{sjlabelled}), since these labels can be used to #' set axis labels and titles. #' #' @param x An object of class \code{ggeffects}, as returned by any ggeffects-function; #' for \code{get_complete_df()}, must be a list of \code{ggeffects}-objects. #' @param case Desired target case. Labels will automatically be converted into the #' specified character case. See \code{?sjlabelled::convert_case} for #' more details on this argument. #' #' @return The titles or labels as character string, or \code{NULL}, if variables #' had no labels; \code{get_complete_df()} returns the input list \code{x} #' as a single data frame, where the grouping variable indicates the #' predicted values for each term. #' #' @examples #' if (require("sjmisc", quietly = TRUE) && #' require("ggplot2", quietly = TRUE) && #' require("effects", quietly = TRUE)) { #' data(efc) #' efc$c172code <- to_factor(efc$c172code) #' fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc) #' #' mydf <- ggpredict(fit, terms = c("c12hour", "c161sex", "c172code")) #' #' ggplot(mydf, aes(x = x, y = predicted, colour = group)) + #' stat_smooth(method = "lm") + #' facet_wrap(~facet, ncol = 2) + #' labs( #' x = get_x_title(mydf), #' y = get_y_title(mydf), #' colour = get_legend_title(mydf) #' ) #' #' # adjusted predictions, a list of data frames (one data frame per term) #' eff <- ggeffect(fit) #' eff #' get_complete_df(eff) #' #' # adjusted predictions for education only, and get x-axis-labels #' mydat <- eff[["c172code"]] #' ggplot(mydat, aes(x = x, y = predicted, group = group)) + #' stat_summary(fun = sum, geom = "line") + #' scale_x_discrete(labels = get_x_labels(mydat)) #' } #' @export get_title <- function(x, case = NULL) { if (.is_empty(x)) return(NULL) if (!inherits(x,
"ggeffects")) stop("`x` must be of class `ggeffects`.", call. = FALSE) sjlabelled::convert_case(attr(x, which = "title", exact = TRUE), case) } #' @rdname get_title #' @export get_x_title <- function(x, case = NULL) { if (.is_empty(x)) return(NULL) if (!inherits(x, "ggeffects")) stop("`x` must be of class `ggeffects`.", call. = FALSE) sjlabelled::convert_case(attr(x, which = "x.title", exact = TRUE), case) } #' @rdname get_title #' @export get_y_title <- function(x, case = NULL) { if (.is_empty(x)) return(NULL) if (!inherits(x, "ggeffects")) stop("`x` must be of class `ggeffects`.", call. = FALSE) sjlabelled::convert_case(attr(x, which = "y.title", exact = TRUE), case) } #' @rdname get_title #' @export get_legend_title <- function(x, case = NULL) { if (.is_empty(x)) return(NULL) if (!inherits(x, "ggeffects")) stop("`x` must be of class `ggeffects`.", call. = FALSE) sjlabelled::convert_case(attr(x, which = "legend.title", exact = TRUE), case) } #' @rdname get_title #' @export get_legend_labels <- function(x, case = NULL) { if (.is_empty(x)) return(NULL) if (!inherits(x, "ggeffects")) stop("`x` must be of class `ggeffects`.", call. = FALSE) sjlabelled::convert_case(attr(x, which = "legend.labels", exact = TRUE), case) } #' @rdname get_title #' @export get_x_labels <- function(x, case = NULL) { if (.is_empty(x)) return(NULL) if (!inherits(x, "ggeffects")) stop("`x` must be of class `ggeffects`.", call. = FALSE) labs <- attr(x, which = "x.axis.labels", exact = TRUE) if (!is.numeric(labs)) { sjlabelled::convert_case(attr(x, which = "x.axis.labels", exact = TRUE), case) } else { labs } } #' @rdname get_title #' @export get_complete_df <- function(x, case = NULL) { suppressWarnings(do.call(rbind, lapply(x, function(df) { df$x <- .factor_to_numeric(df$x) df }))) } get_sub_title <- function(x, case = NULL) { if (.is_empty(x)) return(NULL) if (!inherits(x, "ggeffects")) stop("`x` must be of class `ggeffects`.", call. 
= FALSE) st <- attr(x, which = "n.trials", exact = TRUE) panel <- attr(x, which = "panel.title", exact = TRUE) if (!is.null(panel)) sjlabelled::convert_case(panel, case) else if (!is.null(st)) sprintf("(for %s trials)", st) else NULL } ggeffects/R/backports.R0000644000176200001440000000025214004226715014523 0ustar liggesusersisTRUE <- function(x) { is.logical(x) && length(x) == 1L && !is.na(x) && x } isFALSE <- function(x) { is.logical(x) && length(x) == 1L && !is.na(x) && !x } ggeffects/R/utils_get_representative_values.R0000644000176200001440000001046714046746430021251 0ustar liggesusers# return levels, as list # c("age", "edu [1,3]", "sex [2]") would return a list: # $edu [1] 1 3; $sex [1] 2 .get_representative_values <- function(x, model_frame = NULL) { # get variable with suffix terms_with_suffix <- which(as.vector(regexpr(pattern = "([^\\]]*)\\]", text = x, perl = TRUE)) != -1) # is empty? if (.is_empty(terms_with_suffix)) return(NULL) # get variable names. needed later to set as # names attributes at_terms <- .clean_terms(x)[terms_with_suffix] # get levels inside brackets at_levels <- unlist(regmatches(x, gregexpr(pattern = "\\[(.*)\\]", text = x, perl = TRUE))) # remove brackets at_levels <- gsub("(\\[*)(\\]*)", "", at_levels) # see if we have multiple values, split at comma at_levels <- lapply(strsplit(at_levels, ",", fixed = TRUE), trimws) # moderator pattern at_pattern <- c("minmax", "meansd", "zeromax", "quart2", "all", "quart") # now check for ranges at_levels <- mapply(function(x, y) { # Here we may have a range of values. 
we then create the # sequence with all values from this range if (any(grepl(":", x, fixed = TRUE))) { # values at sequence (from to) ------------------------------------------ from_to_by <- s <- unlist(lapply(strsplit(x, ":", fixed = TRUE), trimws)) if (grepl("by", s[2], fixed = TRUE)) { from_to_by[2] <- sub("(.*)(\\s*)by(\\s*)=(.*)", "\\1", x = s[2]) from_to_by[3] <- sub("(.*)(\\s*)by(\\s*)=(.*)", "\\4", x = s[2]) } else if (grepl("by", s[3], fixed = TRUE)) { from_to_by[3] <- sub("by(\\s*)=(.*)", "\\2", x = s[3]) } else { from_to_by[3] <- "1" } from_to_by <- as.numeric(from_to_by) x <- seq(from = from_to_by[1], to = from_to_by[2], by = from_to_by[3]) } else if (length(x) == 1 && grepl("^n(\\s*)=", x)) { # values at pretty range ----------------------------------------------- steps <- as.numeric(trimws(substring(gsub(" ", "", x), first = 3))) x <- pretty_range(model_frame[[y]], n = steps) } else if (length(x) == 1 && grepl("^sample(\\s*)=", x)) { # values at random samples --------------------------------------------- size <- as.numeric(trimws(substring(gsub(" ", "", x), first = 8))) lev <- stats::na.omit(unique(model_frame[[y]])) pos <- sample.int(n = length(lev), size = size, replace = FALSE) x <- lev[pos] if (is.factor(x)) { if (.is_numeric_factor(x)) { x <- .factor_to_numeric(droplevels(x)) } else { x <- as.character(x) } } } else if (length(x) == 1 && grepl("[[:alpha:]]", x)) { # values at function --------------------------------------------- # else, we also may have a character expression. This may # either be the name of a valid function. In this case, we # transform the values for predictions using this function. # Else, it also might be the name of a value label, so no # valid function name. In this case, simply return the label.
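# For illustration, some `terms` specifications handled by this parser
# (the variable name "age" is just an example):
#   "age [30,50,70]"    -> use the fixed values 30, 50 and 70
#   "age [20:70 by=10]" -> seq(20, 70, by = 10)
#   "age [n=5]"         -> pretty range with about 5 values
#   "age [sample=8]"    -> 8 randomly sampled observed values
#   "age [meansd]"      -> mean and mean +/- 1 SD
#   "age [exp]"         -> observed values transformed by exp()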
if (x == "pretty") { # return pretty range x <- pretty_range(model_frame[[y]]) } else if (x %in% at_pattern) { # return at specific values x <- values_at(model_frame[[y]], values = x) } else { # transform values by function funtrans <- try(match.fun(x), silent = TRUE) if (!inherits(funtrans, "try-error") && !is.null(model_frame)) { x <- funtrans(sort(unique(model_frame[[y]]))) # is x a value from vector? } else if (x %in% unique(model_frame[[y]])) { x # return values of a vector } else { x <- tryCatch({ get(x, envir = parent.frame()) }, error = function(e) { NULL }) if (is.null(x)) { x <- tryCatch({ get(x, envir = globalenv()) }, error = function(e) { NULL }) } } } } x }, at_levels, at_terms, SIMPLIFY = FALSE) # check if levels were numeric or not... suppressWarnings( if (!anyNA(unlist(lapply(at_levels, as.numeric)))) { at_levels <- lapply(at_levels, as.numeric) } ) stats::setNames(at_levels, at_terms) } ggeffects/R/themes.R0000644000176200001440000001056614046746430014040 0ustar liggesusers#' @rdname plot #' @export theme_ggeffects <- function(base_size = 11, base_family = "") { if (!requireNamespace("ggplot2", quietly = FALSE)) { stop("Package `ggplot2` needed for this function.", call.
= FALSE) } (ggplot2::theme_minimal(base_size = base_size, base_family = base_family) + ggplot2::theme( axis.line.x = ggplot2::element_line(colour = "grey80"), axis.line.y = ggplot2::element_line(colour = "grey80"), axis.text = ggplot2::element_text(colour = "grey50"), axis.title = ggplot2::element_text(colour = "grey30"), strip.background = ggplot2::element_rect(colour = "grey70", fill = "grey90"), strip.text = ggplot2::element_text(colour = "grey30"), legend.title = ggplot2::element_text(colour = "grey30"), legend.text = ggplot2::element_text(colour = "grey30") )) } ggeffects_colors <- list( `aqua` = c("#BAF5F3", "#46A9BE", "#8B7B88", "#BD7688", "#F2C29E"), `warm` = c("#072835", "#664458", "#C45B46", "#F1B749", "#F8EB85"), `dust` = c("#232126", "#7B5756", "#F7B98B", "#F8F7CF", "#AAAE9D"), `blambus` = c("#E02E1F", "#5D8191", "#BD772D", "#494949", "#F2DD26"), `simply` = c("#CD423F", "#0171D3", "#018F77", "#FCDA3B", "#F5C6AC"), `us` = c("#004D80", "#376C8E", "#37848E", "#9BC2B6", "#B5D2C0"), `reefs` = c("#43a9b6", "#218282", "#dbdcd1", "#44515c", "#517784"), `breakfast club` = c("#b6411a", "#4182dd", "#2d6328", "#eec3d8", "#ecf0c8"), `metro` = c("#d11141", "#00aedb", "#00b159", "#f37735", "#8c8c8c", "#ffc425", "#cccccc"), `viridis` = c("#440154", "#46337E", "#365C8D", "#277F8E", "#1FA187", "#4AC16D", "#9FDA3A", "#FDE725"), `ipsum` = c("#3f2d54", "#75b8d1", "#2d543d", "#d18975", "#8fd175", "#d175b8", "#758bd1", "#d1ab75", "#c9d175"), `quadro` = c("#ff0000", "#1f3c88", "#23a393", "#f79f24", "#625757"), `eight` = c("#003f5c", "#2f4b7c", "#665191", "#a05195", "#d45087", "#f95d6a", "#ff7c43", "#ffa600"), `circus` = c("#C1241E", "#0664C9", "#EBD90A", "#6F130D", "#111A79"), `system` = c("#0F2838", "#F96207", "#0DB0F3", "#04EC04", "#FCC44C"), `hero` = c("#D2292B", "#165E88", "#E0BD1C", "#D57028", "#A5CB39", "#8D8F70"), `flat` = c("#c0392b", "#2980b9", "#16a085", "#f39c12", "#8e44ad", "#7f8c8d", "#d35400"), `social` = c("#b92b27", "#0077B5", "#00b489", "#f57d00", "#410093", 
"#21759b", "#ff3300"), `set1` = c("#E41A1C", "#377EB8", "#4DAF4A", "#984EA3", "#FF7F00", "#FFFF33", "#A65628", "#F781BF", "#999999"), `greyscale` = c("#333333", "#5A5A5A", "#737373", "#878787", "#989898", "#A7A7A7", "#B4B4B4", "#C1C1C1", "#CCCCCC") ) ggeffects_pal <- function(palette = "metro", n = NULL) { pl <- ggeffects_colors[[palette]] if (!is.null(n) && n <= length(pl)) { if (.is_cont_scale(palette)) { pl <- pl[stats::quantile(1:length(pl), probs = seq(0, 1, length.out = n))] } else { pl <- pl[1:n] } } pl } # palettes with a continuous color scale .is_cont_scale <- function(p) { p %in% c("aqua", "dust", "eight", "greyscale", "us", "viridis", "warm") } #' @rdname plot #' @export show_pals <- function() { if (!requireNamespace("ggplot2", quietly = FALSE)) { stop("Package `ggplot2` needed for this function.", call. = FALSE) } longest.pal <- max(sapply(ggeffects_colors, length)) color_pal <- lapply(ggeffects_colors, function(.x) { if (length(.x) == longest.pal) .x else c(.x, rep("#ffffff", times = longest.pal - length(.x))) }) x <- as.data.frame(color_pal) x <- .gather(x[nrow(x):1, , drop = FALSE]) x <- x[order(x$key), , drop = FALSE] x$y <- rep_len(1:longest.pal, nrow(x)) x$cols <- as.factor(1:nrow(x)) x$key <- factor(x$key, levels = rev(unique(x$key))) x$group <- "Other Palettes" x$group[.is_cont_scale(x$key)] <- "Continuous Palettes" x$group[x$key %in% c("breakfast.club", "flat", "metro", "quadro", "set1", "simply", "social")] <- "Red-Blue-Green Palettes" ggplot2::ggplot(x, ggplot2::aes_string(x = "key", fill = "cols")) + ggplot2::geom_bar(width = .7) + ggplot2::scale_fill_manual(values = x$value) + ggplot2::scale_y_continuous(breaks = NULL, labels = NULL) + ggplot2::guides(fill = "none") + ggplot2::coord_flip() + ggplot2::theme_minimal() + ggplot2::labs(x = NULL, y = NULL) + ggplot2::facet_wrap(~group, ncol = 1, scales = "free") } ggeffects/R/get_predictions_gam2.R0000644000176200001440000000262214036250437016631 0ustar liggesusersget_predictions_Gam <-
function(model, fitfram, ci.lvl, linv, value_adjustment, terms, model_class, condition, ...) { se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- stats::predict( model, newdata = fitfram, type = "link", ## TODO currently not supported se.fit = FALSE ) # copy predictions fitfram$predicted <- linv(as.vector(prdat)) # did user request standard errors? if yes, compute CI if (se) { se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, terms = terms, model_class = model_class, condition = condition ) if (.check_returned_se(se.pred)) { se.fit <- se.pred$se.fit fitfram <- se.pred$prediction_data # calculate CI fitfram$conf.low <- linv(as.vector(prdat) - stats::qnorm(ci) * se.fit) fitfram$conf.high <- linv(as.vector(prdat) + stats::qnorm(ci) * se.fit) # copy standard errors attr(fitfram, "std.error") <- se.fit } else { # no CI fitfram$conf.low <- NA fitfram$conf.high <- NA } } else { # no CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/get_predictions_zeroinfl.R0000644000176200001440000001202614046746430017636 0ustar liggesusersget_predictions_zeroinfl <- function(model, data_grid, ci.lvl, linv, type, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval = NULL, ...) { # get prediction type. 
pt <- if (model_class == "zeroinfl" && type == "fe") "count" else if (model_class == "zeroinfl" && type == "fe.zi") "response" else if (model_class == "zeroinfl" && type == "zi.prob") "zero" else if (model_class == "zerotrunc" && type == "fe") "count" else if (model_class == "zerotrunc" && type == "fe.zi") "response" else if (model_class == "zerotrunc" && type == "zi.prob") "zero" else if (model_class == "hurdle" && type == "fe") "count" else if (model_class == "hurdle" && type == "fe.zi") "response" else if (model_class == "hurdle" && type == "zi.prob") "zero" else "response" # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # copy object predicted_data <- data_grid add.args <- lapply(match.call(expand.dots = FALSE)$`...`, function(x) x) if ("nsim" %in% names(add.args)) nsim <- eval(add.args[["nsim"]]) else nsim <- 1000 # get predictions prdat <- stats::predict( model, newdata = data_grid, type = pt, ... ) if (type == "zi.prob") { linv <- stats::plogis # need back-transformation predicted_data$predicted <- stats::qlogis(as.vector(prdat)) } else { # need back-transformation predicted_data$predicted <- log(as.vector(prdat)) } if (type == "fe.zi") { model_frame <- insight::get_data(model) clean_terms <- .clean_terms(terms) newdata <- .data_grid( model, model_frame, terms, value_adjustment = value_adjustment, factor_adjustment = FALSE, show_pretty_message = FALSE, condition = condition ) # Since the zero inflation and the conditional model are working in "opposite # directions", confidence intervals can not be derived directly from the # "predict()"-function. Thus, confidence intervals for type = "fe.zi" are # based on quantiles of simulated draws from a multivariate normal distribution # (see also _Brooks et al. 2017, pp.391-392_ for details). 
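# Rough sketch of the simulation-based CIs (assumed names, hedged; the
# actual implementation lives in ".simulate_zi_predictions()"): coefficients
# are drawn from a multivariate normal around the estimates, predictions are
# computed for each draw, and the CI bounds are quantiles across draws, e.g.
#   beta_sims <- MASS::mvrnorm(nsim, mu = stats::coef(model), Sigma = stats::vcov(model))
#   conf.low  <- apply(sims, 1, stats::quantile, probs = 1 - ci)
#   conf.high <- apply(sims, 1, stats::quantile, probs = ci)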
prdat.sim <- .simulate_zi_predictions(model, newdata, nsim, terms, value_adjustment, condition) if (is.null(prdat.sim) || inherits(prdat.sim, c("error", "simpleError"))) { insight::print_color("Error: Confidence intervals could not be computed.\n", "red") cat("Possibly a polynomial term is held constant (and does not appear in the `terms`-argument). Or try reducing number of simulation, using argument `nsim` (e.g. `nsim = 100`).\n") predicted_data$predicted <- as.vector(prdat) predicted_data$conf.low <- NA predicted_data$conf.high <- NA } else { # we need two data grids here: one for all combination of levels from the # model predictors ("newdata"), and one with the current combinations only # for the terms in question ("data_grid"). "sims" has always the same # number of rows as "newdata", but "data_grid" might be shorter. So we # merge "data_grid" and "newdata", add mean and quantiles from "sims" # as new variables, and then later only keep the original observations # from "data_grid" - by this, we avoid unequal row-lengths. 
      sims <- exp(prdat.sim$cond) * (1 - stats::plogis(prdat.sim$zi))
      predicted_data <- .join_simulations(data_grid, newdata, as.vector(prdat), sims, ci, clean_terms)

      if (.obj_has_name(predicted_data, "std.error")) {
        # copy standard errors
        attr(predicted_data, "std.error") <- predicted_data$std.error
        predicted_data <- .remove_column(predicted_data, "std.error")
      }
    }
  } else {
    # get standard errors from variance-covariance matrix
    se.pred <- .standard_error_predictions(
      model = model,
      prediction_data = predicted_data,
      value_adjustment = value_adjustment,
      type = type,
      terms = terms,
      model_class = model_class,
      vcov.fun = vcov.fun,
      vcov.type = vcov.type,
      vcov.args = vcov.args,
      condition = condition,
      interval = interval
    )

    if (.check_returned_se(se.pred)) {
      se.fit <- se.pred$se.fit
      predicted_data <- se.pred$prediction_data

      # CI
      predicted_data$conf.low <- linv(predicted_data$predicted - stats::qnorm(ci) * se.fit)
      predicted_data$conf.high <- linv(predicted_data$predicted + stats::qnorm(ci) * se.fit)

      # copy standard errors and attributes
      attr(predicted_data, "std.error") <- se.fit
      attr(predicted_data, "prediction.interval") <- attr(se.pred, "prediction_interval")
    } else {
      # CI
      predicted_data$conf.low <- NA
      predicted_data$conf.high <- NA
    }

    predicted_data$predicted <- linv(predicted_data$predicted)
  }

  predicted_data
}
ggeffects/NEWS.md0000644000176200001440000003326314100457753013320 0ustar liggesusers# ggeffects 1.1.1

## Changes to functions

* Added a more informative error message for *brmsfit* models using `mo()` with numeric predictors, which only allow predictions for values that are actually present in the data.

## Bug fixes

* Fixed issue with adding raw data points for plots from logistic regression models, when the response variable was not a factor with numeric levels.
* Fixed issues with CRAN checks.
# ggeffects 1.1.0

## New supported models

* `orm` (package **rms**)

## Breaking Changes

* Prediction intervals (where possible, or when `type = "random"`) are now always based on sigma^2 (i.e. `insight::get_sigma(model)^2`). This is in line with `interval = "prediction"` for *lm*, or for predictions based on simulations (when `type = "simulate"`).
* `print()` now uses the name of the focal variable as column name (instead of `"x"`).

## New function

* `collapse_by_group()`, to generate a data frame where the response value of the raw data is averaged over the levels of a (random effect) grouping factor.

## General

* A new vignette was added on the definition and meaning of "marginal effects" and "adjusted predictions". To be more precise, and to avoid confusion with the term "marginal effect", whose meaning varies across fields, "marginal effects" was either replaced by "adjusted predictions", or "adjusted predictions" was added as a term throughout the package's documentation and vignettes.
* Confidence intervals are now allowed when predictions are conditioned on random effect groups (i.e. when `type = "random"` and `terms` includes a random effect group factor).
* Predicted response values based on `simulate()` (i.e. when `type = "simulate"`) are now possible for more model classes (see `?ggpredict`).
* `ggpredict()` now computes confidence intervals for some edge cases where it previously failed (e.g. some models that do not compute standard errors for predictions, or where a factor was included in the model but was not the focal term).
* `plot()` gains a `collapse.group` argument, which, in conjunction with `add.data`, averages ("collapses") the raw data by the levels of the group factors (random effects).
* `data_grid()` was added as a more common alias for `new_data()`.

## Bug fixes

* `ggpredict()` and `plot()` for survival-models now always start with time = 1.
* Fixed issue in `print()` for survival-models.
* Fixed issue with `type = "simulate"` for `glmmTMB` models.
* Fixed issue with `gamlss` models that had a `random()` function in the model formula.
* Fixed issue with incorrect back-transformation of predictions for `geeglm` models.

# ggeffects 1.0.2

## Breaking changes

* The `residuals.type` argument in `plot()` is deprecated; `"working"` residuals are now always used.

## General

* `pretty_range()` and `values_at()` can now also be used as function factories.
* `plot()` gains a `limit.range` argument, to limit the range of the prediction bands to the range of the data.

## Bug fixes

* Fixed issue with unnecessary back-transformation of log-transformed offset-terms from *glmmTMB* models.
* Fixed issues with plotting raw data when the predictor on the x-axis was a character vector.
* Fixed issues from CRAN checks.

# ggeffects 1.0.1

## General

* Fixed CRAN check issues.
* Added argument `interval` to `ggemmeans()`, to compute either confidence or prediction intervals.

# ggeffects 1.0.0

## New supported models

* `averaging` (package **MuMIn**)

## New functions

* `pool_predictions()`, to pool multiple `ggeffects` objects. This can be used when predicted values or estimated marginal means are calculated for models fit to multiple imputed datasets.

## General

* The function `residualize_over_grid()` is now exported.
* The back-transformation of the response variable (if it was log- or square root-transformed in the model) now also works with square root-transformations and correctly handles `log1p()` and `log(mu + x)`.
* Since standard errors were on the link-scale and not back-transformed for non-Gaussian models, these are now no longer printed (to avoid confusion between standard errors on the link-scale and predictions and confidence intervals on the response-scale).

## Bug fixes

* Fixed issue for mixed models when predictions should be conditioned on random effects variances (e.g.
`type = "random"` or `"zi_random"`), but random effects variances could not be calculated or were almost zero.
* Fixed issue with confidence intervals for `multinom` models in `ggemmeans()`.
* Fixed issue in `ggemmeans()` for models from *nlme*.
* Fixed issue with `plot()` for some models in `ggeffect()`.
* Fixed issue with computation of confidence intervals for zero-inflated models with offset-term.

# ggeffects 0.16.0

## Breaking changes

* Since version 0.9.5, package _insight_ returns the "raw" (untransformed, i.e. original) data that was used to fit the model also for log-transformed variables. Thus, exponentiation like `terms = "predictor [exp]"` is no longer necessary.

## New supported models

* `mlogit` (package **mlogit**)

## General

* `plot()` can now also create partial residual plots. The arguments `residuals`, `residuals.type` and `residuals.line` were added to add partial residuals, set the type of residuals, and add a loess-fit regression line for the residual data.

## Bug fixes

* The message for models with a back-transformation to the response scale (all non-Gaussian models), stating that standard errors are still on the link-scale, had not shown up for models of class `glm` for some time. This is now fixed.
* Fixed issue with `ggpredict()` and `rlmerMods` models when using factors as adjusted terms.
* Fixed issue with brms-multi-response models.

# ggeffects 0.15.1

## New supported models

* `mclogit` (package **mclogit**)

## Bug fixes

* Fixed issues due to the latest *rstanarm* update.
* Fixed some issues around categorical/cumulative *brms* models when the outcome is numeric.
* Fixed bug with factor level ordering when plotting raw data from `ggeffect()`.

# ggeffects 0.15.0

## Changes to functions

* `ggpredict()` gets a new `type`-option, `"zi.prob"`, to predict the zero-inflation probability (for models from *pscl*, *glmmTMB* and *GLMMadaptive*).
* When the model has a log-transformed response variable and `add.data = TRUE` in `plot()`, the raw data points are also transformed accordingly.
* `plot()` with `add.data = TRUE` first adds the layer with raw data, then the points / lines for the marginal effects, so that raw data points do not overlay the predicted values.
* The `terms`-argument now also accepts the name of a variable to define specific values. See vignette _Marginal Effects at Specific Values_.

## Bug fixes

* Fixed issues in cluster-robust variance-covariance estimation when `vcov.type` was not specified.

# ggeffects 0.14.3

## General

* Fixed issues due to changes in other CRAN packages.

# ggeffects 0.14.2

## General

* *ggeffects* now requires _glmmTMB_ version 1.0.0 or higher.
* Added human-readable alias-options to the `type`-argument.

## Bug fixes

* Fixed issue when log-transformed predictors were held constant and their typical value was negative.
* Fixed issue when plotting raw data to a plot with a categorical predictor on the x-axis which had numeric factor levels that did not start at `1`.
* Fixed issues for model objects that used (log) transformed `offset()` terms.

# ggeffects 0.14.1

## General

* Reduced package dependencies.
* New package-vignette _(Cluster) Robust Standard Errors_.

## New supported models

* `mixor` (package **mixor**), `cgam`, `cgamm` (package **cgam**)

## Bug fixes

* Fixed CRAN check issues due to the latest *emmeans* update.

# ggeffects 0.14.0

## Breaking Changes

* The argument `x.as.factor` was considered less useful and was removed.

## New supported models

* `fixest` (package **fixest**), `glmx` (package **glmx**).

## General

* Reduced package dependencies.
* `plot(rawdata = TRUE)` now also works for objects from `ggemmeans()`.
* `ggpredict()` now computes confidence intervals for predictions from `geeglm` models.
* For *brmsfit* models with `trials()` as response variable, `ggpredict()` used to choose the median value of trials where the response was held constant.
Now, you can use the `condition`-argument to hold the number of trials constant at different values.
* Improved `print()`.

## Bug fixes

* Fixed issue with `clmm`-models when the group factor in the random effects was numeric.
* Raw data is no longer omitted in plots when the grouping variable is continuous and the added raw data doesn't numerically match the grouping levels (e.g., mean +/- one standard deviation).
* Fixed CRAN check issues due to the latest *geepack* update.

# ggeffects 0.13.0

## Breaking Changes

* The use of `emm()` is discouraged, and so it was removed.

## New supported models

* `bracl`, `brmultinom` (package **brglm2**) and models from packages **bamlss** and **R2BayesX**.

## General

* Updated package dependencies.
* `plot()` now uses dodge-position for raw data for a categorical x-axis, to align raw data points with the point and error-bar geoms from predictions.
* Updated and re-arranged the internal color palette, especially for a better behaviour when selecting colors from continuous palettes (see `show_pals()`).

## New functions

* Added a `vcov()` function to calculate the variance-covariance matrix for marginal effects.

## Changes to Functions

* `ggemmeans()` now also accepts `type = "re"` and `type = "re.zi"`, to add random effects variances to prediction intervals for mixed models.
* The ellipsis-argument `...` is now passed down to the `predict()`-method for *gamlss*-objects, so predictions can be computed for sigma, nu and tau as well.

## Bug fixes

* Fixed issue with wrong order of the plot x-axis for `ggeffect()` when one term was a character vector.

# ggeffects 0.12.0

## Breaking Changes

* The use of `ggaverage()` is discouraged, and so it was removed.
* The name `rprs_values()` is now deprecated; the function is named `values_at()`, and its alias is `representative_values()`.
* The `x.as.factor`-argument defaults to `TRUE`.

## General

* `ggpredict()` now supports cumulative link and ordinal *vglm* models from package **VGAM**.
* More informative error message for *clmm*-models when `terms` included random effects.
* `add.data` is an alias for the `rawdata`-argument in `plot()`.
* `ggpredict()` and `ggemmeans()` now also support predictions for *gam* models from the `ziplss` family.

## Changes to Functions

* Improved `print()`-method for ordinal or cumulative link models.
* The `plot()`-method no longer changes the order of factor levels for groups and facets.
* `pretty_range()` gets a `length`-argument to define the length of intervals to be returned.

## Bug fixes

* Added "population level" to the output from the print-method for *lme* objects.
* Fixed issue with correct identification of gamm/gamm4 models.
* Fixed issue with weighted regression models from *brms*.
* Fixed broken tests due to changes in the forthcoming *effects* update.

# ggeffects 0.11.0

## General

* Revised docs and vignettes: the use of the term _average marginal effects_ was replaced by a less misleading wording, since the functions of **ggeffects** calculate marginal effects at the mean or at representative values, but not average marginal effects.
* Replaced references to internal vignettes in docstrings with website-vignettes, so links on the website are no longer broken.
* `values_at()` is an alias for `rprs_values()`.

## New supported models

* `betabin`, `negbin` (package **aod**), `wbm` (package **panelr**)

## Changes to functions

* `ggpredict()` now supports prediction intervals for models from *MCMCglmm*.
* `ggpredict()` gets a `back.transform`-argument, to transform predicted values from log-transformed responses back to their original scale (the default behaviour), or to allow predictions to remain on the log-scale (new).
* `ggpredict()` and `ggemmeans()` can now calculate marginal effects for specific values from up to three terms (i.e. `terms` can be of length four now).
* The `ci.style`-argument from `plot()` now also applies to error bars for categorical variables on the x-axis.
## Bug fixes

* Fixed issue with *glmmTMB* models that included model weights.

# ggeffects 0.10.0

## General

* Better support, including confidence intervals, for some of the already supported model types.
* New package-vignette _Logistic Mixed Effects Model with Interaction Term_.

## New supported models

* `gamlss`, `geeglm` (package **geepack**), `lmrob` and `glmrob` (package **robustbase**), `ols` (package **rms**), `rlmer` (package **robustlmm**), `rq` and `rqss` (package **quantreg**), `tobit` (package **AER**), `survreg` (package **survival**)

## Changes to functions

* The steps for specifying a range of values (e.g. `terms = "predictor [1:10]"`) can now be changed with `by`, e.g. `terms = "predictor [1:10 by=.5]"` (see also vignette _Marginal Effects at Specific Values_).
* Robust standard errors for predictions (see argument `vcov.fun` in `ggpredict()`) now also work for the following model-objects: `coxph`, `plm`, `polr` (and probably also `lme` and `gls`, not tested yet).
* `ggpredict()` gets an `interval`-argument, to compute prediction intervals instead of confidence intervals.
* `plot.ggeffects()` now allows different horizontal and vertical jittering for `rawdata` when `jitter` is a numeric vector of length two.

## Bug fixes

* Models with `AsIs`-conversion from division of two variables as dependent variable, e.g. `I(amount/frequency)`, should now work.
* `ggpredict()` failed for `MixMod`-objects when `ci.lvl=NA`.
ggeffects/MD50000644000176200001440000002404514100527662012527 0ustar liggesusersdb2569cc0e52447975878365ace5f327 *DESCRIPTION 0d4dc995cc84cfb87c09fcf2eca52a69 *NAMESPACE 151eb206dcd938535590935727e6a0dd *NEWS.md 6a9c1d3a9386da5be8ba910d9aea2ba3 *R/backports.R cbe514b527db007a92e1581ce750ed26 *R/collapse_re_data.R 263e9ce35eb3354ce0cbd7d55c158a3d *R/efc.R 4b100574242ffe7d2dc2c3c65c1e7c0d *R/emmeans_prediction_data.R 343219993049973e8608430ec3d271a4 *R/fish.R fb46ce18b2efe9dc50df1c1a2552c701 *R/get_predictions_MCMCglmm.R 6cf8dffe5151e6f87e31ce9ab12856af *R/get_predictions_MixMod.R a530ad66007409d30269da88ac455e77 *R/get_predictions_bamlss.R 23452a305005b962095b3cd40b7bd069 *R/get_predictions_bayesx.R ecedd3a7ec4b0444abcdc7cfa683b4c8 *R/get_predictions_cgam.R 83f2433a7a8d202f1a71f131616462e0 *R/get_predictions_clm.R 3edf9d5671a2cf8d48d948c46cae7b0e *R/get_predictions_clm2.R 2f1c5191b43e24838c281983bbcd921e *R/get_predictions_clmm.R 2e25ab4ad91e8431afdf0a77a4cc3b2f *R/get_predictions_coxph.R debac92e7d8a3e1ab4b4ef97c9d8e828 *R/get_predictions_gam.R e35553a633fed6ae1e1e65960e43fbad *R/get_predictions_gam2.R fc2ff323a3ea577f9857c303c4a34f31 *R/get_predictions_gamlss.R 554437c5d711b8a5feb824e7b47eb98d *R/get_predictions_gee.R a0642ec3b22f7f8e0cdf59dbf70a36f2 *R/get_predictions_geeglm.R 7f7b8e9bb1e25c3843794f241ffc8569 *R/get_predictions_generic.R afd888c20211768fb7ae2173e5deb5dd *R/get_predictions_generic2.R 0f66e384a630cf7eb0ff1f9c2daf079b *R/get_predictions_glimML.R 9fd5dd81ab34e770323036752c9f3be1 *R/get_predictions_glm.R c322e50e0f03893664305ebe9219dadd *R/get_predictions_glmRob.R e23e1054c00c147d3402d96985841060 *R/get_predictions_glmmTMB.R 144de7af81df7e6ebfc5b3f30623c860 *R/get_predictions_glmrob_base.R 3db14a5b8c387c6ebd72c0045d6c3c60 *R/get_predictions_lm.R 1fb44d1f6d9220c0aaefa7a10c578841 *R/get_predictions_lme.R fd46272aefefde94634cc0c5e17dc630 *R/get_predictions_lmrob_base.R 1c83537fd5549b07baa4c045e8832abe *R/get_predictions_logistf.R 
21fc2e17c566b7c28922bcb81ef98da9 *R/get_predictions_lrm.R 4d15019652a5347d5fc259c0bd1f2d1f *R/get_predictions_mclogit.R b396b873b78e1fdeb5fc8b5932c496ea *R/get_predictions_merMod.R 903c9df85a671fd0bd0c9064b8461fb4 *R/get_predictions_mixor.R 2f0c9cd6e247ca8b557a53af9a019757 *R/get_predictions_mlogit.R f1e463b6d4c7510bea4483306ece1b27 *R/get_predictions_multinom.R 6b28683a2e21987ea2067ebf63d16374 *R/get_predictions_ols.R 8a74e9988ba68e0bb24f3a3e51bce8b0 *R/get_predictions_polr.R 288e2e8e56fe5bca1327b4e5e62ac0cf *R/get_predictions_rq.R 9ef4dc9d0a5bdc369223b6665fe934c0 *R/get_predictions_stan.R 5c1a908d4dda333c9c19556f25776a4e *R/get_predictions_survival.R f534a7e29ea8a46b57eb246d21128f58 *R/get_predictions_svyglm.R f25bfa6089808d3a83f21388de7e24c2 *R/get_predictions_svyglmnb.R ffb9459276b66491acf0b7e53c7f9127 *R/get_predictions_tobit.R ebce20a36836572d31eab09ad1c66be0 *R/get_predictions_vgam.R 9fd752b1e19fb373fb019b39ebdd6be2 *R/get_predictions_vglm.R dfff1ca0ef481ebfc27459adeec25ec9 *R/get_predictions_wbm.R 524e8eb149475dee6de94ac2678cc957 *R/get_predictions_zelig.R 95ec4fb6a66333f31e8dceea14d1728f *R/get_predictions_zeroinfl.R cbe736c015a8467d22b38e643a863818 *R/getter.R 3a091b85898bdee75fff00bd86de4e39 *R/ggeffect.R 1c9384889007656746dc65c88b41bd8e *R/ggemmeans.R 6163c3e6ebc967fb6ea28bf64dee60e1 *R/ggemmeans_add_confint.R 5c57968411005a4f56955f67dea467a1 *R/ggemmeans_zi_predictions.R 3b120aa21707299296abd57804e674ca *R/ggpredict.R 6d5a420ac1c1d2e23e147a94d1882400 *R/moderator_pattern.R 8f8e852ecb78173bc5d2e1273820371e *R/new_data.R 92be452cfad781bee305dba360f9b303 *R/plot.R 101f1f01e328e8ebc87294328af46b13 *R/pool_predictions.R 5efe88b2a09abb3af25fa322caeac5f4 *R/post_processing_labels.R c00eb15d701b20c85414241cc7aaf46e *R/post_processing_predictions.R 949fddfe5bf9f6cd1c696ab1e5d6a60f *R/predict_zero_inflation.R b3616d6d06d4696b2221862375e45004 *R/predictions.R 44b22282fea8821fb4159f749b08b4ea *R/pretty_range.R ceb71466dcd483daf91a08a2d11e43a0 *R/print.R 
86acdaf8ce4fe26eb315e2b043711f94 *R/residualize_over_grid.R bba83b1b5949180296d8ca050e54c222 *R/simulate_predictions.R bb8958cb903f3c8e80b71c9108dca4be *R/standard_error_predictions.R 4985e6a65fc46ed8bdc28300e7fd78d2 *R/themes.R 7bf0fcfa25bd0352bcadb8e3a155b0cd *R/utils.R 5332f3400f3a47230839c490eb97d7ff *R/utils_check_transformations.R eddf3eadfe4a286be8df9c2771213596 *R/utils_colors.R b866e91ec3521fd757b7b20cb7312fcc *R/utils_get_cleaned_terms.R b683598edfe09d9fd2ce7c3e885a5ae3 *R/utils_get_data_grid.R 21ff2f3d25b8e96fc6fc296ed437c619 *R/utils_get_representative_values.R 61c88dfaa5ed4e86e32bdfd8ec8f0e4a *R/utils_ggpredict.R c30eb865f9f20fda94c9f1b902ee7c0f *R/utils_handle_labels.R f2e9b3f8a5e20282d7e710b1de5d5c19 *R/utils_is_empty.R caf5c7e436e710ac1755d2e723c30d81 *R/utils_model_function.R ce7364c06212dd8cb57038c1bed19215 *R/utils_reshape.R 603ae7e0736031ce4dce87b0e5ede82f *R/utils_select.R dc3f73c401311fe31e7aec0134e397e9 *R/utils_set_attr.R eedd2fbbb21df6e4e11747224a55d061 *R/utils_typical_value.R d11f5c3b7dc6080dd40b60eb2e8097af *R/vcov.R 7518c3ce20cf9f27d81c189793a78b1f *README.md d565c7153db49f45998c8b119e6daeeb *build/partial.rdb b8f885c5584a3814713407abd5f8a69a *build/vignette.rds 031665e59ce51cb7336cfbe3fe470e32 *data/efc.RData 6aa2e9c8919bd029aadbaff3d9b7e246 *data/efc_test.RData fed293a745c7a75939e2f2156aaa9afe *data/fish.RData dda77201f194bbcf8d16dd16d45441b0 *data/lung2.rda 05025415ee1bb80b21c7cfb1bf21a1d2 *inst/CITATION fb19278a2a584617d1c0be07ff83761b *inst/doc/content.R 1b57acd70ccabf88c3a115b4324a459d *inst/doc/content.Rmd 98c838cdb10b70302b90389ccfc03892 *inst/doc/content.html 2b3eed979d5795afaf1b9175995e090b *man/collapse_by_group.Rd dde714f4bf3e128e9ab022934b2d1755 *man/efc.Rd 97399fa9377e36d7c3e4bdf304d9eb32 *man/figures/logo.png 5d27388e9656495c159efe07141e5b6f *man/figures/unnamed-chunk-3-1.png 6ec994a1f7e96361c623680af9b53911 *man/figures/unnamed-chunk-4-1.png 3127d8e554659865bf822f0df2895b1a *man/figures/unnamed-chunk-5-1.png 
dffd9812dc78555d75c2afffa2e5dc2c *man/figures/unnamed-chunk-6-1.png 6e06740400e5615ed97399990f51e341 *man/fish.Rd 0a0091de827accb4147a86e75f340098 *man/get_title.Rd d14eb2ad058c3d4e4d613c8389631dfe *man/ggpredict.Rd c8c28686f3ae5cd23304e1373d294640 *man/lung2.Rd 43790e17a70deab264b9da365dc57e92 *man/new_data.Rd ea6c782d1bbce80ae40c737486c46649 *man/plot.Rd 99358261b98a812821ded2d8f9b15d11 *man/pool_predictions.Rd 123bd5ff6579ff8954e1128f3f651c85 *man/pretty_range.Rd 62fd730fa0756e4ce1131ae408d1a619 *man/residualize_over_grid.Rd 688d0e8569ec51ddf8e853c4096f21eb *man/values_at.Rd 4622771c4ebe8dca3b5533d474c8451c *man/vcov.Rd cbf2e36813dda2a6a01daae85e204ab5 *tests/testthat.R 2c415e23843c0ef07c69da542753f57a *tests/testthat/test-Gam2.R 21a241f423ab514d4f85449d2a3dfde0 *tests/testthat/test-MCMCglmm.R 91203918b5b609a911e08f153c0d758b *tests/testthat/test-MixMod.R 2437c77c6282316dfef584f016d68f4c *tests/testthat/test-backtransform_response.R 1a25d6accf762ee40ea808265ea18fd0 *tests/testthat/test-betareg.R f7811c2987342ad5f9869127d0985059 *tests/testthat/test-brms-categ-cum.R c930374122bd980008bc0f9979351e96 *tests/testthat/test-brms-ppd.R 4eb1681d4591893aa3f208e5ba2e895d *tests/testthat/test-brms-trial.R 0f513b5424a319ba2988c8e660fee9b7 *tests/testthat/test-clean_vars.R 9582642ca2fe1ad73a99718c7ab33c1e *tests/testthat/test-clm.R 596b8eac99cc34aba5e4df8437cb0f43 *tests/testthat/test-clm2.R 2d43c51fe3ad859fdf90405e06fdd566 *tests/testthat/test-clmm.R 53d62a0dc9bd0173c99963a4a81f34fe *tests/testthat/test-condition.R fca5ad4118b62ebc1c35e4e261f34f96 *tests/testthat/test-contrasts.R f1b705cf3c4086dcdbff2f9078a3bfdc *tests/testthat/test-contrasts2.R 16fba988cec80ea4d809501bd1f7e323 *tests/testthat/test-contrasts3.R e8a101e04b5ddeec8523a20d410cea19 *tests/testthat/test-correct_se_sorting.R 27e743f947cfa7fa316ff9cb86fa6d06 *tests/testthat/test-coxph.R 83641b6b7566bdd0324445efff13832c *tests/testthat/test-decimals.R a0ada06ee2c380d27926a12293e689b1 
*tests/testthat/test-extract_values.R d17bbc7703c4de6e9abea449845f1e3e *tests/testthat/test-gamlss.R 998f9a1f6d057f92a06366d78ba50e28 *tests/testthat/test-gamm.R 1a1074952a263c293346af97e86c9601 *tests/testthat/test-gamm4.R 70c9b2d36a528c971459b6e69398ada5 *tests/testthat/test-gee.R 8944c40f92ebb9a435dca34d5f11317d *tests/testthat/test-geeglm.R 542d5ae4199906b83ee3059b5cca5130 *tests/testthat/test-get_titles.R 8637742358196924fa9c2e9253da2eea *tests/testthat/test-glm.R cecfba5256ca095a72ab021890c0c6d0 *tests/testthat/test-glmer.R 44732dcf468d8b3393ace21bd7e1bf34 *tests/testthat/test-glmmTMB.R 7e6e22b4da4e01f5f6667eb857f64580 *tests/testthat/test-glmrob_base.R 37cef1e4433df222991d8eebd11f0f57 *tests/testthat/test-gls.R c9c83b0cf4863abd7181984e7410db64 *tests/testthat/test-ivreg.R 629f8dd8644c3705ca4d8dadc500caa5 *tests/testthat/test-linear-models.R 98c24a68d7945b780ca6d9e4472e534f *tests/testthat/test-lmer.R 4d012db4f3db247a95d2e05d7304f6ea *tests/testthat/test-lmrob_base.R f945665893500a30960e9788a9577d56 *tests/testthat/test-logistf.R 54c9d82bee9c3cbf6b8f6070b0013c20 *tests/testthat/test-lrm.R db82cde0cd08598a2d4247ab17537ec7 *tests/testthat/test-negbin.R cd9f860e8c85b2676ae8c475ceb2f883 *tests/testthat/test-nlme.R c865b46bff5c76bfacd1eee589636594 *tests/testthat/test-offset_zeroinfl.R 0f5890add1a7e549d27fc7dec7006634 *tests/testthat/test-orm.R 898e3bed6e9a9b085ca2fb43420e0378 *tests/testthat/test-plot.R 4f68ca78a0c206d55dd24d5ed0b1a964 *tests/testthat/test-poisson.R 3602bb405147175f743c9c7509d04255 *tests/testthat/test-polr.R adaa78880cf96a5a2364eb54af8cbb44 *tests/testthat/test-poly-zeroinf.R a0ca03bd3cacaf3c9fde2378c29bd10f *tests/testthat/test-print.R b52e0ba428953a149c9e06887e82f803 *tests/testthat/test-rq.R 4ac139c52686f40a7a23777bf858bf51 *tests/testthat/test-rstanarm-ppd.R cbba46175b759ac2791d0c66a5fa89c1 *tests/testthat/test-rstanarm.R 5ef1837ab43f33ceb031ea102c7cc7b8 *tests/testthat/test-survey.R 8727bb956e1afd6577539f55736826af 
*tests/testthat/test-survreg.R e19db3c4d40c5c8de46ef6ae4a740873 *tests/testthat/test-svyglmnb.R 66c6b521d0a5358dca78dea63059378b *tests/testthat/test-tobit.R b06aff3f37325410508adc07b895e9e2 *tests/testthat/test-vgam.R 0e19bb2453bf8cade8f347eb56801103 *tests/testthat/test-vglm.R 4c547278dd211f0756df5e9f55335499 *tests/testthat/test-zeroinfl.R 8453d62e5048c04973bd6c6ca029160f *tests/testthat/test-zi_prob.R 1b57acd70ccabf88c3a115b4324a459d *vignettes/content.Rmd ggeffects/inst/0000755000176200001440000000000014100515121013152 5ustar liggesusersggeffects/inst/doc/0000755000176200001440000000000014100515121013717 5ustar liggesusersggeffects/inst/doc/content.html0000644000176200001440000002260114100515121016260 0ustar liggesusers Documentation of the ggeffects package

Documentation of the ggeffects package
Daniel Lüdecke
2021-07-29

The documentation of the ggeffects package, including many examples, is available online. Here you can find the content of the available documents. Click on a link to visit the related website.

ggeffects/inst/doc/content.R0000644000176200001440000000172514100515121015521 0ustar liggesusers## ----set-options, echo = FALSE------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ knitr::opts_chunk$set( collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 3.5, message = FALSE, warning = FALSE) options(width = 800) ggeffects/inst/doc/content.Rmd0000644000176200001440000000430314100515003016034 0ustar liggesusers--- title: "Documentation of the ggeffects package" author: "Daniel Lüdecke" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Documentation of the ggeffects package} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r set-options, echo = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 3.5, message = FALSE, warning = FALSE) options(width = 800) ``` The documentation of the *ggeffects* package, including many examples, is [available online](https://strengejacke.github.io/ggeffects/). Here you can find the content of the available documents. Click on a link to visit the related website. 
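Before following the links, here is a minimal sketch of the basic workflow that all these articles build on. The chunk is not evaluated here; the model and the variables (from the package's `efc` example data) are chosen purely for illustration.

```{r, eval = FALSE}
library(ggeffects)

# example data shipped with ggeffects
data(efc)

# fit an arbitrary model, e.g. a linear regression
fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex, data = efc)

# adjusted predictions for the focal term "c12hour";
# non-focal terms are held constant (at the mean or reference level)
pr <- ggpredict(fit, terms = "c12hour")

# the result is a tidy data frame that works with ggplot2,
# or can be plotted directly via the plot()-method
plot(pr)
```
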
## General introductions into the *ggeffects* package - [Adjusted Predictions of Regression Models](https://strengejacke.github.io/ggeffects/articles/ggeffects.html) - [Definition of Marginal Effects](https://strengejacke.github.io/ggeffects/articles/introduction_marginal_effects.html) - [Adjusted Predictions at Specific Values](https://strengejacke.github.io/ggeffects/articles/introduction_effectsatvalues.html) - [Adjusted Predictions for Random Effects Models](https://strengejacke.github.io/ggeffects/articles/introduction_randomeffects.html) - [Adding Partial Residuals to Effects Plots](https://strengejacke.github.io/ggeffects/articles/introduction_partial_residuals.html) ## Creating and customizing plots - [Plotting Adjusted Predictions](https://strengejacke.github.io/ggeffects/articles/introduction_plotmethod.html) - [Customize Plot Appearance](https://strengejacke.github.io/ggeffects/articles/introduction_plotcustomize.html) ## Working examples - [Logistic Mixed Effects Model with Interaction Term](https://strengejacke.github.io/ggeffects/articles/practical_logisticmixedmodel.html) - [(Cluster) Robust Standard Errors](https://strengejacke.github.io/ggeffects/articles/practical_robustestimation.html) ## Technical details - [Difference between ggpredict() and ggemmeans()](https://strengejacke.github.io/ggeffects/articles/technical_differencepredictemmeans.html) - [Different output between Stata and ggeffects](https://strengejacke.github.io/ggeffects/articles/technical_stata.html) ggeffects/inst/CITATION0000644000176200001440000000046413451124203014320 0ustar liggesusersbibentry( bibtype = "article", title = "ggeffects: Tidy Data Frames of Marginal Effects from Regression Models.", volume = "3", doi = "10.21105/joss.00772", number = "26", journal = "Journal of Open Source Software", author = person("Daniel", "Lüdecke"), year = "2018", pages = "772" )