ggeffects/NAMESPACE

# Generated by roxygen2: do not edit by hand

S3method(plot,ggalleffects)
S3method(plot,ggeffects)
S3method(print,ggeffects)
S3method(vcov,ggeffects)
export(get_complete_df)
export(get_legend_labels)
export(get_legend_title)
export(get_title)
export(get_x_labels)
export(get_x_title)
export(get_y_title)
export(ggeffect)
export(ggemmeans)
export(ggpredict)
export(new_data)
export(pretty_range)
export(representative_values)
export(show_pals)
export(theme_ggeffects)
export(values_at)
importFrom(MASS,mvrnorm)
importFrom(graphics,plot)
importFrom(insight,clean_names)
importFrom(insight,find_formula)
importFrom(insight,find_parameters)
importFrom(insight,find_predictors)
importFrom(insight,find_random)
importFrom(insight,find_response)
importFrom(insight,find_terms)
importFrom(insight,find_weights)
importFrom(insight,format_ci)
importFrom(insight,format_table)
importFrom(insight,get_data)
importFrom(insight,get_response)
importFrom(insight,get_varcov)
importFrom(insight,get_variance_random)
importFrom(insight,get_weights)
importFrom(insight,is_multivariate)
importFrom(insight,link_function)
importFrom(insight,link_inverse)
importFrom(insight,model_info)
importFrom(insight,n_obs)
importFrom(insight,print_color)
importFrom(sjlabelled,as_label)
importFrom(sjlabelled,as_numeric)
importFrom(sjlabelled,get_label)
importFrom(sjlabelled,get_labels)
importFrom(sjlabelled,set_labels)
importFrom(stats,Gamma)
importFrom(stats,as.formula)
importFrom(stats,binomial)
importFrom(stats,coef)
importFrom(stats,complete.cases)
importFrom(stats,confint)
importFrom(stats,deviance)
importFrom(stats,formula)
importFrom(stats,gaussian)
importFrom(stats,inverse.gaussian)
importFrom(stats,median)
importFrom(stats,model.matrix)
importFrom(stats,na.omit)
importFrom(stats,plogis)
importFrom(stats,poisson)
importFrom(stats,predict)
importFrom(stats,predict.glm)
importFrom(stats,qlogis)
importFrom(stats,qnorm)
importFrom(stats,quantile)
importFrom(stats,quasi)
importFrom(stats,quasibinomial)
importFrom(stats,quasipoisson)
importFrom(stats,reshape)
importFrom(stats,sd)
importFrom(stats,setNames)
importFrom(stats,simulate)
importFrom(stats,terms)
importFrom(stats,vcov)

ggeffects/README.md

# ggeffects - Create Tidy Data Frames of Marginal Effects for ‘ggplot’ from Model Outputs

[![CRAN\_Status\_Badge](http://www.r-pkg.org/badges/version/ggeffects)](https://cran.r-project.org/package=ggeffects) [![DOI](http://joss.theoj.org/papers/10.21105/joss.00772/status.svg)](https://doi.org/10.21105/joss.00772) [![Documentation](https://img.shields.io/badge/documentation-ggeffects-orange.svg?colorB=E91E63)](https://strengejacke.github.io/ggeffects/) [![Build Status](https://travis-ci.org/strengejacke/ggeffects.svg?branch=master)](https://travis-ci.org/strengejacke/ggeffects) [![downloads](http://cranlogs.r-pkg.org/badges/ggeffects)](http://cranlogs.r-pkg.org/) [![total](http://cranlogs.r-pkg.org/badges/grand-total/ggeffects)](http://cranlogs.r-pkg.org/)

Lüdecke D (2018). *ggeffects: Tidy Data Frames of Marginal Effects from Regression Models.* Journal of Open Source Software, 3(26), 772. doi: [10.21105/joss.00772](https://doi.org/10.21105/joss.00772)

## Why marginal effects?

Results of regression models are typically presented as tables that are easy to understand. For more complex models that include interaction or quadratic / spline terms, tables with numbers are less helpful and more difficult to interpret. In such cases, *marginal effects* are far easier to understand. In particular, the visualization of marginal effects makes it possible to intuitively grasp how predictors and outcome are associated, even for complex models.
## Aim of this package

**ggeffects** is a light-weight package that aims at easily calculating marginal effects (or: *estimated marginal means*) at the mean or at representative values ([see definitions here](https://stats.stackexchange.com/tags/marginal-effect/info)) of statistical models, i.e. predictions generated by a model when one holds the non-focal variables constant and varies the focal variable(s). This is achieved by three core ideas that describe the philosophy of the function design:

1) Functions are type-safe and always return a data frame with the same, consistent structure;
2) there is a simple, unique approach to calculate marginal effects and estimated marginal means for many different models;
3) the package supports “labelled data” (Lüdecke 2018), which allows human-readable annotations for graphical outputs.

This means users do not need to worry about any expensive steps after modelling to visualize the results. The returned data frames are ready to use with the **ggplot2**-package; however, there is also a `plot()`-method to easily create publication-ready figures.

## Documentation and Support

Please visit <https://strengejacke.github.io/ggeffects/> for documentation and vignettes. In case you want to file an issue or contribute in another way to the package, please follow [this guide](https://github.com/strengejacke/ggeffects/blob/master/.github/CONTRIBUTING.md). For questions about the functionality, you may either contact me via email or file an issue.

## ggeffects supports many different models and is easy to use

Marginal effects can be calculated for many different models.
Currently supported model-objects are: `bamlss`, `bayesx`, `betabin`, `betareg`, `bglmer`, `blmer`, `bracl`, `brglm`, `brmsfit`, `brmultinom`, `cgam`, `cgamm`, `clm`, `clm2`, `clmm`, `coxph`, `fixest`, `gam` (package **mgcv**), `Gam` (package **gam**), `gamlss`, `gamm`, `gamm4`, `gee`, `geeglm`, `glm`, `glm.nb`, `glmer`, `glmer.nb`, `glmmTMB`, `glmmPQL`, `glmrob`, `glmRob`, `glmx`, `gls`, `hurdle`, `ivreg`, `lm`, `lm_robust`, `lme`, `lmer`, `lmrob`, `lmRob`, `logistf`, `lrm`, `MixMod`, `MCMCglmm`, `mixor`, `multinom`, `negbin`, `nlmer`, `ols`, `plm`, `polr`, `rlm`, `rlmer`, `rq`, `rqss`, `stanreg`, `survreg`, `svyglm`, `svyglm.nb`, `tobit`, `truncreg`, `vgam`, `wbm`, `zeroinfl` and `zerotrunc`.

Support for models varies by function, i.e. although `ggpredict()`, `ggemmeans()` and `ggeffect()` support most models, some models are supported by only one of the three functions. Other models not listed here might work as well, but are currently not tested. Interaction terms, splines and polynomial terms are also supported.

The main functions are `ggpredict()`, `ggemmeans()` and `ggeffect()`. There is a generic `plot()`-method to plot the results using **ggplot2**.

## Examples

The returned data frames always have the same, consistent structure and column names, so it’s easy to create ggplot-plots without the need to re-write the function call. `x` and `predicted` are the values for the x- and y-axis. `conf.low` and `conf.high` could be used as `ymin` and `ymax` aesthetics for ribbons to add confidence bands to the plot. `group` can be used as grouping aesthetic, or for faceting.

`ggpredict()` requires at least one, but not more than four terms specified in the `terms`-argument. Predicted values of the response, along the values of the first term, are calculated, optionally grouped by the other terms specified in `terms`.
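As a minimal sketch of this grouping behaviour (re-using the `efc` example model that also appears in the examples in this README), passing two terms returns predictions along the first term, grouped by the levels of the second:

``` r
library(ggeffects)
library(splines)
data(efc)
fit <- lm(barthtot ~ c12hour + bs(neg_c_7) * c161sex + e42dep, data = efc)

# predictions along c12hour, grouped by c161sex; the second term
# fills the "group" column of the returned data frame
ggpredict(fit, terms = c("c12hour", "c161sex"))
```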
``` r
library(ggeffects)
library(splines)
data(efc)
fit <- lm(barthtot ~ c12hour + bs(neg_c_7) * c161sex + e42dep, data = efc)

ggpredict(fit, terms = "c12hour")
#> 
#> # Predicted values of Total score BARTHEL INDEX
#> # x = average number of hours of care per week
#> 
#>   x | Predicted |   SE |         95% CI
#> ---------------------------------------
#>   4 |     67.89 | 1.06 | [65.81, 69.96]
#>  12 |     67.07 | 1.01 | [65.10, 69.05]
#>  22 |     66.06 | 0.96 | [64.19, 67.94]
#>  36 |     64.64 | 0.92 | [62.84, 66.45]
#>  49 |     63.32 | 0.93 | [61.51, 65.14]
#>  70 |     61.20 | 1.01 | [59.22, 63.17]
#> 100 |     58.15 | 1.25 | [55.71, 60.60]
#> 168 |     51.26 | 2.04 | [47.27, 55.25]
#> 
#> Adjusted for:
#> * neg_c_7 = 11.83
#> * c161sex = 1.76
#> * e42dep = 2.93
```

A possible call to ggplot could look like this:

``` r
library(ggplot2)
mydf <- ggpredict(fit, terms = "c12hour")
ggplot(mydf, aes(x, predicted)) +
  geom_line() +
  geom_ribbon(aes(ymin = conf.low, ymax = conf.high), alpha = .1)
```

![](man/figures/unnamed-chunk-3-1.png)

However, there is also a `plot()`-method. This method uses convenient defaults to easily create the most suitable plot for the marginal effects.

``` r
mydf <- ggpredict(fit, terms = "c12hour")
plot(mydf)
```

![](man/figures/unnamed-chunk-4-1.png)

With three variables, predictions can be grouped and faceted.
``` r
ggpredict(fit, terms = c("neg_c_7", "c161sex", "e42dep"))
#> 
#> # Predicted values of Total score BARTHEL INDEX
#> # x = Negative impact with 7 items
#> 
#> # c161sex = Male
#> # e42dep = [1] independent
#> 
#>  x | Predicted |    SE |          95% CI
#> ----------------------------------------
#>  7 |    102.74 |  3.45 | [95.97, 109.51]
#> 12 |    102.27 |  2.64 | [97.10, 107.44]
#> 17 |     93.79 |  3.49 | [86.96, 100.63]
#> 28 |    164.57 | 35.00 | [95.98, 233.17]
#> 
#> # c161sex = Female
#> # e42dep = [1] independent
#> 
#>  x | Predicted |   SE |           95% CI
#> ----------------------------------------
#>  7 |    109.54 | 2.21 | [105.20, 113.87]
#> 12 |     99.81 | 1.97 | [ 95.94, 103.68]
#> 17 |     94.90 | 2.39 | [ 90.21,  99.60]
#> 28 |     90.26 | 9.42 | [ 71.79, 108.74]
#> 
#> # c161sex = Male
#> # e42dep = [2] slightly dependent
#> 
#>  x | Predicted |    SE |          95% CI
#> ----------------------------------------
#>  7 |     83.73 |  3.27 | [77.32,  90.14]
#> 12 |     83.26 |  2.20 | [78.95,  87.58]
#> 17 |     74.79 |  3.11 | [68.68,  80.89]
#> 28 |    145.57 | 34.98 | [77.00, 214.14]
#> 
#> # c161sex = Female
#> # e42dep = [2] slightly dependent
#> 
#>  x | Predicted |   SE |         95% CI
#> --------------------------------------
#>  7 |     90.53 | 1.95 | [86.71, 94.35]
#> 12 |     80.80 | 1.34 | [78.17, 83.44]
#> 17 |     75.90 | 1.84 | [72.29, 79.51]
#> 28 |     71.26 | 9.28 | [53.07, 89.45]
#> 
#> # c161sex = Male
#> # e42dep = [3] moderately dependent
#> 
#>  x | Predicted |    SE |          95% CI
#> ----------------------------------------
#>  7 |     64.72 |  3.29 | [58.28,  71.16]
#> 12 |     64.26 |  2.02 | [60.30,  68.21]
#> 17 |     55.78 |  2.93 | [50.04,  61.52]
#> 28 |    126.56 | 34.99 | [57.98, 195.14]
#> 
#> # c161sex = Female
#> # e42dep = [3] moderately dependent
#> 
#>  x | Predicted |   SE |         95% CI
#> --------------------------------------
#>  7 |     71.52 | 2.01 | [67.59, 75.45]
#> 12 |     61.79 | 1.02 | [59.79, 63.80]
#> 17 |     56.89 | 1.54 | [53.86, 59.91]
#> 28 |     52.25 | 9.21 | [34.21, 70.29]
#> 
#> # c161sex = Male
#> # e42dep = [4] severely dependent
#> 
#>  x | Predicted |    SE |          95% CI
#> ----------------------------------------
#>  7 |     45.72 |  3.50 | [38.86,  52.57]
#> 12 |     45.25 |  2.15 | [41.03,  49.47]
#> 17 |     36.77 |  2.96 | [30.97,  42.58]
#> 28 |    107.55 | 35.01 | [38.93, 176.18]
#> 
#> # c161sex = Female
#> # e42dep = [4] severely dependent
#> 
#>  x | Predicted |   SE |         95% CI
#> --------------------------------------
#>  7 |     52.51 | 2.36 | [47.88, 57.15]
#> 12 |     42.79 | 1.27 | [40.29, 45.28]
#> 17 |     37.88 | 1.64 | [34.66, 41.10]
#> 28 |     33.24 | 9.20 | [15.21, 51.28]
#> 
#> Adjusted for:
#> * c12hour = 42.10

mydf <- ggpredict(fit, terms = c("neg_c_7", "c161sex", "e42dep"))
ggplot(mydf, aes(x = x, y = predicted, colour = group)) +
  geom_line() +
  facet_wrap(~facet)
```

![](man/figures/unnamed-chunk-5-1.png)

`plot()` works for this case, as well:

``` r
plot(mydf)
```

![](man/figures/unnamed-chunk-6-1.png)

More features are explained in detail in the [package-vignettes](https://strengejacke.github.io/ggeffects/).

## Installation

### Latest development build

To install the latest development snapshot (see latest changes below), type the following commands into the R console:

``` r
library(devtools)
devtools::install_github("strengejacke/ggeffects")
```

### Official, stable release

To install the latest stable release from CRAN, type the following command into the R console:

``` r
install.packages("ggeffects")
```

## Citation

In case you want / have to cite my package, please use `citation('ggeffects')` for citation information:

Lüdecke D (2018). *ggeffects: Tidy Data Frames of Marginal Effects from Regression Models.* Journal of Open Source Software, 3(26), 772. doi: [10.21105/joss.00772](https://doi.org/10.21105/joss.00772)

## References
Lüdecke, Daniel. 2018. “Sjlabelled: Labelled Data Utility Functions,” May.
[binary files omitted: ggeffects/data/efc_test.RData, ggeffects/data/efc.RData, ggeffects/man/figures/logo.png, ggeffects/man/figures/unnamed-chunk-5-1.png, ggeffects/man/figures/unnamed-chunk-4-1.png]
yD}踒%;BP">]I:} Qn:ƈZ2ZuAJ Pɨ->t\ jۇ+I@mRAЖ69'6VT6^5F+ѨfސPhBrWڀvW/ܕ8fmt_ *uFtAE}ОnfQDPhPFr:@%>]6J% z-DEXJT DEWJTNDEVJT⦹D5Eh w&vj(A:T@mh7&mQ@T%}pC=zJ% d:DuEPJT7UD ^JT*}Gh w]@ˈ4hEB(QM @} X`JTiD"!jDb5 Quڎ(AJ P8Dͥw`" DrWrFn>D% Qy (AJPKBE0D+Q@-r`"842f4?(AJPCDPm5BcQB(1`jwQB(1`rQǕ(2QB( Pu6P@D DP=Ԯv*3Pre@[e(ܕ ִ#$^ 24m֕-Ԯv%N*3 Ik86@g3/Rʃq0Wtk64Ds{:t!"z7 vw}L%k@Sv!4Xܹ2 Ѿ̕`&aɍey08@Cz8 j(@u$ 6SJ+jCTub!@tAˮVH}x=ʀp"=uJ!_9gVDtZaPZPWV@e:%fԝO Q8@J+Պ /;HMT<0̀v )%J+&`UJ+%`UW @fUY4պ72e@kd( 2!V6(MAzMӖбԀ5!0(63~]PC3(Lu=0Ew&7eԮ|ъ 0Fht+_7*V7p+_7*V6|{?:xS-Y' \Ej/U:->7(t8ap릖(6U]g^8W[ʃU+L5?I:$sAlQ+V2m6SAź%`eyJ@ S]@OLYyТQ2נ$ .uZ0@+&(63]QhI|$)Q(r>z,.%}D u QEff fʀ%`5rO$7_$%`ur?MnˑJ*Q(0mtóh6UT]Z?Piܛ*fcz[jU4S+O|rצ-IG+]P!Zy$5mOT FD2.M}n h*|3e@AR-Oug (@PAcqL]AP.V+-^NG܃00(U?]Ԛюi&,>)4J>ʃ5dEj `y!)4J˃ue/B^BԽi9 NʃA&K"Խi1 Nʃb@ݛv<5PF48)ƀv  Nʃyc1rO`܃v j_c@q  Nʃ$hpR/ggkP}\{п<ӢtPsk5R`@ݛsզ?KΫ$ 2vش& &,97c:ۊ0M{`ƒ`: h-Az61Mm\[hpR/p" `: *L-]hy8hu0AљZz`: xt4Q)k U>$xt0`ЀTI_$3r-]' :Nx )kB^ts>b@}BhpQi!: `@љipT Q0L],905GT' (:Sp9KJLMGzEЙ»XV:  שk@IԍzGz0LtڷIԙ+ AT'}?IU@~^jFDȕP+`)Uޠۋ$EgiԀ: `@љvej@Q0LzTQ0L=zQPPt^\څN (:SO^mGT'T/W?!*ԟ@ EgM# EgnT'TgW_ *Ի/@+ʀ3 :2LCzN (:0!*47@3DuBe@љsBT'Ti@WJDuBe@љuhN (:Թ+P]r%jWU\ ϕm!WBte@EȕP]&r%+Z/BB5qe@kEȕPf h ԕ+P]В •-+P\PBte@I ڕ%J(T{W+P!\7LډNLnP]QhC`#{;]ƀ77';;j!+@}2وoGo\ȋWtzE,H: E*hԃ6+P]砾MiT_?⽚Rq G7=O;$T?5~t&d,n@xE30G+BXc@ĝLl7;xE0G+K^;TxR;sPG=rZ90_g%/v+Y=l}bc&x ΃O,B-Z ( Pj1,b@Yŀw;_aÎN?4b@{Onmش1ΖN7荄SP:ZǫgϻWWFGoCw|r7\K5/}H-twycvLwP᫿P>y3ẵOMgʟ 3c0wƌl=n磕[Ϣb5^>OkIDAT!%/b゛P0AKath]YQ,KC;㛻B~5^˶)C ){e&:eC˜tOnҖׂ\0SX\ 37faVvTYb*!ĮgiLdģ;p82C_܉쎥V:) kC{UIb?餶ιgx[qva!|_]qyÇͶY(iܺr79? rjd]Zm hM7! 
4r|u!瞳?lV΂ lY'ˠٕ9t| M˻ɿ&GtFs'S' ]~-wf5`@E"Zhk,37X7HJVRĺ+%@P"I./ڲ"IKC eINw6ɺ6>p*N܋qQ|ftP8qw=c-IyQN/kwvP!& 5a<)sHXijFr@.P1jʹBBb@Ib (H ( Pj1,b@YŀPe r 6QIENDB`ggeffects/man/figures/unnamed-chunk-6-1.png0000644000176200001440000002636213576454461020222 0ustar liggesusersPNG  IHDRMR/vPLTE:f:::f:f6s7~::::::::f:ff:f:f:::MMMMMiMMnMMMMMiMiMnMnMqMMMMff:f:ffffffffiMMiMiiMiiiiiinMMnMnnMnnnnnnnnփMMMiMiMiiiMiMMMnMnMnnnnȎ:ff:ff:ې۶iMiiiMiζΝnMnnnMnȫȫff:Mi:۶Νې۶۶ȎMȎnȎȫnȎ)3Νi֫ې:ېf۶f۶۶n䫎䫫嶃ΝfȎ֫ې۶D* pHYsod IDATx흏]Ǐ1Q! b" TlVk_Hh mmMZZ&`Z5HHRZ+1i͏OΤ=.93;;ggyv|?/wfg;?v٩D XS@" J Z()hEJ Z()hcSBWoJ}Wȅ_xLe>F_gycHOK:͓nSO]dxcuE)5+:6u]%Og/K+hh4@佩 ~7T&9t.F@~R:aG4NO{[R(0wZMM)]PDo~%-ºLgǯKU4R%Y Py4`4kfZd[uSW~=꼚J9ٟ/HWz<盿3~'y S#fg^&ϗ^rΪo]7/ IS@[K+Bºh[ֱOuu;Uz.=,R ]XYe-me@LE\vhf^w.sEY%Kq}d#E5߸WK+-jǧC?z15}<}}xJ"ƪ,@қ)pgK"u׾"ȌS䴋+ӋK-}ΠzN$XEr]`(@:̏󞬒QchG$(ë1oNSJZٙD $Wn -BxsVN/ ϗ_jU+}EW@H|huOf\^})'(>mEVPTq,mN+9E[3ۏ)Of/S$I|8k^)]c/ڙ@)) J4b4E#?.@YݟQϚ| .'TRֽ=mz;se9M+mįD?=W~RA<ʕ>ոV=|fA{YF9v`i\fu7}sTiD-'}28TyN"a:u]`yRioYgIs3I fA7kk|?Nkl&XO-b:Q/߸?)OW1G~z~g5.R9 'MOr|٥yW*tlךcz!~55Q~,E0JHA%-PR"@IA%-PR"@IA%fM.ƥ(F(Ph"@mD&F(Ph"@mD)@O;Z۾(~S םD?4@oKG,ӎ uOVOToBe{OkvhI#ߜ;?A̶[?`3,ʿ$qjl#2TO\t #wץ_)uAKܴӞzS`~bg^ɑlhʚ3w0 c5<;QW64<2ƝTV=\rp̃c u;Zx X(W$iQĉn:QvLEUzUCţ&F(Phj跱[oT -1e,8[DYpJŷuoAf)߂&͂S*MT| P7 N4nR-h"@,8[D,zp-h"@M4ˀ B }:{V#j46̚"(lP C(l!@ Zh "@A PHg PDB"@ F(P@h8"@D EVE$AԀZQhEhHʮ$^2ۣ̀^dM^}"Hǯe=q<2{KGO?ÀoeQt哯+?/X>K9UoD 3%'Ȕ?tuۣz鑇P UK2Ybc$>@S>E3/FE0 e٨A t: n@(Ѐ%@CDEhA䀺Qh& 2P'=M5 @ А-GB 2PG=vX: Ig~]`˪QAj.Q(gl:Q(Gh8"@>Eh=*z@!>7fĹҖ,hOq, ć,Z"@ hum<*r@a>ٍG%@PkCd1ڷQ!0YL|I͘#=ŝEdrL9ESvX6ڏddzd,^~V@hq; >cl=&P޶_~hvX6ݨ@|N$|t끎܎jPU\$C iˣbT^$QcnTZ.-aX ħ"2ԸHfFFM.+֫<*2@B|vP"U\Q5ApYQQ aX t`eG(!g->(䳻A|Ci@t<xT4[[;oǣL܊z&.q>;A5zN-P- џᝥE(O|`G-yViTвwjEþ2  > PüQЪwg a4fP;Nz)!aX(HdI8M,Tbqu&jsE2~V4ZhyLqY #z7"M4< a"pj|aSH,$$xB5 \z~";jo)`tz^+BPs+-Z~F;,}x=t^wKm*hRŹ X1on6]g<%pE2EYZ3d9NX$[<{Nf$oA-͂S,(5VR[8@sb}dc)us~Km2.J.5ˤ:`[jtubЄ]1@mD-Փ%l@ 33|#w N,j=25,BoAS퓤ϊ=Ǻ[pJŷPִ&PV[agZ@Wr}8:s_hG}2CZNIBXG"DP@SA.8"@I]r2(}'dn}bhNwwHKU 36}Nܖd~ˊY %Do6?娲 j/ƥ?~[($TIFk<Մm̃S O-m7Z 
[binary PNG image data omitted]
ggeffects/man/pretty_range.Rd (apparently; the beginning of this file is lost in the binary data above)
100), \code{n} could be something between 1 and 5. If \code{x} has a
rather small number of unique values, \code{n} could be something between
10 and 20. If \code{n = NULL}, \code{pretty_range()} automatically tries
to find a pretty sequence.}

\item{length}{Integer value, as alternative to \code{n}, defines the
number of intervals to be returned.}
}
\value{
A numeric vector with a range corresponding to the minimum and maximum
values of \code{x}.
}
\description{
Creates an evenly spaced, pretty sequence of numbers for a range of a
vector.
}
\examples{
library(sjmisc)
data(efc)
x <- std(efc$c12hour)
x

# pretty range for vectors with decimal points
pretty_range(x)

# pretty range for large range, increasing by 50
pretty_range(1:1000)

# increasing by 20
pretty_range(1:1000, n = 7)

# return 10 intervals
pretty_range(1:1000, length = 10)

# same result
pretty_range(1:1000, n = 2.5)
}
ggeffects/man/plot.Rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/plot.R, R/themes.R
\name{plot}
\alias{plot}
\alias{plot.ggeffects}
\alias{theme_ggeffects}
\alias{show_pals}
\title{Plot ggeffects-objects}
\usage{
\method{plot}{ggeffects}(
  x,
  ci = TRUE,
  ci.style = c("ribbon", "errorbar", "dash", "dot"),
  facets,
  add.data = FALSE,
  colors = "Set1",
  alpha = 0.15,
  dodge = 0.25,
  use.theme = TRUE,
  dot.alpha = 0.35,
  jitter = 0.2,
  log.y = FALSE,
  case = NULL,
  show.legend = TRUE,
  show.title = TRUE,
  show.x.title = TRUE,
  show.y.title = TRUE,
  dot.size = NULL,
  line.size = NULL,
  connect.lines = FALSE,
  grid,
  one.plot = TRUE,
  rawdata,
  ...
)

theme_ggeffects(base_size = 11, base_family = "")

show_pals()
}
\arguments{
\item{x}{An object of class \code{ggeffects}, as returned by the functions
from this package.}

\item{ci}{Logical, if \code{TRUE}, confidence bands (for continuous
variables at the x-axis) resp. error bars (for factors at the x-axis) are
plotted.}

\item{ci.style}{Character vector, indicating the style of the confidence
bands. May be either \code{"ribbon"}, \code{"errorbar"}, \code{"dash"} or
\code{"dot"}, to plot a ribbon, error bars, or dashed or dotted lines as
confidence bands.}

\item{facets, grid}{Logical, defaults to \code{TRUE}, if \code{x} has a
column named \code{facet}, and defaults to \code{FALSE}, if \code{x} has
no such column. Set \code{facets = TRUE} to wrap the plot into facets even
for grouping variables (see 'Examples').
\code{grid} is an alias for \code{facets}.}

\item{add.data, rawdata}{Logical, if \code{TRUE}, a layer with the raw data
(response values plotted against the predictor on the x-axis) is added to
the plot as point-geoms.}

\item{colors}{Character vector with color values in hex-format, valid
color value names (see \code{demo("colors")}) or the name of a
ggeffects-color-palette. The following options are valid for \code{colors}:
\itemize{
  \item If not specified, the color brewer palette "Set1" will be used.
  \item If \code{"gs"}, a greyscale will be used.
  \item If \code{"bw"}, the plot is black/white and uses different line
    types to distinguish groups.
  \item There are some pre-defined color-palettes in this package that can
    be used, e.g. \code{colors = "metro"}. See
    \code{\link[=show_pals]{show_pals()}} to show all available palettes.
  \item Else specify own color values or names as vector
    (e.g. \code{colors = c("#f00000", "#00ff00")}).
}}

\item{alpha}{Alpha value for the confidence bands.}

\item{dodge}{Value for offsetting or shifting error bars, to avoid
overlapping. Only applies if a factor is plotted at the x-axis (in such
cases, the confidence bands are replaced by error bars automatically), or
if \code{ci.style = "errorbars"}.}

\item{use.theme}{Logical, if \code{TRUE}, a slightly tweaked version of
ggplot's minimal-theme, \code{theme_ggeffects()}, is applied to the plot.
If \code{FALSE}, no theme-modifications are applied.}

\item{dot.alpha}{Alpha value for data points, when \code{add.data = TRUE}.}

\item{jitter}{Numeric, between 0 and 1. If not \code{NULL} and
\code{add.data = TRUE}, adds a small amount of random variation to the
location of data points, to avoid overplotting. Hence the points don't
reflect exact values in the data. May also be a numeric vector of length
two, to add different horizontal and vertical jittering.
For binary outcomes, raw data is not jittered by default, to prevent data
points from exceeding the axis limits.}

\item{log.y}{Logical, if \code{TRUE}, the y-axis scale is log-transformed.
This might be useful for binomial models with predicted probabilities on
the y-axis.}

\item{case}{Desired target case. Labels will automatically be converted
into the specified character case. See
\code{\link[sjlabelled]{convert_case}} for more details on this argument.}

\item{show.legend}{Logical, shows or hides the plot legend.}

\item{show.title}{Logical, shows or hides the plot title.}

\item{show.x.title}{Logical, shows or hides the plot title for the x-axis.}

\item{show.y.title}{Logical, shows or hides the plot title for the y-axis.}

\item{dot.size}{Numeric, size of the point geoms.}

\item{line.size}{Numeric, size of the line geoms.}

\item{connect.lines}{Logical, if \code{TRUE} and the plot has point-geoms
with error bars (this is usually the case when the x-axis is discrete),
points of the same group will be connected with a line.}

\item{one.plot}{Logical, if \code{TRUE} and \code{x} has a \code{panel}
column (i.e. when four \code{terms} were used), a single, integrated plot
is produced.}

\item{...}{Further arguments passed down to \code{ggplot::scale_y*()}, to
control the appearance of the y-axis.}

\item{base_size}{Base font size.}

\item{base_family}{Base font family.}
}
\value{
A ggplot2-object.
}
\description{
A generic plot-method for \code{ggeffects}-objects.
}
\details{
For proportional odds logistic regression (see \code{\link[MASS]{polr}})
or cumulative link models in general, plots are automatically facetted by
\code{response.level}, which indicates the grouping of predictions based
on the level of the model's response.
}
\note{
Load \code{library(ggplot2)} and use \code{theme_set(theme_ggeffects())}
to set the \pkg{ggeffects}-theme as default plotting theme.
You can then use further plot-modifiers from \pkg{sjPlot}, like
\code{legend_style()} or \code{font_size()}, without losing the
theme-modifications.
\cr \cr
There are pre-defined colour palettes in this package. Use
\code{show_pals()} to show all available colour palettes.
}
\examples{
library(sjlabelled)
data(efc)
efc$c172code <- as_label(efc$c172code)
fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

dat <- ggpredict(fit, terms = "c12hour")
plot(dat)

\donttest{
# facet by group, use pre-defined color palette
dat <- ggpredict(fit, terms = c("c12hour", "c172code"))
plot(dat, facet = TRUE, colors = "hero")

# don't use facets, b/w figure, w/o confidence bands
dat <- ggpredict(fit, terms = c("c12hour", "c172code"))
plot(dat, colors = "bw", ci = FALSE)

# factor at x axis, plot exact data points and error bars
dat <- ggpredict(fit, terms = c("c172code", "c161sex"))
plot(dat)

# for three variables, automatic facetting
dat <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex"))
plot(dat)}

# show all color palettes
show_pals()
}
ggeffects/DESCRIPTION
Package: ggeffects
Type: Package
Encoding: UTF-8
Title: Create Tidy Data Frames of Marginal Effects for 'ggplot' from Model Outputs
Version: 0.14.1
Authors@R: c(
    person("Daniel", "Lüdecke", role = c("aut", "cre"), email = "d.luedecke@uke.de",
      comment = c(ORCID = "0000-0002-8895-3206")),
    person("Frederik", "Aust", role = "ctb",
      comment = c(ORCID = "0000-0003-4900-788X"))
  )
Maintainer: Daniel Lüdecke
Description: Compute marginal effects from statistical models and return
    the results as tidy data frames. These data frames are ready to use with
    the 'ggplot2'-package. Marginal effects can be calculated for many
    different models. Interaction terms, splines and polynomial terms are
    also supported. The main functions are ggpredict(), ggemmeans() and
    ggeffect(). There is a generic plot()-method to plot the results using
    'ggplot2'.
License: GPL-3
Depends: R (>= 3.2)
Imports: graphics, insight (>= 0.8.0), MASS, sjlabelled (>= 1.1.2), stats
Suggests: AER, aod, betareg, brms, clubSandwich, effects (>= 4.1-2),
    emmeans (>= 1.4.1), gam, gamm4, gee, geepack, ggplot2, GLMMadaptive,
    glmmTMB, httr, knitr, lme4, logistf, magrittr, Matrix, MCMCglmm, mgcv,
    nlme, ordinal, prediction, pscl, quantreg, rmarkdown, rms, robustbase,
    rstanarm, rstantools, sandwich, scales, see, sjstats, sjmisc (>= 2.8.2),
    survey, survival, testthat, VGAM
URL: https://strengejacke.github.io/ggeffects
BugReports: https://github.com/strengejacke/ggeffects/issues
RoxygenNote: 7.0.2
VignetteBuilder: knitr
NeedsCompilation: no
Packaged: 2020-01-28 11:49:06 UTC; mail
Author: Daniel Lüdecke [aut, cre] (), Frederik Aust [ctb] ()
Repository: CRAN
Date/Publication: 2020-01-28 14:30:02 UTC
ggeffects/build/vignette.rds
[binary serialized vignette index omitted]
ggeffects/build/partial.rdb
[binary R lazy-load data omitted]
[binary R lazy-load data omitted — remainder of ggeffects/build/partial.rdb]
ggeffects/tests/testthat/test-clmm.R
.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest) {
  if (require("testthat") && require("ggeffects") && require("ordinal") && require("MASS")) {
    data(wine, package = "ordinal")
    m1 <- clmm(rating ~ temp + contact + (1 | judge), data = wine)

    test_that("ggpredict", {
      p <- ggpredict(m1, "temp")
      expect_equal(p$predicted[1], 0.09760731, tolerance = 1e-3)
      ggpredict(m1, c("temp", "contact"))
    })

    test_that("ggeffect", {
      p <- ggeffect(m1, "temp")
      expect_equal(p$predicted[1], 0.0730260420584538, tolerance = 1e-3)
      ggeffect(m1, c("temp", "contact"))
    })

    test_that("ggemmeans", {
      p <- ggemmeans(m1, "contact")
      expect_equal(p$predicted[1], 0.08691649, tolerance =
1e-5)
      ggemmeans(m1, c("temp", "contact"))
    })
  }
}
ggeffects/tests/testthat/test-Gam2.R
if (require("testthat") && require("ggeffects") && require("gam")) {
  data(kyphosis)
  m1 <- gam::gam(
    Kyphosis ~ s(Age, 4) + Number,
    family = binomial,
    data = kyphosis,
    trace = TRUE
  )

  test_that("ggpredict", {
    p <- ggpredict(m1, "Age")
    expect_equal(p$predicted[1], 0.02099814, tolerance = 1e-3)
    ggpredict(m1, c("Age", "Number"))
  })

  test_that("ggeffect", {
    p <- ggeffect(m1, "Age")
    expect_equal(p$predicted[1], 0.106151, tolerance = 1e-3)
    ggeffect(m1, c("Age", "Number"))
  })

  test_that("ggemmeans", {
    p <- ggemmeans(m1, "Age")
    expect_equal(p$predicted[1], 0.02099814, tolerance = 1e-3)
    ggemmeans(m1, c("Age", "Number"))
  })
}
ggeffects/tests/testthat/test-geeglm.R
if (require("testthat") && require("ggeffects") && require("geepack")) {
  data(dietox)
  m1 <- geeglm(
    Weight ~ Cu * Time + I(Time ^ 2) + I(Time ^ 3),
    data = dietox,
    id = Pig,
    family = poisson("identity"),
    corstr = "ar1"
  )

  test_that("ggpredict", {
    p <- ggpredict(m1, c("Cu", "Time"))
    expect_equal(p$predicted[1], 35.47711, tolerance = 1e-2)
  })

  test_that("ggemmeans", {
    p <- ggemmeans(m1, c("Cu", "Time"))
    expect_equal(p$predicted[1], 35.47711, tolerance = 1e-2)
  })
}
ggeffects/tests/testthat/test-lmer.R
.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest) {
  if (suppressWarnings(
    require("testthat") &&
      require("ggeffects") &&
      require("sjlabelled") &&
      require("lme4") &&
      require("sjmisc")
  )) {
    context("ggeffects, lmer")

    # lmer ----
    data(efc)
    efc$grp = to_label(efc$e15relat)
    fit <- lmer(neg_c_7 ~ c12hour + e42dep + c161sex + c172code + (1|grp), data = efc)

    test_that("ggpredict, lmer", {
      ggpredict(fit, "c12hour")
      ggpredict(fit, c("c12hour", "c161sex"))
      ggpredict(fit, c("c12hour", "c161sex", "c172code"))
      ggpredict(fit,
"c12hour", type = "re")
      ggpredict(fit, c("c12hour", "c161sex"), type = "re")
      ggpredict(fit, c("c12hour", "c161sex", "c172code"), type = "re")
    })

    test_that("ggpredict, lmer", {
      pr <- ggpredict(fit, "c12hour")
      expect_equal(pr$std.error[1:5], c(0.2911, 0.2852, 0.2799, 0.2752, 0.2713), tolerance = 1e-3)
      pr <- ggpredict(fit, c("c12hour", "c161sex", "c172code"), type = "re")
      expect_equal(pr$std.error[1:5], c(0.6807, 0.6465, 0.6718, 0.6452, 0.6137), tolerance = 1e-3)
    })

    test_that("ggpredict, lmer-simulate", {
      ggpredict(fit, "c12hour", type = "sim")
      ggpredict(fit, c("c12hour", "c161sex"), type = "sim")
      ggpredict(fit, c("c12hour", "c161sex", "c172code"), type = "sim")
    })

    test_that("ggeffect, lmer", {
      ggeffect(fit, "c12hour")
      ggeffect(fit, c("c12hour", "c161sex"))
      ggeffect(fit, c("c12hour", "c161sex", "c172code"))
    })

    data(efc)
    efc$cluster <- as.factor(efc$e15relat)
    efc <- std(efc, c160age, e42dep)

    m <- lmer(
      neg_c_7 ~ c160age_z * e42dep_z + c161sex + (1 | cluster),
      data = efc
    )

    test_that("ggeffect, lmer", {
      p1 <- ggpredict(m, terms = c("c160age_z", "e42dep_z [-1.17,2.03]"))
      p2 <- ggemmeans(m, terms = c("c160age_z", "e42dep_z [-1.17,2.03]"))
      expect_equal(p1$predicted[1], p2$predicted[1], tolerance = 1e-3)
    })

    data(efc)
    efc$cluster <- as.factor(efc$e15relat)
    efc <- as_label(efc, e42dep, c172code, c161sex)
    efc$c172code[efc$c172code == "intermediate level of education"] <- NA

    m <- lmer(
      neg_c_7 ~ c172code + e42dep + c161sex + (1 | cluster),
      data = efc
    )

    test_that("ggeffect, lmer", {
      ggpredict(m, terms = "e42dep")
      ggemmeans(m, terms = "e42dep")
    })

    test_that("ggeffect, lmer", {
      p1 <- ggpredict(m, terms = "e42dep")
      p2 <- ggemmeans(m, terms = "e42dep")
      p3 <- ggemmeans(m, terms = "e42dep", condition = c(c161sex = "Male", c172code = "low level of education"))
      expect_equal(p1$predicted[1], 8.902934, tolerance = 1e-3)
      expect_equal(p2$predicted[1], 9.742945, tolerance = 1e-3)
      expect_equal(p1$predicted[1], p3$predicted[1], tolerance = 1e-3)
    })

    m <- lmer(
      log(Reaction) ~ Days + I(Days^2) +
(1 + Days + exp(Days) | Subject),
      data = sleepstudy
    )

    test_that("ggeffect, lmer", {
      p1 <- ggpredict(m, terms = "Days")
      p2 <- ggemmeans(m, terms = "Days")
      p3 <- ggeffect(m, terms = "Days")
      expect_equal(p1$predicted[1], 253.5178, tolerance = 1e-3)
      expect_equal(p2$predicted[1], 253.5178, tolerance = 1e-3)
      expect_equal(p3$predicted[1], 5.535434, tolerance = 1e-3)
    })

    test_that("ggeffect, lmer", {
      ggpredict(m, terms = c("Days", "Subject [sample=5]"), type = "re")
    })
  }
}
ggeffects/tests/testthat/test-MixMod.R
.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest) {
  if (require("testthat") && require("ggeffects") && require("GLMMadaptive")) {
    fish <- read.csv("https://stats.idre.ucla.edu/stat/data/fish.csv")
    fish$nofish <- as.factor(fish$nofish)
    fish$livebait <- as.factor(fish$livebait)
    fish$camper <- as.factor(fish$camper)

    set.seed(123)
    m1 <- GLMMadaptive::mixed_model(
      count ~ child + camper,
      random = ~ 1 | persons,
      zi_fixed = ~ child + livebait,
      zi_random = ~ 1 | persons,
      data = fish,
      family = GLMMadaptive::zi.poisson()
    )

    m2 <- GLMMadaptive::mixed_model(
      nofish ~ xb + zg,
      random = ~ 1 | persons,
      data = fish,
      family = binomial
    )

    test_that("ggpredict", {
      # this test fails on osx, but not on windows
      skip_on_cran()
      skip_on_travis()
      set.seed(123)
      p <- ggpredict(m1, c("child", "camper"), type = "fe.zi")
      expect_equal(p$predicted[1], 2.045537, tolerance = 1e-3)
      set.seed(123)
      p <- ggpredict(m1, c("child", "camper"), type = "re.zi", condition = c(count = 3.296))
      expect_equal(p$predicted[1], 4.982773, tolerance = 1e-3)
      set.seed(123)
      p <- ggpredict(m1, c("child", "camper"), type = "re.zi", condition = c(count = 0))
      expect_equal(p$predicted[1], 0.5115884, tolerance = 1e-3)
    })

    test_that("ggemmeans", {
      set.seed(123)
      p <- ggemmeans(m1, c("child", "camper"), type = "fe.zi")
      expect_equal(p$predicted[1], 1.816723, tolerance = 1e-3)
      set.seed(123)
      p <- ggemmeans(m1, c("child", "camper"), type = "re.zi")
      expect_equal(p$predicted[1], 3.457011, tolerance = 1e-3)
    })

    test_that("ggpredict", {
      expect_message(ggpredict(m1, c("child", "camper"), type = "fe"))
      expect_message(ggpredict(m2, "zg", type = "fe.zi"))
    })
  }
}

# ggeffects/tests/testthat/test-contrasts2.R
if (suppressWarnings(
  require("testthat") &&
  require("ggeffects") &&
  require("lme4") &&
  require("sjlabelled")
)) {
  context("ggeffects, lmer-contrasts")

  data(efc)
  efc$e15relat <- as_label(efc$e15relat)
  efc$e42dep <- as_label(efc$e42dep)
  efc$c172code <- as.factor(efc$c172code)

  m <- lmer(neg_c_7 ~ e42dep + c172code + (1 | e15relat), data = efc)

  test_that("ggpredict, contrasts-1", {
    options(contrasts = rep("contr.sum", 2))
    pr <- ggpredict(m, c("c172code", "e42dep"))
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-2", {
    options(contrasts = rep("contr.sum", 2))
    pr <- ggpredict(m, "c172code")
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-3", {
    options(contrasts = rep("contr.sum", 2))
    pr <- ggpredict(m, "e42dep")
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-4", {
    options(contrasts = rep("contr.treatment", 2))
    pr <- ggpredict(m, c("c172code", "e42dep"))
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-5", {
    options(contrasts = rep("contr.treatment", 2))
    pr <- ggpredict(m, "c172code")
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-6", {
    options(contrasts = rep("contr.treatment", 2))
    pr <- ggpredict(m, "e42dep")
    expect_false(anyNA(pr$std.error))
  })
}

# ggeffects/tests/testthat/test-brms-ppd.R
.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest && Sys.getenv("USER") != "travis") {
  if (suppressWarnings(
    require("testthat") &&
    require("brms") &&
    require("ggeffects")
  )) {
    context("ggeffects, brms-ppd")

    x <- rnorm(10, 0)
    b <- runif(2)
    s <- ifelse(diag(2) == 0, 0.23, 1)
    er <- cbind(rnorm(10, 0, s), rnorm(10, 0, s))
    y <- apply(t(b), 2, `*`, x) + er
    d <- data.frame(y1 = y[, 1], y2 = y[, 2], x)

    m1 <- brm(cbind(y1, y2) ~ 1 + x, data = d, chains = 2, iter = 500)
    m2 <- brm(y1 ~ x, data = d, chains = 2, iter = 500)

    test_that("ggpredict, brms-ppd", {
      ggpredict(m1, ppd = TRUE)
      ggpredict(m1, "x", ppd = TRUE)
      ggpredict(m2, ppd = TRUE)
      ggpredict(m2, "x", ppd = TRUE)
    })

    test_that("ggpredict, brms-ppd", {
      ggpredict(m1, ppd = FALSE)
      ggpredict(m1, "x", ppd = FALSE)
      ggpredict(m2, ppd = FALSE)
      ggpredict(m2, "x", ppd = FALSE)
    })
  }
}

# ggeffects/tests/testthat/test-lrm.R
if (suppressWarnings(
  require("testthat") &&
  require("ggeffects") &&
  require("sjmisc") &&
  require("rms")
)) {
  context("ggeffects, lrm")

  data(efc)
  efc$neg_c_7d <- dicho(efc$neg_c_7)

  m1 <- lrm(neg_c_7d ~ c12hour + e42dep + c161sex + c172code, data = efc)

  test_that("ggpredict, lrm", {
    pr <- ggpredict(m1, "c12hour")
    expect_equal(pr$predicted[1], 0.4008948, tolerance = 1e-4)
  })

  test_that("ggeffect, lrm", {
    expect_null(ggeffect(m1, "c12hour"))
  })

  test_that("ggemmeans, lrm", {
    pr <- ggemmeans(m1, "c12hour")
    expect_equal(pr$predicted[1], 0.4008948, tolerance = 1e-4)
  })
}

# ggeffects/tests/testthat/test-tobit.R
if (suppressWarnings(
  require("testthat") &&
  require("ggeffects") &&
  require("AER")
)) {
  unloadNamespace("VGAM")
  context("ggeffects, tobit")

  data("Affairs")

  m1 <- tobit(affairs ~ age + yearsmarried + religiousness + occupation + rating, data = Affairs)

  test_that("ggpredict, tobit", {
    pr <- ggpredict(m1, "yearsmarried")
    expect_equal(pr$predicted[1], -10.15089, tolerance = 1e-4)
  })

  test_that("ggeffect, tobit", {
    expect_null(ggeffect(m1, "yearsmarried"))
  })

  test_that("ggemmeans, tobit", {
    pr <- ggemmeans(m1, "yearsmarried")
    expect_equal(pr$predicted[1], -10.15089, tolerance = 1e-4)
  })
}
# ggeffects/tests/testthat/test-glmrob_base.R
if (suppressWarnings(
  require("testthat") &&
  require("ggeffects") &&
  require("sjmisc") &&
  require("robustbase")
)) {
  context("ggeffects, glmrob")

  data(efc)
  efc$neg_c_7d <- dicho(efc$neg_c_7)

  m1 <- glmrob(neg_c_7d ~ c12hour + e42dep + c161sex + c172code, data = efc, family = binomial)

  test_that("ggpredict, glmrob", {
    pr <- ggpredict(m1, "c12hour")
    expect_equal(pr$predicted[1], 0.4035267, tolerance = 1e-4)
  })

  test_that("ggeffect, glmrob", {
    expect_null(ggeffect(m1, "c12hour"))
  })

  test_that("ggemmeans, glmrob", {
    expect_null(ggemmeans(m1, "c12hour"))
  })
}

# ggeffects/tests/testthat/test-get_titles.R
if (suppressWarnings(
  require("testthat") &&
  require("ggeffects") &&
  require("sjmisc")
)) {
  context("ggeffects, get titles")

  data(efc)
  efc$c172code <- sjmisc::to_factor(efc$c172code)
  fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

  mydf <- ggpredict(fit, terms = c("c12hour", "c161sex", "c172code"))

  test_that("ggpredict, get_x_title", {
    expect_equal(get_x_title(mydf), "average number of hours of care per week")
  })

  test_that("ggpredict, get_y_title", {
    expect_equal(get_y_title(mydf), "Total score BARTHEL INDEX")
  })

  test_that("ggpredict, get_legend_labels", {
    expect_equal(get_legend_labels(mydf), c("Male", "Female"))
  })

  test_that("ggpredict, get_legend_title", {
    expect_equal(get_legend_title(mydf), "carer's gender")
  })

  mydf <- ggpredict(fit, terms = "c172code")

  test_that("ggpredict, get_x_labels", {
    expect_equal(
      get_x_labels(mydf),
      c("low level of education", "intermediate level of education", "high level of education")
    )
  })
}

# ggeffects/tests/testthat/test-rstanarm-ppd.R
.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest && Sys.getenv("USER") != "travis") {
  if (suppressWarnings(
    require("testthat") &&
    require("rstanarm") &&
    require("ggeffects")
  )) {
    context("ggeffects, rstanarm-ppd")

    x <- rnorm(30, 0)
    b <- runif(2)
    s <- ifelse(diag(2) == 0, 0.23, 1)
    er <- cbind(rnorm(30, 0, s), rnorm(30, 0, s))
    y <- apply(t(b), 2, `*`, x) + er
    d <- data.frame(y1 = y[, 1], y2 = y[, 2], x)
    d$group <- sample(c("a", "b", "c"), size = nrow(d), replace = TRUE)

    m1 <- rstanarm::stan_mvmer(
      list(
        y1 ~ x + (1 | group),
        y2 ~ x + (1 | group)
      ),
      data = d, chains = 2, iter = 500
    )

    m2 <- rstanarm::stan_glm(y1 ~ x, data = d, chains = 2, iter = 500)

    test_that("ggpredict, rstanarm-ppd", {
      ggpredict(m1, ppd = TRUE)
      ggpredict(m1, "x", ppd = TRUE)
      ggpredict(m2, ppd = TRUE)
      ggpredict(m2, "x", ppd = TRUE)
    })

    test_that("ggpredict, rstanarm-ppd", {
      expect_error(ggpredict(m1, ppd = FALSE))
      expect_error(ggpredict(m1, "x", ppd = FALSE))
      ggpredict(m2, ppd = FALSE)
      ggpredict(m2, "x", ppd = FALSE)
    })
  }
}

# ggeffects/tests/testthat/test-gee.R
if (require("testthat") && require("ggeffects") && require("gee")) {
  data(warpbreaks)

  m1 <- gee(breaks ~ tension, id = wool, data = warpbreaks)

  test_that("ggpredict", {
    p <- ggpredict(m1, "tension")
    expect_equal(p$predicted[1], 36.38889, tolerance = 1e-3)
  })

  test_that("ggemmeans", {
    p <- ggemmeans(m1, "tension")
    expect_equal(p$predicted[1], 36.38889, tolerance = 1e-3)
  })
}

# ggeffects/tests/testthat/test-brms-trial.R
.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest && Sys.getenv("USER") != "travis") {
  if (suppressWarnings(
    require("testthat") &&
    require("brms") &&
    require("ggeffects") &&
    require("insight")
  )) {
    m1 <- insight::download_model("brms_mixed_6")
    m2 <- insight::download_model("brms_mv_4")
    m3 <- insight::download_model("brms_2")

    test_that("ggpredict, brms-trial", {
      ggpredict(m1, c("Base", "Trt"))
      ggpredict(m2, "Species")
      ggpredict(m3, c("treat", "c2"))
    })

    test_that("ggpredict, brms-trial", {
      p1 <-
      ggpredict(m1, c("Base", "Trt"))
      p2 <- ggemmeans(m1, c("Base", "Trt"))
      expect_equal(p1$predicted[1], p2$predicted[1], tolerance = 1e-3)
    })
  }
}

# ggeffects/tests/testthat/test-nlme.R
if (suppressWarnings(
  require("testthat") &&
  require("ggeffects") &&
  require("sjlabelled") &&
  require("nlme") &&
  require("lme4") &&
  require("sjmisc")
)) {
  context("ggeffects, lme")

  # lme ----
  data(Orthodont)
  fit <- lme(distance ~ age + Sex, data = Orthodont, random = ~ 1 | Subject)

  test_that("ggpredict, lme", {
    ggpredict(fit, "age")
    ggpredict(fit, c("age", "Sex"))
    ggpredict(fit, "age", type = "re")
    ggpredict(fit, c("age", "Sex"), type = "re")
  })

  test_that("ggeffect, lme", {
    ggeffect(fit, "age")
    ggeffect(fit, c("age", "Sex"))
  })

  m5 <- lmer(distance ~ age * Sex + (age | Subject), data = Orthodont)
  m6 <- lme(distance ~ age * Sex, data = Orthodont, random = ~ age | Subject)

  test_that("ggpredict, lme", {
    ggpredict(m5, c("age", "Sex"))
    ggpredict(m6, c("age", "Sex"))
    ggpredict(m5, c("age", "Sex"), type = "re")
    ggpredict(m6, c("age", "Sex"), type = "re")
  })
}

# ggeffects/tests/testthat/test-lmrob_base.R
if (suppressWarnings(
  require("testthat") &&
  require("ggeffects") &&
  require("sjmisc") &&
  require("robustbase")
)) {
  context("ggeffects, lmrob")

  data(efc)
  m1 <- lmrob(neg_c_7 ~ c12hour + e42dep + c161sex + c172code, data = efc)

  test_that("ggpredict, lmrob", {
    pr <- ggpredict(m1, "c12hour")
    expect_equal(pr$predicted[1], 11.02581, tolerance = 1e-4)
  })

  test_that("ggeffect, lmrob", {
    pr <- ggeffect(m1, "c12hour")
    expect_equal(pr$predicted[1], 11.02581, tolerance = 1e-4)
  })

  test_that("ggemmeans, lmrob", {
    expect_null(ggemmeans(m1, "c12hour"))
  })
}

# ggeffects/tests/testthat/test-vgam.R
unloadNamespace("gam")

if (require("testthat") && require("ggeffects") && require("VGAM")) {
  data("hunua")

  m1 <- vgam(agaaus ~
    vitluc + s(altitude, df = 2), binomialff, data = hunua)

  test_that("ggpredict", {
    p <- ggpredict(m1, "vitluc")
    expect_equal(p$predicted[1], 0.2751634, tolerance = 1e-3)
  })
}

# ggeffects/tests/testthat/test-rstanarm.R
.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest && Sys.getenv("USER") != "travis") {
  if (suppressWarnings(
    require("testthat") &&
    require("lme4") &&
    require("sjmisc") &&
    requireNamespace("rstanarm") &&
    require("ggeffects")
  )) {
    context("ggeffects, stan")

    # fit linear model
    data(sleepstudy)
    data(efc)
    sleepstudy$age <- round(runif(nrow(sleepstudy), min = 20, max = 60))
    sleepstudy$Rdicho <- dicho(sleepstudy$Reaction)
    efc <- to_label(efc, e42dep, c161sex, c172code)

    m <- rstanarm::stan_glmer(
      Reaction ~ Days + age + (1 | Subject),
      data = sleepstudy,
      QR = TRUE,
      # this next line is only to keep the example small in size!
      chains = 2, cores = 1, seed = 12345, iter = 500
    )

    m2 <- rstanarm::stan_glmer(
      Rdicho ~ Days + age + (1 | Subject),
      data = sleepstudy,
      QR = TRUE,
      family = binomial,
      chains = 2, iter = 500
    )

    m3 <- rstanarm::stan_glm(
      tot_sc_e ~ neg_c_7 + e42dep + barthtot + c172code + c161sex,
      data = efc,
      family = poisson("log"),
      chains = 2, iter = 500
    )

    test_that("ggpredict, rstan", {
      ggpredict(m, "Days")
      ggpredict(m, c("Days", "age"))
      ggpredict(m, "Days", type = "re")
      ggpredict(m, c("Days", "age"), type = "re")
      ggpredict(m, "Days", ppd = TRUE)
      ggpredict(m, c("Days", "age"), ppd = TRUE)
      ggpredict(m, "Days", type = "re", ppd = TRUE)
      ggpredict(m, c("Days", "age"), type = "re", ppd = TRUE)
    })

    test_that("ggpredict, rstan", {
      ggpredict(m2, "Days")
      ggpredict(m2, c("Days", "age"))
      ggpredict(m2, "Days", type = "re")
      ggpredict(m2, c("Days", "age"), type = "re")
      ggpredict(m2, "Days", ppd = TRUE)
      ggpredict(m2, c("Days", "age"), ppd = TRUE)
      ggpredict(m2, "Days", type = "re", ppd = TRUE)
      ggpredict(m2, c("Days", "age"), type = "re", ppd = TRUE)
    })

    test_that("ggpredict, rstan", {
      ggpredict(m3, "neg_c_7")
      ggpredict(m3, c("neg_c_7", "e42dep"))
      ggpredict(m3, "neg_c_7", ppd = TRUE)
      ggpredict(m3, c("neg_c_7", "e42dep"), ppd = TRUE)
    })
  }
}

# ggeffects/tests/testthat/test-print.R
if (suppressWarnings(
  require("testthat") &&
  require("ggeffects") &&
  require("sjmisc")
)) {
  context("ggeffects, print")

  # lm, linear regression ----
  data(efc)
  efc$c172code <- to_label(efc$c172code)
  efc$e42dep <- to_label(efc$e42dep)
  efc$c82cop1 <- as.numeric(efc$c82cop1)
  fit <- lm(barthtot ~ c12hour + neg_c_7 + c82cop1 + e42dep + c161sex + c172code, data = efc)

  test_that("ggpredict, print", {
    ggpredict(fit, terms = "c12hour")
    ggpredict(fit, terms = "c172code")
    ggpredict(fit, terms = "c161sex")
    ggpredict(fit, terms = c("c12hour", "c172code"))
    ggpredict(fit, terms = c("c12hour", "c161sex"))
    ggpredict(fit, terms = c("e42dep", "c161sex"))
    ggpredict(fit, terms = c("e42dep", "c172code"))
    ggpredict(fit, terms = c("c12hour", "c172code", "c161sex"))
    ggpredict(fit, terms = c("e42dep", "c172code", "c161sex"))
    ggpredict(fit, terms = c("c12hour", "c172code", "e42dep"))
    ggpredict(fit, terms = c("c161sex", "c172code", "e42dep"))
    ggpredict(fit, terms = c("c12hour", "neg_c_7"))
    ggpredict(fit, terms = c("c12hour", "neg_c_7 [all]"))
    ggpredict(fit, terms = c("c12hour", "neg_c_7 [quart2]"))
    ggpredict(fit, terms = c("c12hour", "neg_c_7 [quart2]", "c161sex"))
    ggpredict(fit, terms = c("c12hour", "neg_c_7", "c161sex"))
    ggpredict(fit, terms = c("c12hour", "neg_c_7 [quart2]", "c82cop1"))
    ggpredict(fit, terms = c("c12hour", "neg_c_7", "c82cop1"))
  })
}

# ggeffects/tests/testthat/test-contrasts.R
if (suppressWarnings(
  require("testthat") &&
  require("ggeffects") &&
  require("lme4") &&
  require("sjlabelled")
)) {
  context("ggeffects, lmer-contrasts")

  data(efc)
  efc$e15relat <- as_label(efc$e15relat)
  efc$e42dep <- as_label(efc$e42dep)
  efc$c172code <- as.factor(efc$c172code)

  m <-
    lmer(neg_c_7 ~ e42dep + c172code + c12hour + c82cop1 + (1 | e15relat), data = efc)

  test_that("ggpredict, contrasts-1", {
    options(contrasts = rep("contr.sum", 2))
    pr <- ggpredict(m, c("c12hour", "e42dep"))
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-2", {
    options(contrasts = rep("contr.sum", 2))
    pr <- ggpredict(m, c("c12hour", "c172code"))
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-3", {
    options(contrasts = rep("contr.sum", 2))
    pr <- ggpredict(m, c("c12hour", "c82cop1"))
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-4", {
    options(contrasts = rep("contr.treatment", 2))
    pr <- ggpredict(m, c("c12hour", "e42dep"))
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-5", {
    options(contrasts = rep("contr.treatment", 2))
    pr <- ggpredict(m, c("c12hour", "c172code"))
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-6", {
    options(contrasts = rep("contr.treatment", 2))
    pr <- ggpredict(m, c("c12hour", "c82cop1"))
    expect_false(anyNA(pr$std.error))
  })
}

# ggeffects/tests/testthat/test-clm.R
.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest) {
  if (require("testthat") && require("ggeffects") && require("ordinal") && require("MASS")) {
    data(wine, package = "ordinal")

    m1 <- clm(rating ~ temp * contact, data = wine)

    test_that("ggpredict", {
      p <- ggpredict(m1, "temp")
      expect_equal(p$predicted[1], 0.1960351, tolerance = 1e-3)
      ggpredict(m1, c("temp", "contact"))
    })

    test_that("ggeffect", {
      p <- ggeffect(m1, "temp")
      expect_equal(p$predicted[1], 0.110564082334497, tolerance = 1e-3)
      ggeffect(m1, c("temp", "contact"))
    })

    test_that("ggemmeans", {
      p <- ggemmeans(m1, "contact")
      expect_equal(p$predicted[1], 0.1097049, tolerance = 1e-3)
      ggemmeans(m1, c("temp", "contact"))
    })
  }
}

# ggeffects/tests/testthat/test-logistf.R
if
  (suppressWarnings(
  require("testthat") &&
  require("ggeffects") &&
  require("logistf")
)) {
  data(sex2)
  m1 <- logistf(case ~ age + oc, data = sex2)

  test_that("ggpredict, logistf", {
    pr <- ggpredict(m1, "age")
    expect_equal(pr$predicted[1], 0.5763746, tolerance = 1e-4)
  })

  test_that("ggeffect, logistf", {
    pr <- ggeffect(m1, "age")
    expect_equal(pr$predicted[1], 0.5762638, tolerance = 1e-4)
  })

  test_that("ggemmeans, logistf", {
    expect_null(ggemmeans(m1, "age"))
  })
}

# ggeffects/tests/testthat/test-clm2.R
.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest) {
  if (require("testthat") && require("ggeffects") && require("ordinal") && require("MASS")) {
    data(housing)

    m1 <- clm2(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)

    test_that("ggpredict", {
      expect_error(ggpredict(m1, "Infl"))
    })

    test_that("ggeffect", {
      p <- ggeffect(m1, "Infl")
      expect_equal(p$predicted[1], 0.457877729905463, tolerance = 1e-3)
      ggeffect(m1, c("Infl", "Type"))
    })

    test_that("ggemmeans", {
      expect_error(ggemmeans(m1, "Infl"))
    })
  }
}

# ggeffects/tests/testthat/test-negbin.R
if (suppressWarnings(
  require("testthat") &&
  require("ggeffects") &&
  require("sjmisc") &&
  require("MASS")
)) {
  context("ggeffects, negbin model")

  data(efc)
  efc$e42dep <- to_label(efc$e42dep)
  fit <- glm.nb(
    tot_sc_e ~ neg_c_7 + I(neg_c_7 ^ 2) + neg_c_7:e42dep + I(neg_c_7 ^ 2):e42dep + c12hour + c172code,
    data = efc,
    init.theta = 1.133641349,
    link = log
  )

  test_that("ggpredict, negbin", {
    ggpredict(fit, "neg_c_7")
    ggeffect(fit, "neg_c_7")
    # still fails on windows old-rel, so re-activate once emmeans is built on all platforms
    # expect_null(ggemmeans(fit, "neg_c_7"))
    ggpredict(fit, c("neg_c_7", "e42dep"))
    ggeffect(fit, c("neg_c_7", "e42dep"))
    # expect_null(ggemmeans(fit, c("neg_c_7", "e42dep")))
  })

  data(efc)
  fit <- glm.nb(
    tot_sc_e ~ neg_c_7 + I(neg_c_7 ^ 2) + neg_c_7:e42dep +
      I(neg_c_7 ^ 2):e42dep + c12hour + c172code,
    data = efc,
    init.theta = 1.133641349,
    link = log
  )

  test_that("ggpredict, negbin", {
    ggpredict(fit, "neg_c_7")
    ggeffect(fit, "neg_c_7")
    # expect_null(ggemmeans(fit, "neg_c_7"))
    ggpredict(fit, c("neg_c_7", "e42dep"))
    ggeffect(fit, c("neg_c_7", "e42dep"))
    # expect_null(ggemmeans(fit, c("neg_c_7", "e42dep")))
  })
}

# ggeffects/tests/testthat/test-poly-zeroinf.R
library(testthat)
library(ggeffects)

# glmmTMB ----
library(glmmTMB)
library(pscl)

data(Salamanders)

m1 <- glmmTMB(
  count ~ spp + poly(cover, 3) + mined + (1 | site),
  ziformula = ~DOY,
  dispformula = ~spp,
  data = Salamanders,
  family = nbinom2
)

m2 <- glmmTMB(
  count ~ spp + poly(cover, 3) + mined + (1 | site),
  ziformula = ~poly(DOY, 3),
  dispformula = ~spp,
  data = Salamanders,
  family = nbinom2
)

m3 <- zeroinfl(count ~ spp + poly(cover, 3) + mined | DOY, data = Salamanders)
m4 <- zeroinfl(count ~ spp + poly(cover, 3) + mined | poly(DOY, 3), data = Salamanders)

test_that("ggpredict, glmmTMB", {
  pr <- ggpredict(m1, c("cover", "mined", "spp"), type = "fe.zi")
  expect_equal(ncol(pr), 7)
  expect_equal(
    colnames(pr),
    c("x", "predicted", "std.error", "conf.low", "conf.high", "group", "facet")
  )
  pr <- ggpredict(m1, c("mined", "spp"), type = "fe.zi")
  expect_equal(ncol(pr), 6)
  pr <- ggpredict(m2, c("cover", "mined", "spp"), type = "fe.zi")
  expect_equal(ncol(pr), 7)
  pr <- ggpredict(m2, c("mined", "spp"), type = "fe.zi")
  expect_equal(ncol(pr), 6)
  pr <- ggpredict(m3, c("mined", "spp"), type = "fe.zi")
  expect_equal(ncol(pr), 6)
  pr <- ggpredict(m3, c("cover", "mined", "spp"), type = "fe.zi")
  expect_equal(ncol(pr), 7)
  pr <- ggpredict(m4, c("mined", "spp"), type = "fe.zi")
  expect_equal(ncol(pr), 6)
  pr <- ggpredict(m4, c("cover", "mined", "spp"), type = "fe.zi")
  expect_equal(ncol(pr), 7)
})

# ggeffects/tests/testthat/test-glmmTMB.R
.runThisTest <-
  Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest) {
  if (require("testthat") && require("ggeffects") && require("glmmTMB")) {
    context("ggeffects, glmmTMB")

    data(Owls)
    data(Salamanders)

    m1 <- glmmTMB(SiblingNegotiation ~ SexParent + ArrivalTime + (1 | Nest), data = Owls, family = nbinom1)
    m2 <- glmmTMB(SiblingNegotiation ~ SexParent + ArrivalTime + (1 | Nest), data = Owls, family = nbinom2)
    m4 <- glmmTMB(
      SiblingNegotiation ~ FoodTreatment + ArrivalTime + SexParent + (1 | Nest),
      data = Owls,
      ziformula = ~ 1,
      family = truncated_poisson(link = "log")
    )

    test_that("ggpredict, glmmTMB", {
      ggpredict(m1, c("ArrivalTime", "SexParent"))
      ggpredict(m2, c("ArrivalTime", "SexParent"))
      ggpredict(m4, c("FoodTreatment", "ArrivalTime [21,24,30]", "SexParent"))
      ggpredict(m1, c("ArrivalTime", "SexParent"), type = "re")
      ggpredict(m2, c("ArrivalTime", "SexParent"), type = "re")
      ggpredict(m4, c("FoodTreatment", "ArrivalTime [21,24,30]", "SexParent"), type = "re")
    })

    test_that("ggpredict, glmmTMB", {
      expect_message(ggpredict(m1, c("ArrivalTime", "SexParent"), type = "fe.zi"))
    })

    test_that("ggpredict, glmmTMB", {
      p1 <- ggpredict(m1, c("ArrivalTime", "SexParent"))
      p2 <- ggpredict(m2, c("ArrivalTime", "SexParent"))
      p3 <- ggemmeans(m1, c("ArrivalTime", "SexParent"))
      p4 <- ggemmeans(m2, c("ArrivalTime", "SexParent"))
      expect_equal(p1$predicted[1], p3$predicted[1], tolerance = 1e-3)
      expect_equal(p2$predicted[1], p4$predicted[1], tolerance = 1e-3)
    })

    m3 <- glmmTMB(count ~ spp + mined + (1 | site), ziformula = ~ spp + mined, family = truncated_poisson, data = Salamanders)
    m4 <- glmmTMB(count ~ spp + mined + (1 | site), ziformula = ~ spp + mined + (1 | site), family = truncated_poisson, data = Salamanders)
    m5 <- glmmTMB(count ~ spp + mined + cover + (1 | site), ziformula = ~ spp + mined, family = truncated_poisson, data = Salamanders)

    test_that("ggpredict, glmmTMB", {
      p1 <- ggpredict(m3, "mined", type = "fe")
      p2 <- ggpredict(m3, "mined", type = "fe.zi")
      p3 <- ggpredict(m3, "mined", type =
        "re")
      p4 <- ggpredict(m3, "mined", type = "re.zi")
      expect_gt(p3$conf.high[1], p1$conf.high[1])
      expect_gt(p4$conf.high[1], p2$conf.high[1])
      ggpredict(m3, "mined", type = "fe.zi", nsim = 50)
    })

    test_that("ggpredict, glmmTMB", {
      p1 <- ggpredict(m5, c("mined", "spp", "cover"), type = "fe")
      p2 <- ggemmeans(m5, c("mined", "spp", "cover"), type = "fe.zi")
      p3 <- ggpredict(m5, c("mined", "spp", "cover"), type = "fe")
      p4 <- ggemmeans(m5, c("mined", "spp", "cover"), type = "fe.zi")
      expect_equal(p1$predicted[1], p3$predicted[1], tolerance = 1e-3)
      expect_equal(p2$predicted[1], p4$predicted[1], tolerance = 1e-3)
    })

    test_that("ggpredict, glmmTMB", {
      p1 <- ggpredict(m3, "mined", type = "fe")
      p2 <- ggpredict(m3, c("mined", "spp"), type = "fe.zi")
      p3 <- ggemmeans(m3, "mined", type = "fe", condition = c(spp = "GP"))
      p4 <- ggemmeans(m3, c("mined", "spp"), type = "fe.zi")
      p5 <- ggpredict(m3, c("mined", "spp"), type = "fe")
      p6 <- ggemmeans(m3, c("mined", "spp"), type = "fe")
      expect_equal(p1$predicted[1], p3$predicted[1], tolerance = 1e-3)
      expect_equal(p2$predicted[1], p4$predicted[1], tolerance = 1e-3)
      expect_equal(p5$predicted[1], p6$predicted[1], tolerance = 1e-3)
    })

    test_that("ggpredict, glmmTMB", {
      p1 <- ggpredict(m4, "mined", type = "fe")
      p2 <- ggpredict(m4, "mined", type = "fe.zi")
      p3 <- ggpredict(m4, "mined", type = "re")
      p4 <- ggpredict(m4, "mined", type = "re.zi")
      expect_gt(p3$conf.high[1], p1$conf.high[1])
      expect_gt(p4$conf.high[1], p2$conf.high[1])
      p1 <- ggpredict(m4, c("spp", "mined"), type = "fe")
      p2 <- ggpredict(m4, c("spp", "mined"), type = "fe.zi")
      p3 <- ggpredict(m4, c("spp", "mined"), type = "re")
      p4 <- ggpredict(m4, c("spp", "mined"), type = "re.zi")
      expect_gt(p3$conf.high[1], p1$conf.high[1])
      expect_gt(p4$conf.high[1], p2$conf.high[1])
    })

    test_that("ggpredict, glmmTMB", {
      p <- ggpredict(m3, "spp", type = "fe.zi")
      expect_true(all(p$conf.low > 0))
      set.seed(100)
      p <- ggpredict(m3, "spp", type = "fe.zi")
      expect_true(all(p$conf.low > 0))
    })

    test_that("ggpredict,
      glmmTMB-simulate", {
      p <- ggpredict(m3, "mined", type = "sim")
      p <- ggpredict(m3, c("spp", "mined"), type = "sim")
      p <- ggpredict(m4, "mined", type = "sim")
      p <- ggpredict(m4, c("spp", "mined"), type = "sim")
    })

    md <- glmmTMB(
      count ~ spp + mined + (1 | site),
      ziformula = ~ spp + mined,
      dispformula = ~ DOY,
      family = truncated_poisson,
      data = Salamanders
    )

    test_that("ggpredict, glmmTMB", {
      p1 <- ggpredict(md, c("spp", "mined"), type = "fe")
      p2 <- ggpredict(md, c("spp", "mined"), type = "fe.zi")
      p3 <- ggpredict(md, c("spp", "mined"), type = "re")
      p4 <- ggpredict(md, c("spp", "mined"), type = "re.zi")
      expect_gt(p3$conf.high[1], p1$conf.high[1])
      expect_gt(p4$conf.high[1], p2$conf.high[1])
    })

    data(efc_test)
    m5 <- glmmTMB(
      negc7d ~ c12hour + e42dep + c161sex + c172code + (1 | grp),
      data = efc_test,
      ziformula = ~ c172code,
      family = binomial(link = "logit")
    )

    test_that("ggpredict, glmmTMB", {
      ggpredict(m5, "c161sex", type = "fe")
      ggpredict(m5, "c161sex", type = "fe.zi")
      ggpredict(m5, "c161sex", type = "re")
      ggpredict(m5, "c161sex", type = "re.zi")
    })

    data(efc_test)
    m6 <- glmmTMB(
      negc7d ~ c12hour + e42dep + c161sex + c172code + (1 | grp),
      data = efc_test,
      family = binomial(link = "logit")
    )

    test_that("ggpredict, glmmTMB", {
      ggpredict(m6, "c161sex", type = "fe")
      ggpredict(m6, "c161sex", type = "re")
    })

    data(efc_test)
    efc_test$tot_sc_e <- as.numeric(efc_test$tot_sc_e)
    efc_test$c172code <- as.factor(efc_test$c172code)

    m7 <- glmmTMB(
      tot_sc_e ~ neg_c_7 * c172code + c161sex + (1 | grp),
      data = efc_test,
      ziformula = ~ c172code,
      family = nbinom1
    )

    test_that("ggpredict, glmmTMB", {
      ggpredict(m7, "neg_c_7")
      ggpredict(m7, "neg_c_7 [all]")
      ggpredict(m7, "neg_c_7", type = "fe.zi")
      ggpredict(m7, "neg_c_7 [all]", type = "fe.zi")
      ggpredict(m7, c("neg_c_7", "c172code"))
      ggpredict(m7, c("neg_c_7 [all]", "c172code"))
      ggpredict(m7, c("neg_c_7", "c172code"), type = "fe.zi")
      ggpredict(m7, c("neg_c_7 [all]", "c172code"), type = "fe.zi")
    })

    m8 <- glmmTMB(
      tot_sc_e ~ neg_c_7 * c172code + (1 |
        grp),
      data = efc_test,
      ziformula = ~ c172code,
      family = nbinom1
    )

    test_that("ggpredict, glmmTMB", {
      ggpredict(m8, "neg_c_7")
      ggpredict(m8, "neg_c_7 [all]")
      ggpredict(m8, "neg_c_7", type = "fe.zi")
      ggpredict(m8, "neg_c_7 [all]", type = "fe.zi")
      ggpredict(m8, c("neg_c_7", "c172code"))
      ggpredict(m8, c("neg_c_7 [all]", "c172code"))
      ggpredict(m8, c("neg_c_7", "c172code"), type = "fe.zi")
      ggpredict(m8, c("neg_c_7 [all]", "c172code"), type = "fe.zi")
    })

    data(Salamanders)
    m9 <- glmmTMB(
      count ~ spp + cover + mined + (1 | site),
      ziformula = ~ DOY,
      dispformula = ~ spp,
      data = Salamanders,
      family = nbinom2
    )

    test_that("ggpredict, glmmTMB", {
      ggpredict(m9, c("cover", "mined", "spp"), type = "fe")
      ggpredict(m9, c("cover", "mined", "spp"), type = "fe.zi")
      ggpredict(m9, c("cover", "mined", "spp"), type = "re")
      ggpredict(m9, c("cover", "mined", "spp"), type = "re.zi")
    })
  }
}

# ggeffects/tests/testthat/test-survey.R
if (suppressWarnings(
  require("testthat") &&
  require("ggeffects") &&
  require("sjlabelled") &&
  require("survey") &&
  require("sjstats") &&
  require("sjmisc")
)) {
  context("ggeffects, survey")

  # svyglm -----
  data(nhanes_sample)
  nhanes_sample$total <- dicho(nhanes_sample$total)

  # create survey design
  des <- svydesign(
    id = ~SDMVPSU,
    strat = ~SDMVSTRA,
    weights = ~WTINT2YR,
    nest = TRUE,
    data = nhanes_sample
  )

  # fit logistic regression
  fit <- svyglm(total ~ RIAGENDR + age + RIDRETH1, des, family = binomial(link = "logit"))

  test_that("ggpredict, svyglm", {
    ggpredict(fit, "age")
    ggpredict(fit, c("age", "RIAGENDR"))
  })

  test_that("ggeffect, svyglm", {
    ggeffect(fit, "age")
    ggeffect(fit, c("age", "RIAGENDR"))
  })
}

# ggeffects/tests/testthat/test-correct_se_sorting.R
.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest || Sys.getenv("USER") == "travis") {
  if (suppressWarnings(
    require("testthat") &&
    require("ggeffects") &&
require("lme4") && require("sjmisc") )) { set.seed(123) dat <- data.frame( outcome = rbinom(n = 100, size = 1, prob = 0.35), var_binom = as.factor(rbinom(n = 100, size = 1, prob = 0.2)), var_cont = rnorm(n = 100, mean = 10, sd = 7), group = sample(letters[1:4], size = 100, replace = TRUE) ) dat$var_cont <- sjmisc::std(dat$var_cont) m1 <- glmer( outcome ~ var_binom + var_cont + (1 | group), data = dat, family = binomial(link = "logit") ) test_that("se-sorting", { pr <- ggpredict(m1, "var_cont") expect_equal(pr$predicted, c(0.336719595864838, 0.343075324628438, 0.349487808877511, 0.355955205473809, 0.362475595153102, 0.369046984214646, 0.375667306420833, 0.38233442510547, 0.389046135488166, 0.395800167191347, 0.402594186955394, 0.409425801546448, 0.41629256085041), tolerance = 1e-4) expect_equal(pr$std.error, c(0.618699912753018, 0.526519784780116, 0.441130838598037, 0.367300396177996, 0.313309075157131, 0.290440016857388, 0.305758174839891, 0.354345242445446, 0.424938667902817, 0.508453560698829, 0.599513975290497, 0.695161003669588, 0.793738286055424), tolerance = 1e-4) }) m2 <- glmer( outcome ~ var_binom * poly(var_cont, degree = 2, raw = TRUE) + (1 | group), data = dat, family = binomial(link = "logit") ) test_that("se-sorting", { pr <- ggpredict(m2, c("var_cont", "var_binom")) expect_equal( pr$predicted[1:10], c(0.166784864613204, 0.0615489873135249, 0.224095606762232, 0.141279183688389, 0.281091987683061, 0.250512693421789, 0.331840641853432, 0.354819954638596, 0.371819795779451, 0.422913790544266), tolerance = 1e-4) expect_equal( pr$std.error[1:10], c(1.34423391467447, 3.65581221675649, 0.920590886385926, 2.34007695224355, 0.595294475516507, 1.35709636952096, 0.384285954721907, 0.760109860798146, 0.302556537107688, 0.594810096113016), tolerance = 1e-4) }) } } ggeffects/tests/testthat/test-decimals.R0000644000176200001440000001555713533461244020114 0ustar liggesusers.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes" if (.runThisTest) { if 
# ---- ggeffects/tests/testthat/test-decimals.R ----
if (suppressWarnings(
  require("testthat") && require("ggeffects") && require("MASS")
)) {
  context("ggeffects, decimals")

  y <- c(
    180, 111, 0, 60, 317, 224, 76, 760, 1, 64, 80, 50, 147, 324, 149, 487,
    222, 222, 31, 2, 131, 198, 256, 364, 544, 789, 4, 113, 52, 375, 444, 239,
    1033, 303, 129, 118, 210, 99, 398, 101, 291, 154, 244, 519, 99, 0, 13, 5,
    416, 489, 462, 244, 74, 276, 1270, 81, 375, 1254, 36, 368, 114, 89, 248, 115
  )

  trat <- c(
    "FERTIL", "FERTIL", "FERTIL", "CTRL", "CTRL", "CTRL", "CTRL", "FERTIL",
    "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL",
    "CTRL", "CTRL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL",
    "FERTIL", "FERTIL", "FERTIL", "CTRL", "CTRL", "CTRL", "FERTIL", "FERTIL",
    "FERTIL", "FERTIL", "FERTIL", "CTRL", "CTRL", "FERTIL", "FERTIL", "CTRL",
    "CTRL", "CTRL", "CTRL", "CTRL", "CTRL", "FERTIL", "FERTIL", "FERTIL",
    "FERTIL", "FERTIL", "CTRL", "CTRL", "CTRL", "CTRL", "CTRL", "CTRL",
    "FERTIL", "FERTIL", "FERTIL", "FERTIL", "FERTIL", "CTRL", "CTRL", "CTRL"
  )

  C <- c(
    "X2062", "X2062", "X1002", "X1002", "X1002", "X1002", "X2062", "X1002",
    "X1002", "X1002", "X1043", "X1043", "X1002", "X1002", "X1043", "X1043",
    "X1043", "X1043", "X1043", "X1043", "X1043", "X2062", "X2062", "X2062",
    "X2062", "X2062", "X1043", "X1043", "X1043", "X1002", "X2062", "X2062",
    "X1002", "X1002", "X1002", "X1043", "X1002", "X1002", "X2062", "X1002",
    "X1002", "X1043", "X1043", "X2062", "X2062", "X1043", "X1043", "X1043",
    "X1002", "X1002", "X2062", "X2062", "X2062", "X2062", "X2062", "X1002",
    "X2062", "X2062", "X1043", "X1043", "X1043", "X1002", "X1002", "X1043"
  )

  x1 <- c(
    1.0510557018036901233415, -0.2181436362234073500499, -1.4873429742505046569079, -1.4873429742505046569079,
    1.0510557018036901233415, -0.2181436362234073500499, 1.0510557018036901233415, -0.2181436362234073500499,
    -1.4873429742505046569079, 1.0510557018036901233415, -1.4873429742505046569079, -0.2181436362234073500499,
    -1.4873429742505046569079, 1.0510557018036901233415, -1.4873429742505046569079, -0.2181436362234073500499,
    -1.4873429742505046569079, 1.0510557018036901233415, 1.0510557018036901233415, -1.4873429742505046569079,
    -0.2181436362234073500499, -1.4873429742505046569079, -0.2181436362234073500499, 1.0510557018036901233415,
    1.0510557018036901233415, -0.2181436362234073500499, -1.4873429742505046569079, -0.2181436362234073500499,
    1.0510557018036901233415, -1.4873429742505046569079, 1.0510557018036901233415, -1.4873429742505046569079,
    1.0510557018036901233415, -1.4873429742505046569079, -0.2181436362234073500499, -0.2181436362234073500499,
    -0.2181436362234073500499, 1.0510557018036901233415, 1.0510557018036901233415, -0.2181436362234073500499,
    1.0510557018036901233415, 1.0510557018036901233415, -0.2181436362234073500499, 1.0510557018036901233415,
    -0.2181436362234073500499, 1.0510557018036901233415, -1.4873429742505046569079, -0.2181436362234073500499,
    -0.2181436362234073500499, 1.0510557018036901233415, 1.0510557018036901233415, -0.2181436362234073500499,
    1.0510557018036901233415, -0.2181436362234073500499, 1.0510557018036901233415, 1.0510557018036901233415,
    -0.2181436362234073500499, 1.0510557018036901233415, -1.4873429742505046569079, -0.2181436362234073500499,
    1.0510557018036901233415, 1.0510557018036901233415, -0.2181436362234073500499, -0.2181436362234073500499
  )

  x2 <- c(
    -0.16315747994073070969989, -1.05360463900379586199563, -0.93415267997080730921766,
    -0.99252850004998782740273, 0.42735235724712267169068, -0.48798991709717626230614,
    2.23014349851008431002697, -0.21660376882282647570399, -0.72058928240081254745775,
    1.88298518683389759509339, -0.39593086292497708944538, 0.47803447414888039501690,
    -0.87730234961981912178430, 1.45009767915056353437819, -0.64258282192604954108361,
    -0.19048163415359734007559, -0.99134159919342790256991, 0.54366244394847862420050,
    1.66531762304370256622121, -0.79390525090304220334758, -0.01135437096435308254594,
    -1.12494052429199431486495, -0.85482479546253875568596, -0.01159930187192333150836,
    0.75465437362335030524463, -0.54144904373718538703741, -0.85598224435921810293593,
    -0.44893610617967599774047, 1.63402389894952282389795, -1.09431441740485424496399,
    1.22772575275634254765578, -0.88120822563015821504706, 2.18965726407797056651816,
    -0.88830486762534022204818, 0.49904785683978480914647, -0.96391262983003389575032,
    -0.84584241895151246470874, 0.01820625581143533663897, 2.04426796652258913056244,
    -0.51725662678387962500892, 0.68076969847060686547024, 0.31955573105719387028145,
    -0.76372130858969833333560, 0.48931143096917706358795, -0.63996954524487148496803,
    -0.74922815557619715232107, -1.22071057920300574117789, -1.06313667425498303842346,
    -0.39394827284068967276909, 1.14055737651990241054989, 0.08203018278837372057044,
    -0.81693043678611854119254, 0.30102881937280212554242, -0.71118362891575670481359,
    0.16401615885635717484448, 0.07371942371769488189237, -0.34288372588738608159886,
    2.66190601319726560802792, -0.71051489168642878446747, 0.02034948571161497923865,
    1.92939368169321356916157, 0.75986042229322825480864, -0.70121494097762349095859,
    -0.05413653704868248706106
  )

  dat <- data.frame(y, trat, C, x1, x2)
  m <- glm.nb(
    y ~ trat * C + x1 + I(x1^2) + x1:trat + I(x1^2):trat + x2,
    data = dat
  )

  test_that("ggeffect, decimals", {
    testthat::expect_gt(nrow(ggeffect(m, c("x1"))), 0)
    testthat::expect_gt(nrow(ggeffect(m, c("x2"))), 0)
    testthat::expect_gt(nrow(ggeffect(m, c("x1", "x2"))), 0)
    testthat::expect_gt(nrow(ggeffect(m, c("x1", "trat"))), 0)
  })

  test_that("ggemmeans, decimals", {
    testthat::expect_gt(nrow(ggemmeans(m, c("x1"))), 0)
    testthat::expect_gt(nrow(ggemmeans(m, c("x2"))), 0)
    testthat::expect_gt(nrow(ggemmeans(m, c("x1", "x2"))), 0)
    testthat::expect_gt(nrow(ggemmeans(m, c("x1", "trat"))), 0)
  })

  test_that("ggpredict, decimals", {
    testthat::expect_gt(nrow(ggpredict(m, c("x1"))), 0)
    testthat::expect_gt(nrow(ggpredict(m, c("x2"))), 0)
    testthat::expect_gt(nrow(ggpredict(m, c("x1", "x2"))), 0)
    testthat::expect_gt(nrow(ggpredict(m, c("x1", "trat"))), 0)
  })
}
}

# ---- ggeffects/tests/testthat/test-betareg.R ----
if (require("testthat") && require("ggeffects") && require("betareg")) {
  data("GasolineYield")
  m1 <- betareg(yield ~ batch + temp, data = GasolineYield)

  test_that("ggpredict", {
    p <- ggpredict(m1, "batch")
    expect_equal(p$predicted, unname(predict(m1, newdata = new_data(m1, "batch"))), tolerance = 1e-3)
    ggpredict(m1, c("batch", "temp"))
  })

  test_that("ggeffect", {
    p <- ggeffect(m1, "batch")
    expect_equal(p$predicted[1], 0.3122091, tolerance = 1e-3)
    ggeffect(m1, c("batch", "temp"))
  })

  test_that("ggemmeans", {
    p <- ggemmeans(m1, "batch")
    expect_equal(p$predicted[1], 0.3122091, tolerance = 1e-3)
    ggemmeans(m1, c("batch", "temp"))
  })
}

# ---- ggeffects/tests/testthat/test-coxph.R ----
if (require("testthat") && require("ggeffects") && require("survival")) {
  data("lung", package = "survival")
  # remove category 3 (outlier)
  lung <- subset(lung, subset = ph.ecog %in% 0:2)
  lung$sex <- factor(lung$sex, labels = c("male", "female"))
  lung$ph.ecog <- factor(lung$ph.ecog, labels = c("good", "ok", "limited"))

  m1 <- survival::coxph(survival::Surv(time, status) ~ sex + age + ph.ecog, data = lung)

  test_that("ggpredict", {
    p <- ggpredict(m1, "sex")
    expect_equal(p$predicted[1], 0.829228, tolerance = 1e-3)
    ggpredict(m1, c("sex", "age"))
  })

  test_that("ggemmeans", {
    p <- ggemmeans(m1, "sex")
    expect_equal(p$predicted[1], 0.5622074, tolerance = 1e-3)
    ggemmeans(m1, c("sex", "age"))
  })

  test_that("ggpredict", {
    p <- ggpredict(m1, "sex", type = "surv")
    expect_equal(p$predicted[1], 0.9966796, tolerance = 1e-3)
    p <- ggpredict(m1, "sex", type = "cumhaz")
    expect_equal(p$predicted[1], 0.003325958, tolerance = 1e-3)
  })
}

# ---- ggeffects/tests/testthat/test-gamm4.R ----
.runThisTest <- Sys.getenv("RunAllggeffectsTests") ==
  "yes"

if (.runThisTest && Sys.getenv("USER") != "travis") {
  unloadNamespace("gam")
  if (require("testthat") && require("ggeffects") && require("gamm4")) {
    set.seed(123)
    dat <- gamSim(1, n = 400, scale = 2) ## simulate 4 term additive truth
    dat$fac <- fac <- as.factor(sample(1:20, 400, replace = TRUE))
    dat$y <- dat$y + model.matrix(~ fac - 1) %*% rnorm(20) * .5

    set.seed(123)
    m1 <- gamm4(y ~ s(x0) + x1 + s(x2), data = dat, random = ~ (1 | fac))

    test_that("ggpredict", {
      p <- ggpredict(m1, "x1")
      expect_equal(p$predicted[1], 5.885441, tolerance = 1e-4)
      ggpredict(m1, c("x1", "x2"))
    })
  }
}

# ---- ggeffects/tests/testthat/test-polr.R ----
if (suppressWarnings(
  require("testthat") && require("ggeffects") && require("MASS")
)) {
  context("ggeffects, polr")

  options(contrasts = c("contr.treatment", "contr.poly"))
  fit <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)

  test_that("ggpredict, polr", {
    ggpredict(fit, "Infl")
    ggpredict(fit, c("Infl", "Type"))
    ggpredict(fit, c("Infl", "Type", "Cont"))
  })

  test_that("ggemmeans, polr", {
    ggemmeans(fit, "Infl")
    ggemmeans(fit, c("Infl", "Type"))
    ggemmeans(fit, c("Infl", "Type", "Cont"))
  })

  test_that("ggpredict, polr", {
    ggpredict(fit, "Infl [Low,High]")
    ggpredict(fit, c("Infl [Low,High]", "Type [Tower]"))
    ggpredict(fit, c("Infl [Medium,Low]", "Type [Terrace]", "Cont [Low]"))
  })

  test_that("ggemmeans, polr", {
    ggemmeans(fit, "Infl [Low,High]")
    ggemmeans(fit, c("Infl [Low,High]", "Type [Tower]"))
    ggemmeans(fit, c("Infl [Medium,Low]", "Type [Terrace]", "Cont [Low]"))
  })

  test_that("ggpredict, polr", {
    ggpredict(fit, "Infl [Low,High]", condition = c(Type = "Tower"))
    ggpredict(fit, c("Infl [Low,High]", "Type [Tower]"), condition = c(Cont = "Low"))
  })

  test_that("ggemmeans, polr", {
    ggemmeans(fit, "Infl [Low,High]", condition = c(Type = "Tower"))
    ggemmeans(fit, c("Infl [Low,High]", "Type [Tower]"), condition = c(Cont = "Low"))
  })

  test_that("ggemmeans, polr", {
    p1 <- ggemmeans(fit, "Infl", condition = c(Type = "Tower", Cont = "Low"))
    p2 <- ggpredict(fit, "Infl")
    expect_equal(
      p1$predicted[p1$x == 1 & p1$response.level == "Low"],
      p2$predicted[p2$x == 1 & p2$response.level == "Low"],
      tolerance = 1e-3
    )
  })

  test_that("ggeffect, polr", {
    ggeffect(fit, "Infl")
    ggeffect(fit, c("Infl", "Type"))
    ggeffect(fit, c("Infl", "Type", "Cont"))
  })

  test_that("ggeffect, polr", {
    ggeffect(fit, "Infl [Low,High]")
    ggeffect(fit, c("Infl [Low,High]", "Type [Tower]"))
    ggeffect(fit, c("Infl [Medium,Low]", "Type [Terrace]", "Cont [Low]"))
  })
}

# ---- ggeffects/tests/testthat/test-survreg.R ----
if (suppressWarnings(
  require("testthat") && require("ggeffects") && require("survival")
)) {
  context("ggeffects, survreg")

  data("ovarian")
  m1 <- survreg(Surv(futime, fustat) ~ ecog.ps + rx, data = ovarian, dist = "exponential")

  test_that("ggpredict, survreg", {
    pr <- ggpredict(m1, "ecog.ps")
    expect_equal(pr$predicted[1], 1637.551, tolerance = 1e-4)
  })

  test_that("ggeffect, survreg", {
    expect_null(ggeffect(m1, "ecog.ps"))
  })

  test_that("ggemmeans, survreg", {
    pr <- ggemmeans(m1, "ecog.ps")
    expect_equal(pr$predicted[1], 1637.551, tolerance = 1e-4)
  })
}

# ---- ggeffects/tests/testthat/test-contrasts3.R ----
if (suppressWarnings(
  require("testthat") && require("ggeffects") && require("lme4") && require("sjlabelled")
)) {
  context("ggeffects, lmer-contrasts")

  data(efc)
  efc$e15relat <- as_label(efc$e15relat)
  m <- lmer(neg_c_7 ~ c160age + c12hour + (1 | e15relat), data = efc)

  test_that("ggpredict, contrasts-1", {
    options(contrasts = rep("contr.sum", 2))
    pr <- ggpredict(m, c("c160age", "c12hour"))
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-2", {
    options(contrasts = rep("contr.sum", 2))
    pr <- ggpredict(m, "c160age")
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-3", {
    options(contrasts = rep("contr.sum", 2))
    pr <-
      ggpredict(m, "c12hour")
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-4", {
    options(contrasts = rep("contr.treatment", 2))
    pr <- ggpredict(m, c("c160age", "c12hour"))
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-5", {
    options(contrasts = rep("contr.treatment", 2))
    pr <- ggpredict(m, "c160age")
    expect_false(anyNA(pr$std.error))
  })

  test_that("ggpredict, contrasts-6", {
    options(contrasts = rep("contr.treatment", 2))
    pr <- ggpredict(m, "c12hour")
    expect_false(anyNA(pr$std.error))
  })
}

# ---- ggeffects/tests/testthat/test-svyglmnb.R ----
if (suppressWarnings(
  require("testthat") && require("ggeffects") && require("sjlabelled") &&
    require("survey") && require("sjstats") && require("sjmisc")
)) {
  context("ggeffects, svyglm.nb")

  # svyglm.nb -----
  data(nhanes_sample)

  # create survey design
  des <- svydesign(
    id = ~SDMVPSU,
    strat = ~SDMVSTRA,
    weights = ~WTINT2YR,
    nest = TRUE,
    data = nhanes_sample
  )

  # fit negative binomial regression
  fit <- svyglm.nb(total ~ RIAGENDR + age + RIDRETH1, des)

  test_that("ggpredict, svyglm.nb", {
    ggpredict(fit, "age")
    ggpredict(fit, c("age", "RIAGENDR"))
  })
}

# ---- ggeffects/tests/testthat/test-glm.R ----
if (suppressWarnings(
  require("testthat") && require("ggeffects") && require("lme4") &&
    require("sjlabelled") && require("sjmisc")
)) {
  context("ggeffects, logistic regression")

  # glm, logistic regression ----
  data(efc)
  efc$neg_c_7d <- dicho(efc$neg_c_7)

  fit <- glm(
    neg_c_7d ~ c12hour + e42dep + c161sex + c172code,
    data = efc,
    family = binomial(link = "logit")
  )
  m <- glm(
    cbind(incidence, size - incidence) ~ period,
    family = binomial,
    data = lme4::cbpp
  )

  test_that("ggpredict, glm", {
    ggpredict(fit, "c12hour")
    ggpredict(fit, c("c12hour", "c161sex"))
    ggpredict(fit, c("c12hour", "c161sex", "c172code"))
  })

  test_that("ggeffect, glm", {
    ggeffect(fit, "c12hour")
    ggeffect(fit, c("c12hour", "c161sex"))
    ggeffect(fit, c("c12hour", "c161sex", "c172code"))
  })

  test_that("ggemmeans, glm", {
    ggemmeans(fit, "c12hour")
    ggemmeans(fit, c("c12hour", "c161sex"))
    ggemmeans(fit, c("c12hour", "c161sex", "c172code"))
  })

  test_that("ggeffects, glm", {
    p1 <- ggpredict(m, "period")
    p2 <- ggeffect(m, "period")
    p3 <- ggemmeans(m, "period")
    expect_equal(p1$predicted[1], 0.2194245, tolerance = 1e-3)
    expect_equal(p2$predicted[1], 0.2194245, tolerance = 1e-3)
    expect_equal(p3$predicted[1], 0.2194245, tolerance = 1e-3)
  })

  test_that("ggpredict, glm, robust", {
    ggpredict(fit, "c12hour", vcov.fun = "vcovHC", vcov.type = "HC1")
    ggpredict(fit, c("c12hour", "c161sex"), vcov.fun = "vcovHC", vcov.type = "HC1")
    ggpredict(fit, c("c12hour", "c161sex", "c172code"), vcov.fun = "vcovHC", vcov.type = "HC1")
  })

  test_that("ggeffects, glm, robust", {
    ggpredict(m, "period", vcov.fun = "vcovHC", vcov.type = "HC1")
  })

  data(cbpp)
  cbpp$trials <- cbpp$size - cbpp$incidence

  m1 <- glmer(cbind(incidence, trials) ~ period + (1 | herd), data = cbpp, family = binomial)
  m2 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd), data = cbpp, family = binomial)
  m3 <- glm(cbind(incidence, trials) ~ period, data = cbpp, family = binomial)
  m4 <- glm(cbind(incidence, size - incidence) ~ period, data = cbpp, family = binomial)

  test_that("ggeffects, glm-matrix-columns", {
    ggpredict(m1, "period")
    ggpredict(m2, "period")
    ggpredict(m3, "period")
    ggpredict(m4, "period")
    ggemmeans(m1, "period")
    ggemmeans(m2, "period")
    ggemmeans(m3, "period")
    ggemmeans(m4, "period")
  })
}

# ---- ggeffects/tests/testthat/test-clean_vars.R ----
if (suppressWarnings(
  require("testthat") && require("ggeffects")
)) {
  context("ggeffects, clean_vars")

  # lm, linear regression ----
  data(efc)
  efc$c172code <- as.factor(efc$c172code)
  efc$e42dep <- as.factor(efc$e42dep)
  fit <- lm(barthtot ~ c12hour + c172code + e42dep, data = efc)

  test_that("ggpredict, clean_vars", {
    expect_equal(nrow(ggpredict(fit,
      "c172code")), 3)
    expect_equal(nrow(ggpredict(fit, "c172code [1,3]")), 2)
    expect_equal(nrow(ggpredict(fit, "c172code[1,3]")), 2)
  })

  test_that("ggemmeans, clean_vars", {
    expect_equal(nrow(ggemmeans(fit, "c172code")), 3)
    expect_equal(nrow(ggemmeans(fit, "c172code [1,3]")), 2)
    expect_equal(nrow(ggemmeans(fit, "c172code[1,3]")), 2)
  })

  test_that("ggpredict, clean_vars", {
    expect_equal(nrow(ggpredict(fit, "e42dep")), 4)
    expect_equal(nrow(ggpredict(fit, "e42dep [1,3]")), 2)
    expect_equal(nrow(ggpredict(fit, "e42dep[1,3]")), 2)
  })

  test_that("ggpredict, clean_vars", {
    expect_equal(nrow(ggpredict(fit, c("c172code", "e42dep"))), 12)
    expect_equal(nrow(ggpredict(fit, c("c172code [1,3]", "e42dep"))), 8)
    expect_equal(nrow(ggpredict(fit, c("c172code", "e42dep [1,3]"))), 6)
    expect_equal(nrow(ggpredict(fit, c("c172code [1,3]", "e42dep [1,3]"))), 4)
    expect_equal(nrow(ggpredict(fit, c("c172code[1,3]", "e42dep"))), 8)
    expect_equal(nrow(ggpredict(fit, c("c172code", "e42dep[1,3]"))), 6)
    expect_equal(nrow(ggpredict(fit, c("c172code[1,3]", "e42dep[1,3]"))), 4)
  })

  test_that("ggemmeans, clean_vars", {
    expect_equal(nrow(ggemmeans(fit, c("c172code", "e42dep"))), 12)
    expect_equal(nrow(ggemmeans(fit, c("c172code [1,3]", "e42dep"))), 8)
    expect_equal(nrow(ggemmeans(fit, c("c172code", "e42dep [1,3]"))), 6)
    expect_equal(nrow(ggemmeans(fit, c("c172code [1,3]", "e42dep [1,3]"))), 4)
    expect_equal(nrow(ggemmeans(fit, c("c172code[1,3]", "e42dep"))), 8)
    expect_equal(nrow(ggemmeans(fit, c("c172code", "e42dep[1,3]"))), 6)
    expect_equal(nrow(ggemmeans(fit, c("c172code[1,3]", "e42dep[1,3]"))), 4)
  })
}

# ---- ggeffects/tests/testthat/test-linear-models.R ----
if (suppressWarnings(
  require("testthat") && require("ggeffects") && require("sjlabelled") && require("sjmisc")
)) {
  context("ggeffects, linear model")

  # lm, linear regression ----
  data(efc)
  fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

  test_that("ggpredict, lm", {
    ggpredict(fit, "c12hour")
    ggpredict(fit, c("c12hour", "c161sex"))
    x <- ggpredict(fit, c("c12hour", "c161sex", "c172code"))
    print(x)
    x <- ggpredict(fit, c("c12hour", "c161sex", "neg_c_7"))
    print(x)
  })

  test_that("ggpredict, lm by", {
    expect_equal(nrow(ggpredict(fit, "c12hour [10:20]")), 11)
    expect_equal(nrow(ggpredict(fit, "c12hour [10:20 by=.2]")), 51)
    expect_equal(nrow(ggpredict(fit, "c12hour [10:20 by = .2]")), 51)
    expect_equal(nrow(ggpredict(fit, "c12hour [10:20by=.2]")), 51)
  })

  test_that("ggpredict, lm-vcov", {
    ggpredict(fit, c("c12hour", "c161sex"), vcov.fun = "vcovHC", vcov.type = "HC1")
  })

  test_that("ggpredict, lm-prediction-interval", {
    pr <- ggpredict(fit, c("c12hour", "c161sex"), interval = "predict")
    expect_equal(pr$conf.low[1], 27.43113, tolerance = 1e-4)
    pr <- ggpredict(fit, c("c12hour", "c161sex"), interval = "conf")
    expect_equal(pr$conf.low[1], 71.02894, tolerance = 1e-4)
    pr <- ggpredict(fit, c("c12hour", "c161sex"), interval = "predict", vcov.fun = "vcovHC", vcov.type = "HC1")
    expect_equal(pr$conf.low[1], 27.44084, tolerance = 1e-4)
    ggpredict(fit, c("c12hour", "c161sex"), interval = "predict", ci.lvl = NA)
    ggpredict(fit, c("c12hour", "c161sex"), interval = "conf", ci.lvl = NA)
  })

  test_that("ggpredict, lm-noci", {
    ggpredict(fit, c("c12hour", "c161sex"), ci.lvl = NA)
  })

  test_that("ggpredict, lm, ci.lvl", {
    ggpredict(fit, "c12hour", ci.lvl = .8)
    ggpredict(fit, c("c12hour", "c161sex"), ci.lvl = .8)
    ggpredict(fit, c("c12hour", "c161sex", "c172code"), ci.lvl = .8)
  })

  test_that("ggpredict, lm, typical", {
    ggpredict(fit, "c12hour", ci.lvl = .8, typical = "median")
    ggpredict(fit, c("c12hour", "c161sex"), ci.lvl = .8, typical = "median")
    ggpredict(fit, c("c12hour", "c161sex", "c172code"), ci.lvl = .8, typical = "median")
  })

  test_that("ggpredict, lm, condition", {
    ggpredict(fit, "c172code", condition = c(c12hour = 40), ci.lvl = .8, typical = "median")
    ggpredict(fit, c("c172code", "c161sex"), condition = c(c12hour = 40), ci.lvl = .8, typical = "median")
  })
  test_that("ggpredict, lm, pretty", {
    ggpredict(fit, "c12hour", full.data = TRUE, ci.lvl = .8, typical = "median")
    ggpredict(fit, c("c12hour", "c161sex"), full.data = TRUE, ci.lvl = .8, typical = "median")
  })

  test_that("ggpredict, lm, full.data", {
    ggpredict(fit, "c172code", full.data = TRUE, ci.lvl = .8, typical = "median")
    ggpredict(fit, c("c172code", "c161sex"), full.data = TRUE, ci.lvl = .8, typical = "median")
  })

  test_that("ggeffect, lm", {
    ggeffect(fit, "c12hour")
    ggeffect(fit, c("c12hour", "c161sex"))
    ggeffect(fit, c("c12hour", "c161sex", "c172code"))
  })

  test_that("ggemmeans, lm", {
    ggemmeans(fit, "c12hour")
    ggemmeans(fit, c("c12hour", "c161sex"))
    ggemmeans(fit, c("c12hour", "c161sex", "c172code"))
  })

  test_that("ggemmeans, lm, ci.lvl", {
    ggemmeans(fit, "c12hour", ci.lvl = .8)
    ggemmeans(fit, c("c12hour", "c161sex"), ci.lvl = .8)
    ggemmeans(fit, c("c12hour", "c161sex", "c172code"), ci.lvl = .8)
  })

  test_that("ggemmeans, lm, typical", {
    ggemmeans(fit, "c12hour", ci.lvl = .8, typical = "median")
    ggemmeans(fit, c("c12hour", "c161sex"), ci.lvl = .8, typical = "median")
    ggemmeans(fit, c("c12hour", "c161sex", "c172code"), ci.lvl = .8, typical = "median")
  })

  test_that("ggemmeans, lm, condition", {
    ggemmeans(fit, "c172code", condition = c(c12hour = 40), ci.lvl = .8, typical = "median")
    ggemmeans(fit, c("c172code", "c161sex"), condition = c(c12hour = 40), ci.lvl = .8, typical = "median")
  })

  test_that("ggemmeans, lm, pretty", {
    ggemmeans(fit, "c12hour", full.data = TRUE, ci.lvl = .8, typical = "median")
    ggemmeans(fit, c("c12hour", "c161sex"), full.data = TRUE, ci.lvl = .8, typical = "median")
  })

  data(efc)
  efc$c172code <- to_label(efc$c172code)
  fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

  test_that("ggpredict, lm", {
    ggpredict(fit, "c12hour [20,30,40]")
    ggpredict(fit, "c12hour [30:60]")
    ggpredict(fit, c("c12hour [30:60]", "c161sex", "c172code [high level of education,low level of education]"))
  })

  test_that("ggpredict, lm", {
    ggpredict(fit, "c12hour [meansd]")
    ggpredict(fit, "c12hour [minmax]")
    ggpredict(fit, c("c12hour [quart]", "c161sex", "c172code [high level of education,low level of education]"))
    ggpredict(fit, c("c12hour [zeromax]", "c161sex", "c172code [high level of education,low level of education]"))
    ggpredict(fit, c("c12hour [quart2]", "c161sex", "c172code [high level of education,low level of education]"))
  })

  test_that("ggeffect, lm", {
    ggeffect(fit, "c12hour [20,30,40]")
    ggeffect(fit, "c12hour [30:60]")
    ggeffect(fit, c("c12hour [30:60]", "c161sex", "c172code [high level of education,low level of education]"))
  })

  test_that("ggeffect, lm", {
    ggeffect(fit, "c12hour [meansd]")
    ggeffect(fit, "c12hour [minmax]")
    ggeffect(fit, c("c12hour [quart]", "c161sex", "c172code [high level of education,low level of education]"))
    ggeffect(fit, c("c12hour [zeromax]", "c161sex", "c172code [high level of education,low level of education]"))
    ggeffect(fit, c("c12hour [quart2]", "c161sex", "c172code [high level of education,low level of education]"))
  })

  test_that("ggemmeans, lm", {
    ggemmeans(fit, "c12hour [20,30,40]")
    ggemmeans(fit, "c12hour [30:60]")
    ggemmeans(fit, c("c12hour [30:60]", "c161sex", "c172code [high level of education,low level of education]"))
  })

  test_that("ggemmeans, lm", {
    ggemmeans(fit, "c12hour [meansd]")
    ggemmeans(fit, "c12hour [minmax]")
    ggemmeans(fit, c("c12hour [quart]", "c161sex", "c172code [high level of education,low level of education]"))
    ggemmeans(fit, c("c12hour [zeromax]", "c161sex", "c172code [high level of education,low level of education]"))
    ggemmeans(fit, c("c12hour [quart2]", "c161sex", "c172code [high level of education,low level of education]"))
  })

  data(efc)
  efc$c172code <- to_label(efc$c172code)
  fit <- lm(barthtot ~ log(c12hour) + c161sex + c172code, data = efc)

  test_that("ggpredict, lm, log", {
    ggpredict(fit, "c12hour [meansd]")
    ggpredict(fit, "c12hour [minmax]")
    ggpredict(fit, c("c12hour", "c172code [high level of education,low level of education]"))
    ggpredict(fit, c("c12hour [exp]", "c172code [high level of education,low level of education]"))
  })

  test_that("ggeffect, lm, log", {
    ggeffect(fit, "c12hour [meansd]")
    ggeffect(fit, "c12hour [minmax]")
    ggeffect(fit, c("c12hour", "c172code [high level of education,low level of education]"))
    ggeffect(fit, c("c12hour [exp]", "c172code [high level of education,low level of education]"))
  })

  test_that("ggeffect, lm, no_space", {
    ggeffect(fit, "c12hour[meansd]")
    ggeffect(fit, "c12hour[minmax]")
    ggeffect(fit, c("c12hour", "c172code[high level of education,low level of education]"))
    ggeffect(fit, c("c12hour[exp]", "c172code[high level of education,low level of education]"))
  })

  test_that("ggemmeans, lm, log", {
    ggemmeans(fit, "c12hour [meansd]")
    ggemmeans(fit, "c12hour [minmax]")
    ggemmeans(fit, c("c12hour", "c172code [high level of education,low level of education]"))
    ggemmeans(fit, c("c12hour [exp]", "c172code [high level of education,low level of education]"))
  })

  test_that("ggemmeans, lm, no_space", {
    ggemmeans(fit, "c12hour[meansd]")
    ggemmeans(fit, "c12hour[minmax]")
    ggemmeans(fit, c("c12hour", "c172code[high level of education,low level of education]"))
    ggemmeans(fit, c("c12hour[exp]", "c172code[high level of education,low level of education]"))
  })

  test_that("ggpredict, lm formula", {
    ggpredict(fit, ~ c12hour)
    ggpredict(fit, ~ c12hour + c161sex)
    ggpredict(fit, ~ c12hour + c161sex + c172code)
  })

  d <- subset(efc, select = c(barthtot, c12hour, neg_c_7, c172code))
  d <- na.omit(d)

  m1 <- lm(barthtot ~ c12hour + poly(neg_c_7, 2) + c172code, data = d)
  m2 <- lm(barthtot ~ c12hour + poly(neg_c_7, 3, raw = TRUE) + c172code, data = d)
  m3 <- lm(barthtot ~ scale(c12hour) + poly(neg_c_7, 2) + c172code, data = d)

  test_that("ggpredict, lm", {
    ggpredict(m1, "neg_c_7")
    ggpredict(m2, "neg_c_7")
    ggpredict(m3, "neg_c_7")
    ggpredict(m3, "c12hour")
  })

  test_that("ggemmeans, lm", {
    ggemmeans(m1, "neg_c_7")
    ggemmeans(m2, "neg_c_7")
    ggemmeans(m3, "neg_c_7")
    ggemmeans(m3, "c12hour")
  })

  data(efc)
  fit <- lm(barthtot ~
    c12hour + neg_c_7, data = efc)

  test_that("ggemmeans, lm", {
    p1 <- ggemmeans(fit, "neg_c_7")
    p2 <- ggeffect(fit, "neg_c_7")
    p3 <- ggpredict(fit, "neg_c_7")
    expect_equal(p1$predicted[1], 78.2641, tolerance = 1e-3)
    expect_equal(p2$predicted[1], 78.2641, tolerance = 1e-3)
    expect_equal(p3$predicted[1], 78.2641, tolerance = 1e-3)
  })

  test_that("ggemmeans, lm", {
    p1 <- ggemmeans(fit, "neg_c_7 [5,10]")
    p2 <- ggeffect(fit, "neg_c_7 [5,10]")
    p3 <- ggpredict(fit, "neg_c_7 [5,10]")
    expect_equal(p1$predicted[1], 80.58504, tolerance = 1e-3)
    expect_equal(p2$predicted[1], 80.58504, tolerance = 1e-3)
    expect_equal(p3$predicted[1], 80.58504, tolerance = 1e-3)
  })
}

# ---- ggeffects/tests/testthat/test-condition.R ----
if (suppressWarnings(
  require("testthat") && require("ggeffects") && require("lme4") && require("sjmisc")
)) {
  context("ggeffects, condition")

  data(efc)
  efc$e42dep <- to_label(efc$e42dep)
  fit <- lm(barthtot ~ c12hour + neg_c_7 + e42dep + c172code, data = efc)

  test_that("ggpredict, condition", {
    ggpredict(fit, "c172code")
    ggpredict(fit, "c172code", condition = c(c12hour = 40))
    ggpredict(fit, "c172code", condition = c(c12hour = 40, e42dep = "severely dependent"))
    ggpredict(fit, "c172code", condition = c(e42dep = "severely dependent"))
  })

  test_that("ggemmeans, condition", {
    ggemmeans(fit, "c172code")
    ggemmeans(fit, "c172code", condition = c(c12hour = 40))
    ggemmeans(fit, "c172code", condition = c(c12hour = 40, e42dep = "severely dependent"))
    ggemmeans(fit, "c172code", condition = c(e42dep = "severely dependent"))
  })

  efc$neg_c_7d <- dicho(efc$neg_c_7)
  m1 <- glm(
    neg_c_7d ~ c12hour + e42dep + c161sex + c172code,
    data = efc,
    family = binomial(link = "logit")
  )

  test_that("ggpredict, glm", {
    ggpredict(m1, "c12hour", condition = c(e42dep = "severely dependent"))
    ggpredict(m1, c("c12hour", "c161sex"), condition = c(e42dep = "severely dependent"))
    ggpredict(m1, c("c12hour", "c161sex", "c172code"), condition = c(e42dep = "severely dependent"))
  })

  test_that("ggemmeans, glm", {
    ggemmeans(m1, "c12hour", condition = c(e42dep = "severely dependent"))
    ggemmeans(m1, c("c12hour", "c161sex"), condition = c(e42dep = "severely dependent"))
    ggemmeans(m1, c("c12hour", "c161sex", "c172code"), condition = c(e42dep = "severely dependent"))
  })

  efc$neg_c_7d <- dicho(efc$neg_c_7)
  m2 <- glm(
    neg_c_7d ~ c12hour + e42dep + c161sex + c172code,
    data = efc,
    family = binomial(link = "logit")
  )

  test_that("ggpredict, glm", {
    ggpredict(m2, "c12hour", condition = c(c172code = 1))
    ggpredict(m2, c("c12hour", "c161sex"), condition = c(c172code = 2))
  })

  data(efc)
  efc$grp <- to_label(efc$e15relat)
  efc$e42dep <- to_label(efc$e42dep)
  m3 <- lmer(neg_c_7 ~ c12hour + e42dep + c161sex + c172code + (1 | grp), data = efc)

  test_that("ggpredict, condition-lmer", {
    pr <- ggpredict(m3, "c12hour", type = "re")
    expect_equal(pr$predicted[1], 8.962075, tolerance = 1e-3)
    expect_equal(pr$std.error[1], 0.7345163, tolerance = 1e-3)

    pr <- ggpredict(m3, "c12hour", type = "re", condition = c(c172code = 1))
    expect_equal(pr$predicted[1], 8.62045, tolerance = 1e-3)
    expect_equal(pr$std.error[1], 0.75549, tolerance = 1e-3)

    pr <- ggpredict(m3, "c12hour", type = "re", condition = c(e42dep = "severely dependent"))
    expect_equal(pr$predicted[1], 12.83257, tolerance = 1e-3)
    expect_equal(pr$std.error[1], 0.7345163, tolerance = 1e-3)

    pr <- ggpredict(m3, "c12hour", type = "re", condition = c(e42dep = "severely dependent", c172code = 3))
    expect_equal(pr$predicted[1], 13.19621, tolerance = 1e-3)
    expect_equal(pr$std.error[1], 0.7667454, tolerance = 1e-3)

    pr <- ggpredict(m3, "c12hour", type = "re", condition = c(e42dep = "severely dependent", c172code = 3, grp = "sibling"))
    expect_equal(pr$predicted[1], 13.13315, tolerance = 1e-3)
    expect_equal(pr$std.error[1], 0.76675, tolerance = 1e-3)

    pr <- ggpredict(m3, "c12hour", type = "re", condition = c(c172code = 3, grp = "sibling"))
    expect_equal(pr$predicted[1], 9.26265, tolerance = 1e-3)
    expect_equal(pr$std.error[1], 0.76675, tolerance = 1e-3)

    pr <- ggpredict(m3, "c12hour", type = "re", condition = c(grp = "sibling"))
    expect_equal(pr$predicted[1], 8.89902, tolerance = 1e-3)
    expect_equal(pr$std.error[1], 0.73452, tolerance = 1e-3)
  })
}

# ---- ggeffects/tests/testthat/test-plot.R ----
if (suppressWarnings(
  require("testthat") && require("ggeffects") && require("sjlabelled") && require("sjmisc")
)) {
  context("ggeffects, plot")

  # lm, linear regression ----
  data(efc)
  efc$c172code <- to_label(efc$c172code)
  fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

  test_that("ggpredict, lm", {
    pr <- ggpredict(fit, "c12hour [20,30,40]")
    p <- plot(pr)
    p <- plot(pr, ci = FALSE)
    p <- plot(pr, ci = TRUE, ci.style = "dot")
    p <- plot(pr, add.data = TRUE)
    p <- plot(pr, add.data = TRUE, jitter = FALSE)
    p <- plot(pr, colors = "bw")
    p <- plot(pr, colors = "gs")

    pr <- ggpredict(fit, c("c12hour", "c172code"))
    p <- plot(pr)
    p <- plot(pr, ci = FALSE)
    p <- plot(pr, ci = TRUE, ci.style = "dot")
    p <- plot(pr, add.data = TRUE)
    p <- plot(pr, add.data = TRUE, jitter = 0)
    p <- plot(pr, facets = TRUE)
    p <- plot(pr, facets = FALSE)
    p <- plot(pr, use.theme = FALSE)
    p <- plot(pr, colors = "bw")
    p <- plot(pr, colors = "gs")
  })
}

# ---- ggeffects/tests/testthat/test-vglm.R ----
unloadNamespace("gam")
.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest) {
  if (require("testthat") && require("ggeffects") && require("VGAM")) {
    d.AD <- data.frame(
      treatment = gl(3, 3),
      outcome = gl(3, 1, 9),
      counts = c(18, 17, 15, 20, 10, 20, 25, 13, 12)
    )
    m1 <- vglm(counts ~ outcome + treatment, family = poissonff, data = d.AD, trace = TRUE)

    test_that("ggpredict", {
      p <- ggpredict(m1, "outcome")
      expect_equal(p$predicted[1], 21, tolerance = 1e-3)
    })

    set.seed(123)
    N <- 100
    X1 <- rnorm(N, 175, 7)
    X2 <- rnorm(N, 30, 8)
    Ycont <- 0.5 * X1 - 0.3 * X2 + 10 + rnorm(N, 0, 6)
    Yord <- cut(
      Ycont,
      breaks = quantile(Ycont),
      include.lowest = TRUE,
      labels = c("--", "-", "+", "++"),
      ordered = TRUE
    )
    dfOrd <- data.frame(X1, X2, Yord)
    m2 <- vglm(Yord ~ X1 + X2, family = propodds, data = dfOrd)

    test_that("ggpredict", {
      p <- ggpredict(m2, terms = "X1")
      expect_equal(p$predicted[1], 0.2633227, tolerance = 1e-3)
      expect_equal(nrow(p), 27)
      p <- ggpredict(m2, terms = "X1", ci = NA)
      expect_equal(p$predicted[1], 0.7366773, tolerance = 1e-3)
      expect_equal(nrow(p), 36)
    })

    data(pneumo)
    pneumo <- transform(pneumo, let = log(exposure.time))
    m3 <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)

    test_that("ggpredict", {
      p <- ggpredict(m3, "let")
      expect_equal(p$predicted[1], 0.005992263, tolerance = 1e-3)
      expect_equal(nrow(p), 16)
      p <- ggpredict(m3, "let", ci = NA)
      expect_equal(p$predicted[1], 0.9940077, tolerance = 1e-3)
      expect_equal(nrow(p), 24)
    })
  }
}

# ---- ggeffects/tests/testthat/test-rq.R ----
if (suppressWarnings(
  require("testthat") && require("ggeffects") && require("quantreg")
)) {
  context("ggeffects, quantreg")

  data(stackloss)
  m1 <- rq(stack.loss ~ Air.Flow + Water.Temp, data = stackloss, tau = .25)

  test_that("ggpredict, rq", {
    expect_warning(pr <- ggpredict(m1, "Air.Flow"))
    expect_equal(pr$predicted[1], 10.09524, tolerance = 1e-4)
  })

  test_that("ggeffect, rq", {
    expect_null(ggeffect(m1, "Air.Flow"))
  })

  test_that("ggemmeans, rq", {
    expect_null(ggemmeans(m1, "Air.Flow"))
  })
}

# ---- ggeffects/tests/testthat/test-zeroinfl.R ----
if (suppressWarnings(
  require("testthat") && require("ggeffects") && require("glmmTMB") && require("pscl")
)) {
  context("ggeffects, pscl")

  data(Salamanders)
  m1 <- zeroinfl(count ~ mined | mined, dist = "poisson", data = Salamanders)
  m2 <- hurdle(count ~ mined | mined, dist = "poisson", zero.dist = "poisson", data = Salamanders)
  m3 <- hurdle(count ~ mined | mined, dist = "poisson", zero.dist = "binomial", data = Salamanders)
  m4 <- hurdle(count ~ mined | mined, dist = "poisson", zero.dist = "binomial", link = "log", data = Salamanders)
  m5 <- zeroinfl(count ~ mined | mined, dist = "negbin", link = "log", data = Salamanders)

  test_that("ggpredict, pscl", {
    ggpredict(m1, "mined", type = "fe")
    ggpredict(m1, "mined", type = "fe.zi")
    ggpredict(m2, "mined", type = "fe")
    ggpredict(m2, "mined", type = "fe.zi")
    ggpredict(m3, "mined", type = "fe")
    ggpredict(m3, "mined", type = "fe.zi")
    ggpredict(m4, "mined", type = "fe")
    ggpredict(m4, "mined", type = "fe.zi")
    ggpredict(m5, "mined", type = "fe")
    ggpredict(m5, "mined", type = "fe.zi")
  })

  test_that("ggpredict, pscl", {
    skip_on_travis()
    skip_on_cran()

    set.seed(123)
    pr <- ggpredict(m1, "mined", type = "fe.zi")
    expect_equal(pr$conf.low, c(0.1731, 2.0172), tolerance = 1e-3)

    model <- zeroinfl(count ~ mined * spp | mined * spp, dist = "poisson", data = Salamanders)
    set.seed(123)
    pr <- ggpredict(model, c("mined", "spp"), type = "fe.zi")
    expect_equal(
      pr$conf.low,
      c(0, 0, 0.0556, 0, 0, 0.1398, 0.1517, 1.6219, 0.0574, 1.8075, 0.4951, 3.1064, 3.0941, 1.3263),
      tolerance = 1e-3
    )
  })

  test_that("ggemmeans, pscl", {
    ggemmeans(m1, "mined", type = "fe")
    ggemmeans(m1, "mined", type = "fe.zi")
    ggemmeans(m2, "mined", type = "fe")
    ggemmeans(m2, "mined", type = "fe.zi")
    ggemmeans(m3, "mined", type = "fe")
    ggemmeans(m3, "mined", type = "fe.zi")
    ggemmeans(m4, "mined", type = "fe")
    ggemmeans(m4, "mined", type = "fe.zi")
    ggemmeans(m5, "mined", type = "fe")
    ggemmeans(m5, "mined", type = "fe.zi")
  })

  test_that("compare, pscl", {
    p1 <- ggemmeans(m1, "mined", type = "fe")
    p2 <- ggpredict(m1, "mined", type = "fe")
    expect_equal(p1$predicted[1], p2$predicted[1], tolerance = 1e-3)

    p1 <- ggemmeans(m1, "mined", type = "fe.zi")
    p2 <- ggpredict(m1, "mined", type = "fe.zi")
    expect_equal(p1$predicted[1], p2$predicted[1], tolerance = 1e-3)

    p1 <- ggemmeans(m2, "mined", type = "fe")
    p2 <- ggpredict(m2, "mined", type = "fe")
    expect_equal(p1$predicted[1], p2$predicted[1], tolerance = 1e-3)

    p1 <- ggemmeans(m2, "mined", type = "fe.zi")
    p2 <- ggpredict(m2, "mined", type = "fe.zi")
    expect_equal(p1$predicted[1], p2$predicted[1], tolerance = 1e-3)

    p1 <- ggemmeans(m5, "mined", type = "fe")
    p2 <- ggpredict(m5, "mined", type = "fe")
    expect_equal(p1$predicted[1], p2$predicted[1], tolerance = 1e-3)

    p1 <- ggemmeans(m5, "mined", type = "fe.zi")
    p2 <- ggpredict(m5, "mined", type = "fe.zi")
    expect_equal(p1$predicted[1], p2$predicted[1], tolerance = 1e-3)
  })
}

# ---- ggeffects/tests/testthat/test-glmer.R ----
.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes"

if (.runThisTest) {
  if (suppressWarnings(
    require("testthat") && require("ggeffects") && require("lme4") && require("glmmTMB")
  )) {
    context("ggeffects, glmer")

    # glmer ----
    data(efc_test)
    fit <- glmer(
      negc7d ~ c12hour + e42dep + c161sex + c172code + (1 | grp),
      data = efc_test,
      family = binomial(link = "logit")
    )

    test_that("ggpredict, glmer", {
      ggpredict(fit, "c12hour")
      ggpredict(fit, c("c12hour", "c161sex"))
      ggpredict(fit, c("c12hour", "c161sex", "c172code"))
      ggpredict(fit, "c12hour", type = "re")
      ggpredict(fit, c("c12hour", "c161sex"), type = "re")
      ggpredict(fit, c("c12hour", "c161sex", "c172code"), type = "re")
    })

    test_that("ggeffect, glmer", {
      ggeffect(fit, "c12hour")
      ggeffect(fit, c("c12hour", "c161sex"))
      ggeffect(fit, c("c12hour", "c161sex", "c172code"))
    })

    test_that("ggemmeans, glmer", {
      ggemmeans(fit, "c12hour")
      ggemmeans(fit, c("c12hour", "c161sex"))
      ggemmeans(fit, c("c12hour", "c161sex", "c172code"))
    })

    if (Sys.getenv("USER") != "travis") {
      m <- insight::download_model("merMod_5")
      dd <- insight::get_data(m)

      test_that("ggpredict, glmer.nb", {
        ggpredict(m, "f1")
        ggpredict(m, "f1", type = "re")
        ggpredict(m, c("f1", "f2"))
        ggpredict(m, c("f1", "f2"), type = "re")
        ggemmeans(m, "f1")
        ggemmeans(m, c("f1", "f2"))
      })

      test_that("ggpredict, glmer.nb-simulate", {
        ggpredict(m, c("f1", "f2"), type = "sim")
      })
    }

    data(cbpp)
    cbpp$trials <- cbpp$size -
cbpp$incidence m1 <- glmer(cbind(incidence, trials) ~ period + (1 | herd), data = cbpp, family = binomial) m2 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd), data = cbpp, family = binomial) test_that("ggpredict, glmer, cbind", { ggpredict(m1, "period") ggpredict(m2, "period") ggpredict(m1, "period", type = "re") ggpredict(m2, "period", type = "re") ggemmeans(m1, "period") ggemmeans(m2, "period") }) test_that("compare, glmer, cbind", { p1 <- ggpredict(m1, "period") p2 <- ggemmeans(m1, "period") expect_equal(p1$predicted[1], p2$predicted[1], tolerance = 1e-3) }) } } ggeffects/tests/testthat/test-poisson.R0000644000176200001440000000117013522251731020003 0ustar liggesusersif (suppressWarnings( require("testthat") && require("ggeffects") )) { context("ggeffects, poisson model") # glm, poisson regression ---- data(efc) fit <- glm(tot_sc_e ~ neg_c_7 + c12hour + e42dep + c161sex + c172code, data = efc, family = poisson(link = "log")) test_that("ggpredict, glm", { ggpredict(fit, "c12hour") ggpredict(fit, c("c12hour", "c161sex")) ggpredict(fit, c("c12hour", "c161sex", "c172code")) }) test_that("ggeffect, glm", { ggeffect(fit, "c12hour") ggeffect(fit, c("c12hour", "c161sex")) ggeffect(fit, c("c12hour", "c161sex", "c172code")) }) } ggeffects/tests/testthat/test-ivreg.R0000644000176200001440000000125113451124203017417 0ustar liggesusers.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes" if (.runThisTest) { if (require("testthat") && require("ggeffects") && require("AER")) { data(CigarettesSW) CigarettesSW$rprice <- with(CigarettesSW, price / cpi) CigarettesSW$rincome <- with(CigarettesSW, income / population / cpi) CigarettesSW$tdiff <- with(CigarettesSW, (taxs - tax) / cpi) m1 <- ivreg(log(packs) ~ log(rprice) + log(rincome) | log(rincome) + tdiff + I(tax / cpi), data = CigarettesSW, subset = year == "1995") test_that("ggpredict", { p <- ggpredict(m1, "rprice [exp]") expect_equal(p$predicted[1], 76.99202, tolerance = 1e-3) }) } } 
ggeffects/tests/testthat/test-MCMCglmm.R0000644000176200001440000000106013477717331017717 0ustar liggesusers.runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes" if (.runThisTest && Sys.getenv("USER") != "travis") { if (require("testthat") && require("ggeffects") && require("MCMCglmm")) { set.seed(123) data(PlodiaPO) m1 <- MCMCglmm( PO ~ plate, random = ~FSfamily, data = PlodiaPO, verbose = FALSE, nitt = 1300, burnin = 300, thin = 1 ) test_that("ggpredict", { p <- ggpredict(m1, "plate") expect_equal(p$predicted[1], 1.056156, tolerance = 1e-4) }) } } ggeffects/tests/testthat/test-gls.R0000644000176200001440000000105613451124203017073 0ustar liggesusersif (require("testthat") && require("ggeffects") && require("nlme")) { data(Ovary) m1 <- gls(follicles ~ Time, Ovary, correlation = corAR1(form = ~ 1 | Mare)) test_that("ggpredict", { p <- ggpredict(m1, "Time") expect_equal(p$predicted[1], 11.49246, tolerance = 1e-3) }) test_that("ggeffect", { p <- ggeffect(m1, "Time") expect_equal(p$predicted[1], 11.49246, tolerance = 1e-3) }) test_that("ggemmeans", { p <- ggemmeans(m1, "Time") expect_equal(p$predicted[1], 11.49246, tolerance = 1e-3) }) } ggeffects/tests/testthat/test-gamm.R0000644000176200001440000000113613561225133017234 0ustar liggesusersunloadNamespace("gam") .runThisTest <- Sys.getenv("RunAllggeffectsTests") == "yes" if (.runThisTest) { if (require("testthat") && require("ggeffects") && require("mgcv")) { set.seed(0) dat <- gamSim(6, n = 200, scale = .2, dist = "poisson") m1 <- gamm( y ~ s(x0) + s(x1) + s(x2), family = poisson, data = dat, random = list(fac = ~ 1) ) test_that("ggpredict", { p <- ggpredict(m1, "x1") expect_equal(p$predicted[1], 15.5450060160087, tolerance = 1e-3) ggpredict(m1, c("x1", "x2")) }) } } ggeffects/tests/testthat.R0000644000176200001440000000037613451124203015335 0ustar liggesuserslibrary(testthat) library(ggeffects) if (length(strsplit(packageDescription("ggeffects")$Version, "\\.")[[1]]) > 3) { Sys.setenv("RunAllggeffectsTests" = 
"yes") } else { Sys.setenv("RunAllggeffectsTests" = "no") } test_check("ggeffects") ggeffects/vignettes/0000755000176200001440000000000013614017661014224 5ustar liggesusersggeffects/vignettes/introduction_randomeffects.Rmd0000644000176200001440000002247213614010146022307 0ustar liggesusers--- title: "Introduction: Marginal Effects for Random Effects Models" author: "Daniel Lüdecke" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Introduction: Marginal Effects for Random Effects Models} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r set-options, echo = FALSE} knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 4, message = FALSE, warning = FALSE) options(width = 800) if (!requireNamespace("ggplot2", quietly = TRUE) || !requireNamespace("lme4", quietly = TRUE) || !requireNamespace("glmmTMB", quietly = TRUE) || !requireNamespace("sjlabelled", quietly = TRUE)) { knitr::opts_chunk$set(eval = FALSE) } ``` This vignette shows how to calculate marginal effects that take the random-effect variances for mixed models into account. ## Marginal effects for mixed effects models Basically, the type of predictions, i.e. whether to account for the uncertainty of random effects or not, can be set with the `type`-argument. The default, `type = "fe"`, means that predictions are on the population-level and do not account for the random effect variances. Intervals are _confidence intervals_ for the predicted values. ```{r} library(ggeffects) library(lme4) data(sleepstudy) m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy) pr <- ggpredict(m, "Days") pr plot(pr) ``` When `type = "re"`, the predicted values _are still on the population-level_. However, the random effect variances are taken into account, meaning that the intervals are actually _prediction intervals_ and become larger. 
More technically speaking, `type = "re"` accounts for the uncertainty of the fixed effects _conditional on the estimates_ of the random-effect variances and conditional modes (BLUPs). The random-effect variance is the _mean_ random-effect variance. Calculation is based on the proposal from _Johnson et al. 2014_, which is also implemented in functions like [`performance::r2()`](https://easystats.github.io/performance/reference/r2_nakagawa.html) or [`insight::get_variance()`](https://easystats.github.io/insight/reference/get_variance.html) to get r-squared values or random effect variances for mixed models with more complex random effects structures. As can be seen, compared to the previous example with `type = "fe"`, predicted values are identical (both on the population-level). However, the standard errors, and thus the resulting confidence (or prediction) intervals, are much larger. ```{r} pr <- ggpredict(m, "Days", type = "re") pr plot(pr) ``` The reason why both `type = "fe"` and `type = "re"` return predictions at the population-level is that `ggpredict()` returns predicted values of the response _at specific levels_ of given model predictors, which are defined in the data frame that is passed to the `newdata`-argument (of `predict()`). The data frame requires data from _all_ model terms, including random effect terms. This again requires choosing certain levels or values for each random effect term as well, or setting those terms to zero or `NA` (for population-level). Since there is no general rule for which level(s) of the random effect terms to choose in order to represent the random effects structure in the data, using the population-level seems to be the clearest and most consistent approach. To get predicted values for a specific level of the random effect term, simply define this level in the `condition`-argument.
```{r} ggpredict(m, "Days", type = "re", condition = c(Subject = 330)) ``` Finally, it is possible to obtain predicted values by simulating from the model, where predictions are based on `simulate()`. ```{r} ggpredict(m, "Days", type = "sim") ``` ## Marginal effects for zero-inflated mixed models For zero-inflated mixed effects models, typically fitted with the **glmmTMB** or **GLMMadaptive** packages, predicted values can be conditioned on * the fixed effects of the conditional model only (`type = "fe"`) * the fixed effects and zero-inflation component (`type = "fe.zi"`) * the fixed effects of the conditional model only (population-level), taking the random-effect variances into account (`type = "re"`) * the fixed effects and zero-inflation component (population-level), taking the random-effect variances into account (`type = "re.zi"`) * all model parameters (`type = "sim"`) ```{r} library(glmmTMB) data(Salamanders) m <- glmmTMB( count ~ spp + mined + (1 | site), ziformula = ~ spp + mined, family = truncated_poisson, data = Salamanders ) ``` Similar to mixed models without zero-inflation component, `type = "fe"` and `type = "re"` for **glmmTMB**-models (with zero-inflation) both return predictions on the population-level, where the latter option accounts for the uncertainty of the random effects. In short, `predict(..., type = "link")` is called (however, predicted values are back-transformed to the response scale). ```{r} ggpredict(m, "spp") ggpredict(m, "spp", type = "re") ``` For `type = "fe.zi"`, the predicted response value is the expected value `mu*(1-p)` _without conditioning_ on random effects. Since the zero inflation and the conditional model are working in "opposite directions", a higher expected value for the zero inflation means a lower response, but a higher value for the conditional model means a higher response. 
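As a rough, supplementary sketch (not part of the original vignette), the expected value `mu*(1-p)` can also be reconstructed by hand from the two model components. This assumes that `predict()` for **glmmTMB** models supports the prediction types `"conditional"` (the conditional mean `mu`) and `"zprob"` (the probability `p` of a structural zero):

```{r eval=FALSE}
# hypothetical illustration: compute mu * (1 - p) manually,
# assuming predict() offers type = "conditional" and type = "zprob"
mu <- predict(m, type = "conditional") # conditional mean, response scale
p <- predict(m, type = "zprob")        # probability of a structural zero
head(mu * (1 - p))                     # expected response value
```

If these prediction types are available, the result should agree with `predict(m, type = "response")`.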
While it is possible to calculate predicted values with `predict(..., type = "response")`, standard errors and confidence intervals cannot be derived directly from the `predict()`-function. Thus, confidence intervals for `type = "fe.zi"` are based on quantiles of simulated draws from a multivariate normal distribution (see also _Brooks et al. 2017, pp.391-392_ for details). ```{r} ggpredict(m, "spp", type = "fe.zi") ``` For `type = "re.zi"`, the predicted response value is the expected value `mu*(1-p)`, accounting for the random-effect variances. Intervals are calculated in the same way as for `type = "fe.zi"`, except that the mean random effect variance is considered and thus _prediction intervals_ rather than confidence intervals are returned. ```{r} ggpredict(m, "spp", type = "re.zi") ``` Finally, it is possible to obtain predicted values by simulating from the model, where predictions are based on `simulate()` (see _Brooks et al. 2017, pp.392-393_ for details). To achieve this, use `type = "sim"`. ```{r} ggpredict(m, "spp", type = "sim") ``` ## Marginal effects for each level of random effects Marginal effects can also be calculated for each group level in mixed models. Simply add the name of the related random effects term to the `terms`-argument, and set `type = "re"`. In the following example, we fit a linear mixed model and first simply plot the marginal effects, _not_ conditioned on random-effect variances. ```{r} library(sjlabelled) data(efc) efc$e15relat <- as_label(efc$e15relat) m <- lmer(neg_c_7 ~ c12hour + c160age + c161sex + (1 | e15relat), data = efc) me <- ggpredict(m, terms = "c12hour") plot(me) ``` Changing the type to `type = "re"` still returns population-level predictions by default. Recall that the major difference between `type = "fe"` and `type = "re"` is the uncertainty in the variance parameters. This leads to larger confidence intervals (i.e. prediction intervals) for marginal effects with `type = "re"`.
```{r} me <- ggpredict(m, terms = "c12hour", type = "re") plot(me) ``` To compute marginal effects for each grouping level, add the related random term to the `terms`-argument. In this case, confidence intervals are not calculated, but marginal effects are conditioned on each group level of the random effects. ```{r} me <- ggpredict(m, terms = c("c12hour", "e15relat"), type = "re") plot(me) ``` Marginal effects, conditioned on random effects, can also be calculated for specific levels only. Add the related values into brackets after the variable name in the `terms`-argument. ```{r} me <- ggpredict(m, terms = c("c12hour", "e15relat [child,sibling]"), type = "re") plot(me) ``` The most complex plot in this scenario would be a term (`c12hour`) at certain values of two other terms (`c161sex`, `c160age`) for specific levels of random effects (`e15relat`), so we have four variables in the `terms`-argument. ```{r fig.height=6} me <- ggpredict( m, terms = c("c12hour", "c161sex", "c160age", "e15relat [child,sibling]"), type = "re" ) plot(me) ``` If the group factor has too many levels, you can also take a random sample of all possible levels and plot the marginal effects for this subsample of group levels. To do this, use the `sample`-option in the `terms`-argument, e.g. `terms = "Subject [sample=7]"`. ```{r} set.seed(123) m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy) me <- ggpredict(m, terms = c("Days", "Subject [sample=7]"), type = "re") plot(me) ``` You can also add the observed data points for each group using `add.data = TRUE`. ```{r} plot(me, add.data = TRUE) ``` # References Brooks ME, Kristensen K, Benthem KJ van, Magnusson A, Berg CW, Nielsen A, et al. glmmTMB Balances Speed and Flexibility Among Packages for Zero-inflated Generalized Linear Mixed Modeling. The R Journal. 2017;9: 378–400. Johnson PC, O'Hara RB. 2014. Extension of Nakagawa & Schielzeth's R2GLMM to random slopes models. Methods Ecol Evol, 5: 944-946.
(doi: 10.1111/2041-210X.12225) ggeffects/vignettes/introduction_plotmethod.Rmd0000644000176200001440000001554413614010064021647 0ustar liggesusers--- title: "Introduction: Plotting Marginal Effects" author: "Daniel Lüdecke" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Introduction: Plotting Marginal Effects} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r set-options, echo = FALSE} knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 4, message = FALSE, warning = FALSE) options(width = 800) if (!requireNamespace("ggplot2", quietly = TRUE) || !requireNamespace("lme4", quietly = TRUE) || !requireNamespace("survival", quietly = TRUE) || !requireNamespace("sjmisc", quietly = TRUE)) { knitr::opts_chunk$set(eval = FALSE) } ``` # plot()-method This vignette demonstrates the `plot()`-method of the **ggeffects**-package. It is recommended to read the [general introduction](ggeffects.html) first, if you haven't done this yet. If you don't want to write your own ggplot-code, **ggeffects** has a `plot()`-method with some convenient defaults, which allows quickly creating ggplot-objects. `plot()` has some arguments to tweak the plot-appearance. For instance, `ci` allows you to show or hide confidence bands (or error bars, for discrete variables), `facets` allows you to create facets even for just one grouping variable, or `colors` allows you to quickly choose from some color-palettes, including black & white colored plots. Use `add.data` to add the raw data points to the plot. **ggeffects** supports [labelled data](https://strengejacke.github.io/sjlabelled/) and the `plot()`-method automatically sets titles, axis- and legend-labels depending on the value and variable labels of the data.
```{r} library(ggeffects) library(sjmisc) data(efc) efc$c172code <- to_label(efc$c172code) fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc) ``` ## Facet by Group ```{r} dat <- ggpredict(fit, terms = c("c12hour", "c172code")) plot(dat, facet = TRUE) ``` ## No Facets, in Black & White ```{r} # don't use facets, b/w figure, w/o confidence bands plot(dat, colors = "bw", ci = FALSE) ``` ## Add Data Points to Plot ```{r} dat <- ggpredict(fit, terms = c("c12hour", "c172code")) plot(dat, add.data = TRUE) ``` ## Automatic Facetting ```{r} # for three variables, automatic facetting dat <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex")) plot(dat) ``` ## Automatic Selection of Error Bars or Confidence Bands ```{r} # categorical variables have errorbars dat <- ggpredict(fit, terms = c("c172code", "c161sex")) plot(dat) ``` ## Connect Discrete Data Points with Lines ```{r} # point-geoms for discrete x-axis can be connected with lines plot(dat, connect.lines = TRUE) ``` ## Create Panel Plots for more than three Terms For three grouping variables (i.e. if `terms` is of length four), one plot per `panel` (the values of the fourth variable in `terms`) is created, and a single, integrated plot is produced by default. Use `one.plot = FALSE` to return one plot per panel. ```{r fig.height = 8} # for four variables, automatic facetting and integrated panel dat <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex", "neg_c_7")) # use 'one.plot = FALSE' for returning multiple single plots plot(dat, one.plot = TRUE) ``` # Change appearance of confidence bands In some plots, the confidence bands are not represented by a shaded area (ribbons), but rather by error bars (with lines), dashed or dotted lines. Use `ci.style = "errorbar"`, `ci.style = "dash"` or `ci.style = "dot"` to change the style of confidence bands.
## Dashed Lines for Confidence Intervals ```{r} # dashed lines for CI dat <- ggpredict(fit, terms = "c12hour") plot(dat, ci.style = "dash") ``` ## Error Bars for Continuous Variables ```{r} # facet by group dat <- ggpredict(fit, terms = c("c12hour", "c172code")) plot(dat, facet = TRUE, ci.style = "errorbar", dot.size = 1.5) ``` ## Dotted Error Bars The style of error bars for plots with a categorical x-axis can also be changed. By default, these are "error bars", but `ci.style = "dot"` or `ci.style = "dash"` works as well. ```{r} dat <- ggpredict(fit, terms = "c172code") plot(dat, ci.style = "dot") ``` # Log-transform y-axis for binomial models For binomial models, the y-axis indicates the predicted probabilities of an event. In this case, error bars are not symmetrical. ```{r} library("lme4") m <- glm( cbind(incidence, size - incidence) ~ period, family = binomial, data = lme4::cbpp ) dat <- ggpredict(m, "period") # normal plot, asymmetrical error bars plot(dat) ``` Here you can use `log.y` to log-transform the y-axis. The `plot()`-method will automatically choose axis breaks and limits that fit well to the value range and log-scale. ```{r} # plot with log-transformed y-axis plot(dat, log.y = TRUE) ``` # Control y-axis appearance Furthermore, arguments in `...` are passed down to `ggplot2::scale_y_continuous()` (or `ggplot2::scale_y_log10()`, if `log.y = TRUE`), so you can control the appearance of the y-axis. ```{r} # plot with log-transformed y-axis, modify breaks plot( dat, log.y = TRUE, breaks = c(.05, .1, .15, .2, .25, .3), limits = c(.01, .3) ) ``` # Survival models `ggpredict()` also supports `coxph`-models from the **survival**-package and is able to either plot risk-scores (the default), probabilities of survival (`type = "surv"`) or cumulative hazards (`type = "cumhaz"`).
Since probabilities of survival and cumulative hazards change over time, the time-variable is automatically used as the x-axis in such cases, so the `terms`-argument only needs up to two variables. ```{r} data("lung", package = "survival") # remove category 3 (outlier, not nice in the plot) lung <- subset(lung, subset = ph.ecog %in% 0:2) lung$sex <- factor(lung$sex, labels = c("male", "female")) lung$ph.ecog <- factor(lung$ph.ecog, labels = c("good", "ok", "limited")) m <- survival::coxph(survival::Surv(time, status) ~ sex + age + ph.ecog, data = lung) # predicted risk-scores pr <- ggpredict(m, c("sex", "ph.ecog")) plot(pr) ``` ```{r} # probability of survival pr <- ggpredict(m, c("sex", "ph.ecog"), type = "surv") plot(pr) ``` ```{r} # cumulative hazards pr <- ggpredict(m, c("sex", "ph.ecog"), type = "cumhaz") plot(pr) ``` # Custom color palettes The **ggeffects**-package has a few pre-defined color-palettes that can be used with the `colors`-argument. Use `show_pals()` to see all available palettes.
```{r} show_pals() ``` Here are two examples showing how to use pre-defined colors: ```{r} dat <- ggpredict(fit, terms = c("c12hour", "c172code")) plot(dat, facet = TRUE, colors = "circus") ``` ```{r} dat <- ggpredict(fit, terms = c("c172code", "c12hour [quart]")) plot(dat, colors = "hero", dodge = 0.4) # increase space between error bars ``` ggeffects/vignettes/introduction_effectsatvalues.Rmd0000644000176200001440000002574613614007734022663 0ustar liggesusers--- title: "Introduction: Marginal Effects at Specific Values" author: "Daniel Lüdecke" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Introduction: Marginal Effects at Specific Values} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r set-options, echo = FALSE} knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 4, message = FALSE, warning = FALSE) options(width = 800) if (!requireNamespace("ggplot2", quietly = TRUE) || !requireNamespace("lme4", quietly = TRUE) || !requireNamespace("sjlabelled", quietly = TRUE) || !requireNamespace("sjmisc", quietly = TRUE)) { knitr::opts_chunk$set(eval = FALSE) } ``` # Marginal effects at specific values or levels This vignette shows how to calculate marginal effects at specific values or levels for the terms of interest. It is recommended to read the [general introduction](ggeffects.html) first, if you haven't done this yet. The `terms`-argument not only defines the model terms of interest, but each model term can be limited to certain values. This allows computing and plotting marginal effects for (grouping) terms at specific values only, or defining values for the main effect of interest. There are several options to define these values, which should always be placed in square brackets directly after the term name and can vary for each model term. 1. Concrete values are separated by a comma: `terms = "c172code [1,3]"`.
For factors, you could also use factor levels, e.g. `terms = "Species [setosa,versicolor]"`. 2. Ranges are specified with a colon: `terms = c("c12hour [30:80]", "c172code [1,3]")`. This would plot all values from 30 to 80 for the variable _c12hour_. By default, the step size is 1, i.e. `[1:4]` would create the range `1, 2, 3, 4`. You can choose different step sizes with `by`, e.g. `[1:4 by=.5]`. 3. Convenient shortcuts to calculate common values like mean +/- 1 SD (`terms = "c12hour [meansd]"`), quartiles (`terms = "c12hour [quart]"`) or minimum and maximum values (`terms = "c12hour [minmax]"`). See `values_at()` for the different options. 4. A function name. The function is then applied to all unique values of the indicated variable, e.g. `terms = "hp [exp]"`. You can also define your own functions and pass their names to the `terms`-values, e.g. `terms = "hp [own_function]"`. 5. If the _first_ variable specified in `terms` is a _numeric_ vector, for which no specific values are given, a "pretty range" is calculated (see `pretty_range()`), to avoid memory allocation problems for vectors with many unique values. To select all values, use the `[all]`-tag, e.g. `terms = "mpg [all]"`. If a _numeric_ vector is specified as _second_ or _third_ variable in `terms` (i.e. if this vector represents a grouping structure), representative values (see `values_at()`) are chosen, which is typically mean +/- SD. 6. To create a pretty range that should be smaller or larger than the default range (i.e. if no specific values would be given), use the `n`-tag, e.g. `terms = "age [n=5]"` or `terms = "age [n = 12]"`. Larger values for `n` return a larger range of predicted values. 7. Especially useful for plotting group levels of random effects with many levels is the `sample`-option, e.g. `terms = "Subject [sample=9]"`, which will sample nine values from all possible values of the variable `Subject`.
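Taken together, the options above can be combined freely across terms. The following schematic calls (a sketch, not run; they assume a fitted model object `fit` with a numeric predictor `c12hour` and a grouping factor `Subject`) summarize the bracket syntax:

```{r eval=FALSE}
ggpredict(fit, terms = "c12hour [30,50,80]")   # 1. specific values
ggpredict(fit, terms = "c12hour [30:80 by=5]") # 2. range with step size
ggpredict(fit, terms = "c12hour [meansd]")     # 3. mean +/- 1 SD
ggpredict(fit, terms = "c12hour [exp]")        # 4. apply a function
ggpredict(fit, terms = "c12hour [all]")        # 5. all values, no "prettifying"
ggpredict(fit, terms = "c12hour [n=10]")       # 6. pretty range with ~10 values
ggpredict(fit, terms = "Subject [sample=9]")   # 7. random sample of levels
```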
## Specific values and value range ```{r} library(ggeffects) library(ggplot2) data(efc) fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc) mydf <- ggpredict(fit, terms = c("c12hour [30:80]", "c172code [1,3]")) mydf ggplot(mydf, aes(x, predicted, colour = group)) + geom_line() ``` Defining value ranges is especially useful when variables are, for instance, log-transformed. `ggpredict()` then typically only uses the range of the log-transformed variable, which is in most cases not what we want. In such situations, specify the range in the `terms`-argument. ```{r} data(mtcars) mpg_model <- lm(mpg ~ log(hp), data = mtcars) # x-values and predictions based on the log(hp)-values ggpredict(mpg_model, "hp") # x-values and predictions based on hp-values from 50 to 150 ggpredict(mpg_model, "hp [50:150]") ``` By default, the step size for a range is 1, like `50, 51, 52, ...`. If you need a different step size, use `by=` inside the brackets, e.g. `"hp [50:60 by=.5]"`. This would create a range from 50 to 60, in steps of .5. ```{r} # range for x-values with .5-steps ggpredict(mpg_model, "hp [50:60 by=.5]") ``` ## Choosing representative values Especially in situations where we have two continuous variables in interaction terms, or where the "grouping" variable is continuous, it is helpful to select representative values of the grouping variable. Otherwise, predictions would be made for too many groups, which makes the marginal effects difficult to interpret. You can use * `"minmax"`: minimum and maximum values (lower and upper bounds) of the variable are used. * `"meansd"`: uses the mean value as well as one standard deviation below and above the mean value. * `"zeromax"`: is similar to the `"minmax"` option, however, 0 is always used as minimum value. This may be useful for predictors that don't have an empirical zero-value. * `"quart"` calculates and uses the quartiles (lower, median and upper), _including_ minimum and maximum value.
* `"quart2"` calculates and uses the quartiles (lower, median and upper), _excluding_ minimum and maximum value. * `"all"` takes all values of the vector. ```{r} data(efc) # short variable label, for plot attr(efc$c12hour, "label") <- "hours of care" fit <- lm(barthtot ~ c12hour * c161sex + neg_c_7, data = efc) mydf <- ggpredict(fit, terms = c("c161sex", "c12hour [meansd]")) plot(mydf) mydf <- ggpredict(fit, terms = c("c161sex", "c12hour [quart]")) plot(mydf) ``` ## Transforming values with functions The brackets in the `terms`-argument also accept the name of a valid function, to (back-)transform predicted values. In this example, an alternative would be to specify that values should be exponentiated, which is indicated by `[exp]` in the `terms`-argument: ```{r} # x-values and predictions based on exponentiated hp-values ggpredict(mpg_model, "hp [exp]") ``` It is possible to define any function, including custom functions: ```{r} # x-values and predictions based on doubled hp-values hp_double <- function(x) 2 * x ggpredict(mpg_model, "hp [hp_double]") ``` ## Pretty value ranges This section shows some examples of how the plotted output differs, depending on which value range is used. Some transformations, like polynomial or spline terms, but also quadratic or cubic terms, result in many predicted values. In such situations, predictions for some models lead to memory allocation problems. That is why `ggpredict()` "prettifies" certain value ranges by default, at least for some model types (like mixed models). To see the difference in the "curvilinear" trend, we use a quadratic term on a standardized variable.
```{r} library(sjmisc) library(sjlabelled) library(lme4) data(efc) efc$c12hour <- std(efc$c12hour) efc$e15relat <- as_label(efc$e15relat) m <- lmer( barthtot ~ c12hour + I(c12hour^2) + neg_c_7 + c160age + c172code + (1 | e15relat), data = efc ) me <- ggpredict(m, terms = "c12hour") plot(me) ``` ### Turn off "prettifying" As said above, `ggpredict()` "prettifies" the vector, resulting in a smaller set of unique values. This is less memory consuming and may be needed especially for more complex models. You can turn off automatic "prettifying" by adding the `"all"`-shortcut to the `terms`-argument. ```{r} me <- ggpredict(m, terms = "c12hour [all]") plot(me) ``` This results in a smooth plot, as all values from the term of interest are taken into account. ### Using different ranges for prettifying To modify the "prettifying", add the `"n"`-shortcut to the `terms`-argument. This allows you to select a feasible range of values that is smaller (and hence less memory consuming) than `"terms = ... [all]"`, but still produces smoother plots than the default prettifying. ```{r} me <- ggpredict(m, terms = "c12hour [n=2]") plot(me) ``` ```{r} me <- ggpredict(m, terms = "c12hour [n=10]") plot(me) ``` ## Marginal effects conditioned on specific values of the covariates The `typical`-argument determines the function that will be applied to the covariates to hold these terms at constant values. By default, this is the mean value, but other options (like median or mode) are possible as well. Use the `condition`-argument to define other values at which covariates should be held constant. `condition` requires a named vector, with the name indicating the covariate.
```{r} data(mtcars) mpg_model <- lm(mpg ~ log(hp) + disp, data = mtcars) # "disp" is held constant at its mean ggpredict(mpg_model, "hp [exp]") # "disp" is held constant at value 200 ggpredict(mpg_model, "hp [exp]", condition = c(disp = 200)) ``` ## Marginal effects for each level of random effects Marginal effects can also be calculated for each group level in mixed models. Simply add the name of the related random effects term to the `terms`-argument, and set `type = "re"`. In the following example, we fit a linear mixed model and first simply plot the marginal effects, _not_ conditioned on random effects. ```{r} library(sjlabelled) library(lme4) data(efc) efc$e15relat <- as_label(efc$e15relat) m <- lmer(neg_c_7 ~ c12hour + c160age + c161sex + (1 | e15relat), data = efc) me <- ggpredict(m, terms = "c12hour") plot(me) ``` Changing the type to `type = "re"` still returns population-level predictions by default. The major difference between `type = "fe"` and `type = "re"` is the uncertainty in the variance parameters. This leads to larger confidence intervals for marginal effects with `type = "re"`. ```{r} me <- ggpredict(m, terms = "c12hour", type = "re") plot(me) ``` To compute marginal effects for each grouping level, add the related random term to the `terms`-argument. In this case, confidence intervals are not calculated, but marginal effects are conditioned on each group level of the random effects. ```{r} me <- ggpredict(m, terms = c("c12hour", "e15relat"), type = "re") plot(me) ``` Marginal effects, conditioned on random effects, can also be calculated for specific levels only. Add the related values into brackets after the variable name in the `terms`-argument. ```{r} me <- ggpredict(m, terms = c("c12hour", "e15relat [child,sibling]"), type = "re") plot(me) ``` If the group factor has too many levels, you can also take a random sample of all possible levels and plot the marginal effects for this subsample of group levels.
To do this, use the `sample`-option in the `terms`-argument, e.g. `terms = "Subject [sample=8]"`. ```{r} data("sleepstudy") m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy) me <- ggpredict(m, terms = c("Days", "Subject [sample=8]"), type = "re") plot(me) ``` ggeffects/vignettes/introduction_plotcustomize.Rmd0000644000176200001440000001167013614007770022417 0ustar liggesusers--- title: "Introduction: Customize Plot Appearance" author: "Daniel Lüdecke" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Introduction: Customize Plot Appearance} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r set-options, echo = FALSE} knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 4, message = FALSE, warning = FALSE) options(width = 800) if (!requireNamespace("ggplot2", quietly = TRUE)) { knitr::opts_chunk$set(eval = FALSE) } ``` This vignette demonstrates how to customize plots created with the `plot()`-method of the **ggeffects**-package. `plot()` returns an object of class **ggplot**, so it is easy to apply further modifications to the resulting plot. You may want to load the ggplot2-package to do this: `library(ggplot2)`. Let's start with a default-plot: ```{r} library(ggeffects) library(ggplot2) data(mtcars) m <- lm(mpg ~ gear + as.factor(cyl) + wt, data = mtcars) # continuous x-axis dat <- ggpredict(m, terms = c("gear", "wt")) # discrete x-axis dat_categorical <- ggpredict(m, terms = c("cyl", "wt")) # default plot plot(dat) ``` ## Changing Plot and Axis Titles The simplest thing is to change the titles of the plot and the x- and y-axes. This can be done with `ggplot2::labs()`: ```{r} plot(dat) + labs( x = "Number of forward gears", y = "Miles/(US) gallon", title = "Predicted mean miles per gallon" ) ``` ## Changing the Legend Title The legend-title can also be changed using the `labs()`-function.
The legend in ggplot-objects refers to the aesthetic used for the grouping variable, which by default is `colour`, i.e. the plot is constructed in the following way:

```{r eval=FALSE}
ggplot(data, aes(x = x, y = predicted, colour = group))
```

### Plots with Default Colors

Hence, using `colour` in `labs()` changes the legend-title:

```{r}
plot(dat) + labs(colour = "Weight (1000 lbs)")
```

### Black-and-White Plots

For black-and-white plots, the group-aesthetic is mapped to different _linetypes_, not to different colours. Thus, the legend-title for black-and-white plots can be changed using `linetype` in `labs()`:

```{r}
plot(dat, colors = "bw") + labs(linetype = "Weight (1000 lbs)")
```

### Black-and-White Plots with Categorical Predictor

If the variable on the x-axis is discrete for a black-and-white plot, the group-aesthetic is mapped to different _shapes_, so the following code must be used to change the legend title:

```{r}
plot(dat_categorical, colors = "bw") + labs(shape = "Weight (1000 lbs)")
```

## Changing the x-Axis Appearance

The x-axis for plots returned from `plot()` is always _continuous_, even for discrete x-axis-variables. The reason for this is that many users are used to plots that connect the data points with lines, which is only possible for continuous x-axes. You can do this using the `connect.lines`-argument:

```{r}
plot(dat_categorical, connect.lines = TRUE)
```

### Categorical Predictors

Since the x-axis is continuous (i.e. `ggplot2::scale_x_continuous()`), you can use `scale_x_continuous()` to modify the x-axis, and change breaks, limits or labels.

```{r}
plot(dat_categorical) + scale_x_continuous(labels = c("four", "six", "eight"), breaks = c(1, 2, 3))
```

### Continuous Predictors

Or for continuous variables:

```{r}
plot(dat) + scale_x_continuous(breaks = 3:5, limits = c(2, 6))
```

## Changing the y-Axis Appearance

Arguments in `...` are passed down to `ggplot2::scale_y_continuous()` (resp.
`ggplot2::scale_y_log10()`, if `log.y = TRUE`), so you can control the appearance of the y-axis by putting the arguments directly into the call to `plot()`:

```{r}
plot(dat_categorical, breaks = seq(12, 30, 2), limits = c(12, 30))
```

## Changing the Legend Labels

The legend labels can also be changed using a `scale_*()`-function from **ggplot2**. Depending on the color-setting (see section **Changing the Legend Title**), the following functions can be used to change the legend labels:

* `scale_colour_manual()` resp. `scale_colour_brewer()`
* `scale_linetype_manual()`
* `scale_shape_manual()`

Since you overwrite an existing "color" scale, you typically need to provide the `values`- or `palette`-argument to manually set the colors, linetypes or shapes.

### Plots with Default Colors

For plots using default colors:

```{r}
plot(dat) + scale_colour_brewer(palette = "Set1", labels = c("-1 SD", "Mean", "+1 SD"))
```

### Black-and-White Plots

For black-and-white plots:

```{r}
plot(dat, colors = "bw") + scale_linetype_manual(values = 15:17, labels = c("-1 SD", "Mean", "+1 SD"))
```

### Black-and-White Plots with Categorical Predictor

For black-and-white plots with categorical x-axis:

```{r}
plot(dat_categorical, colors = "bw") + scale_shape_manual(values = 1:3, labels = c("-1 SD", "Mean", "+1 SD"))
```
ggeffects/vignettes/ggeffects.Rmd0000644000176200001440000002643013614007662016632 0ustar liggesusers---
title: "ggeffects: Marginal Effects of Regression Models"
author: "Daniel Lüdecke"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{ggeffects: Marginal Effects of Regression Models}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r set-options, echo = FALSE}
knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 3.5, message = FALSE, warning = FALSE)
options(width = 800)
if (!requireNamespace("ggplot2", quietly = TRUE) ||
    !requireNamespace("splines", quietly = TRUE) ||
    !requireNamespace("sjmisc", quietly = TRUE)) {
  knitr::opts_chunk$set(eval = FALSE)
}
```

# Aim of the ggeffects-package

Results of regression models are typically presented as tables that are easy to understand. For more complex models that include interaction or quadratic / spline terms, tables with numbers are less helpful and more difficult to interpret. In such cases, the visualization of _marginal effects_ is far easier to understand and allows one to intuitively grasp how predictors and outcome are associated, even for complex models.

**ggeffects** computes marginal effects (or: _estimated marginal means_) at the mean (MEM) or at representative values (MER) from statistical models, i.e. predictions generated by a model when one holds the non-focal variables constant and varies the focal variable(s). The result is returned as a data frame with a consistent structure, especially for further use with [ggplot](https://cran.r-project.org/package=ggplot2). Definitions of "marginal effects" [can be found here](https://stats.stackexchange.com/tags/marginal-effect/info).

Since the focus lies on plotting the data (the marginal effects), at least one model term needs to be specified for which the effects are computed. It is also possible to compute marginal effects for model terms, grouped by the levels of another model predictor. The package also allows plotting marginal effects for two-, three- or four-way-interactions, or for specific values of a model term only. Examples are shown below.

## Short technical note

`ggpredict()`, `ggemmeans()` and `ggeffect()` always return predicted values for the _response_ of a model (or _response distribution_ for Bayesian models).

Typically, `ggpredict()` returns confidence intervals based on the standard errors as returned by the `predict()`-function, assuming a normal distribution (`+/- 1.96 * SE`).
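As a minimal base-R sketch of this interval construction (using a simple `lm()` on `mtcars` as stand-in data, not one of the models from this vignette):

```r
# Sketch: confidence intervals as prediction +/- 1.96 * SE, using the
# standard errors returned by predict(). mtcars is only stand-in data.
fit <- lm(mpg ~ hp + wt, data = mtcars)
nd <- data.frame(hp = c(100, 150), wt = mean(mtcars$wt))

pr <- predict(fit, newdata = nd, se.fit = TRUE)
ci <- data.frame(
  predicted = pr$fit,
  conf.low  = pr$fit - 1.96 * pr$se.fit,
  conf.high = pr$fit + 1.96 * pr$se.fit
)
ci
```

`ggpredict()` performs the equivalent steps internally for models whose `predict()`-method returns standard errors.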
If `predict()` for a certain class does _not_ return standard errors (for example, *merMod*-objects), these are calculated manually, with the following steps: matrix-multiply `X` by the parameter vector `B` to get the predictions, then extract the variance-covariance matrix `V` of the parameters and compute `XVX'` to get the variance-covariance matrix of the predictions. The square-root of the diagonal of this matrix represents the standard errors of the predictions, which are then multiplied by 1.96 for the confidence intervals.

For mixed models, if `type = "re"` or `type = "re.zi"`, the uncertainty in the random effects is accounted for when calculating the standard errors. Hence, in such cases, the intervals may be considered as _prediction intervals_.

## Consistent and tidy structure

The returned data frames always have the same, consistent structure and column names, so it's easy to create ggplot-plots without the need to re-write the arguments to be mapped in each ggplot-call. `x` and `predicted` are the values for the x- and y-axis. `conf.low` and `conf.high` could be used as `ymin` and `ymax` aesthetics for ribbons to add confidence bands to the plot. `group` can be used as grouping-aesthetic, or for faceting.

The examples shown here mostly use **ggplot2**-code for the plots, however, there is also a `plot()`-method, which is described in the vignette [Plotting Marginal Effects](introduction_plotmethod.html).

# Marginal effects at the mean

`ggpredict()` computes predicted values for all possible levels and values from a model's predictors. In the simplest case, a fitted model is passed as first argument, and the term in question as second argument. Use the raw name of the variable for the `terms`-argument only - you don't need to write things like `poly(term, 3)` or `I(term^2)` for the `terms`-argument.
```{r}
library(ggeffects)
data(efc)
fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

ggpredict(fit, terms = "c12hour")
```

As you can see, `ggpredict()` (and `ggeffect()` or `ggemmeans()`) has a nice `print()`-method, which takes care of printing not too many rows (but always an equally spaced range of values, including the minimum and maximum value of the term in question) and giving some extra information. This is especially useful when predicted values are shown depending on the levels of other terms (see below).

The output shows the predicted values for the response at each value from the term _c12hour_. The data is already in shape for ggplot:

```{r}
library(ggplot2)
theme_set(theme_bw())

mydf <- ggpredict(fit, terms = "c12hour")
ggplot(mydf, aes(x, predicted)) + geom_line()
```

# Marginal effects at the mean by other predictors' levels

The `terms`-argument accepts up to four model terms, where the second to fourth terms indicate grouping levels. This allows predictions for the term in question at different levels for other model terms:

```{r}
ggpredict(fit, terms = c("c12hour", "c172code"))
```

Creating a ggplot is pretty straightforward: the `colour`-aesthetic is mapped to the `group`-column:

```{r}
mydf <- ggpredict(fit, terms = c("c12hour", "c172code"))
ggplot(mydf, aes(x, predicted, colour = group)) + geom_line()
```

A second grouping structure can be defined, which will create another column named `facet`, which - as the name implies - might be used to create a faceted plot:

```{r}
mydf <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex"))
mydf
ggplot(mydf, aes(x, predicted, colour = group)) + geom_line() + facet_wrap(~facet)
```

Finally, a third differentiation can be defined, creating another column named `panel`. In such cases, you may create multiple plots (for each value in `panel`). **ggeffects** takes care of this when you use `plot()` and automatically creates an integrated plot with all panels in one figure.
```{r fig.height = 8}
mydf <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex", "neg_c_7"))
plot(mydf)
```

# Marginal effects for each model term

If the `terms` argument is either missing or `NULL`, marginal effects for each model term are calculated. The result is returned as a list, which can be plotted manually (or using the `plot()` function).

```{r}
mydf <- ggpredict(fit)
mydf
```

# Two-Way, Three-Way- and Four-Way-Interactions

To plot the marginal effects of interaction terms, simply specify these terms in the `terms`-argument.

```{r}
library(sjmisc)
data(efc)

# make categorical
efc$c161sex <- to_factor(efc$c161sex)

# fit model with interaction
fit <- lm(neg_c_7 ~ c12hour + barthtot * c161sex, data = efc)

# select only levels 30, 50 and 70 from continuous variable Barthel-Index
mydf <- ggpredict(fit, terms = c("barthtot [30,50,70]", "c161sex"))
ggplot(mydf, aes(x, predicted, colour = group)) + geom_line()
```

Since the `terms`-argument accepts up to four model terms, you can also compute marginal effects for a 3-way-interaction or 4-way-interaction. To plot the marginal effects of three interaction terms, just like before, specify all three terms in the `terms`-argument.

```{r}
# fit model with 3-way-interaction
fit <- lm(neg_c_7 ~ c12hour * barthtot * c161sex, data = efc)

# select only levels 30, 50 and 70 from continuous variable Barthel-Index
mydf <- ggpredict(fit, terms = c("c12hour", "barthtot [30,50,70]", "c161sex"))
ggplot(mydf, aes(x, predicted, colour = group)) + geom_line() + facet_wrap(~facet)
```

4-way-interactions are rather confusing to print and plot. When plotting, multiple plots (for each level of the fourth interaction term) are plotted for the remaining three interaction terms. This can easily be done using the [`plot()`-method](https://strengejacke.github.io/ggeffects/articles/introduction_plotmethod.html).
```{r fig.height = 8}
# fit model with 4-way-interaction
fit <- lm(neg_c_7 ~ c12hour * barthtot * c161sex * c172code, data = efc)

# marginal effects for all 4 interaction terms
pr <- ggpredict(fit, c("c12hour", "barthtot", "c161sex", "c172code"))

# use plot() method, easier than own ggplot-code from scratch
plot(pr)
```

# Polynomial terms and splines

`ggpredict()` also works for models with polynomial terms or splines. The following code reproduces the plot from `?splines::bs`:

```{r}
library(splines)
data(women)

fm1 <- lm(weight ~ bs(height, df = 5), data = women)
dat <- ggpredict(fm1, "height")

ggplot(dat, aes(x, predicted)) +
  geom_line() +
  geom_point()
```

# Survival models

`ggpredict()` also supports `coxph`-models from the **survival**-package and is able to either plot risk-scores (the default), probabilities of survival (`type = "surv"`) or cumulative hazards (`type = "cumhaz"`). Since probabilities of survival and cumulative hazards change across time, the time-variable is automatically used as x-axis in such cases, so the `terms`-argument only needs up to **two** variables for `type = "surv"` or `type = "cumhaz"`.

```{r}
data("lung", package = "survival")

# remove category 3 (outlier)
lung <- subset(lung, subset = ph.ecog %in% 0:2)
lung$sex <- factor(lung$sex, labels = c("male", "female"))
lung$ph.ecog <- factor(lung$ph.ecog, labels = c("good", "ok", "limited"))

m <- survival::coxph(survival::Surv(time, status) ~ sex + age + ph.ecog, data = lung)

# predicted risk-scores
ggpredict(m, c("sex", "ph.ecog"))
```

```{r}
# probability of survival
ggpredict(m, c("sex", "ph.ecog"), type = "surv")
```

# Labelling the data

**ggeffects** makes use of the [sjlabelled-package](https://cran.r-project.org/package=sjlabelled) and supports [labelled data](https://cran.r-project.org/package=sjlabelled/vignettes/intro_sjlabelled.html).
If the data from the fitted models is labelled, the value and variable label attributes are usually copied to the model frame stored in the model object. **ggeffects** provides various _getter_-functions to access these labels, which are returned as character vectors and can be used in ggplot's `labs()`- or `scale_*()`-functions.

* `get_title()` - a generic title for the plot, based on the model family, like "predicted values" or "predicted probabilities"
* `get_x_title()` - the variable label of the first model term in `terms`.
* `get_y_title()` - the variable label of the response.
* `get_legend_title()` - the variable label of the second model term in `terms`.
* `get_x_labels()` - value labels of the first model term in `terms`.
* `get_legend_labels()` - value labels of the second model term in `terms`.

The data frame returned by `ggpredict()`, `ggemmeans()` or `ggeffect()` must be used as argument to one of the above function calls.

```{r}
get_x_title(mydf)
get_y_title(mydf)

ggplot(mydf, aes(x, predicted, colour = group)) +
  geom_line() +
  facet_wrap(~facet) +
  labs(
    x = get_x_title(mydf),
    y = get_y_title(mydf),
    colour = get_legend_title(mydf)
  )
```
ggeffects/vignettes/practical_logisticmixedmodel.Rmd0000644000176200001440000001470513614010244022574 0ustar liggesusers---
title: "Practical example: Logistic Mixed Effects Model with Interaction Term"
author: "Daniel Lüdecke"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Practical example: Logistic Mixed Effects Model with Interaction Term}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r set-options, echo = FALSE}
knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 5, message = FALSE, warning = FALSE)
if (!requireNamespace("magrittr", quietly = TRUE) ||
    !requireNamespace("lme4", quietly = TRUE) ||
    !requireNamespace("splines", quietly = TRUE) ||
    !requireNamespace("sjmisc", quietly = TRUE)) {
  knitr::opts_chunk$set(eval = FALSE)
}
```

This vignette demonstrates how to use *ggeffects* to compute and plot marginal effects of a logistic regression model. To cover some frequently asked questions by users, we'll fit a mixed model, including an interaction term as well as quadratic and spline terms. A general introduction to the package can be found in the vignette [marginal effects of regression models](ggeffects.html).

First, we load the required packages and create a sample data set with a binomial and continuous variable as predictor as well as a group factor. To avoid convergence warnings, the continuous variable is standardized.

```{r}
library(magrittr)
library(ggeffects)
library(sjmisc)
library(lme4)
library(splines)
set.seed(123)

dat <- data.frame(
  outcome = rbinom(n = 100, size = 1, prob = 0.35),
  var_binom = as.factor(rbinom(n = 100, size = 1, prob = 0.2)),
  var_cont = rnorm(n = 100, mean = 10, sd = 7),
  group = sample(letters[1:4], size = 100, replace = TRUE)
)

dat$var_cont <- sjmisc::std(dat$var_cont)
```

## Simple Logistic Mixed Effects Model

We start by fitting a simple mixed effects model.

```{r}
m1 <- glmer(
  outcome ~ var_binom + var_cont + (1 | group),
  data = dat,
  family = binomial(link = "logit")
)
```

For a discrete variable, marginal effects for all levels are calculated by default. For continuous variables, a pretty range of values is generated. See more details about value ranges in the vignette [marginal effects at specific values](introduction_effectsatvalues.html).

For logistic regression models, since *ggeffects* returns marginal effects on the response scale, the predicted values are predicted _probabilities_. Furthermore, for mixed models, the predicted values are typically at the _population_ level, not group-specific.

```{r message = TRUE}
ggpredict(m1, "var_binom")

ggpredict(m1, "var_cont")
```

To plot marginal effects, simply plot the returned results or use the pipe.
```{r message = FALSE}
# save marginal effects in an object and plot
me <- ggpredict(m1, "var_binom")
plot(me)

# plot using the pipe
ggpredict(m1, "var_cont") %>% plot()
```

## Logistic Mixed Effects Model with Interaction Term

Next, we fit a model with an interaction between the binomial and continuous variable.

```{r}
m2 <- glmer(
  outcome ~ var_binom * var_cont + (1 | group),
  data = dat,
  family = binomial(link = "logit")
)
```

To compute or plot marginal effects of interaction terms, simply specify these terms, i.e. the names of the variables, as character vector in the `terms`-argument. Since we have an interaction between `var_binom` and `var_cont`, the argument would be `terms = c("var_binom", "var_cont")`. However, the _first_ variable in the `terms`-argument is used as predictor along the x-axis. Marginal effects are then plotted for specific values or at specific levels from the _second_ variable.

If the second variable is a factor, marginal effects for each level are plotted. If the second variable is continuous, representative values are chosen (typically, mean +/- one SD, see [marginal effects at specific values](introduction_effectsatvalues.html)).

```{r message = TRUE}
ggpredict(m2, c("var_cont", "var_binom")) %>% plot()

ggpredict(m2, c("var_binom", "var_cont")) %>% plot()
```

## Logistic Mixed Effects Model with Quadratic Interaction Term

Now we fit a model with an interaction term, where the continuous variable is modelled as a quadratic term.

```{r}
m3 <- glmer(
  outcome ~ var_binom * poly(var_cont, degree = 2, raw = TRUE) + (1 | group),
  data = dat,
  family = binomial(link = "logit")
)
```

Again, *ggeffects* automatically plots all high-order terms when these are specified in the `terms`-argument. Hence, the function call is identical to the previous examples with interaction terms, which had no polynomial term included.
```{r message = TRUE}
ggpredict(m3, c("var_cont", "var_binom")) %>% plot()
```

As you can see, *ggeffects* also returned a message indicating that the plot may not look very smooth due to the involvement of polynomial or spline terms:

> Model contains splines or polynomial terms. Consider using `terms="var_cont [all]"` to get smooth plots. See also package-vignette 'Marginal Effects at Specific Values'.

This is because for mixed models, computing marginal effects with spline or polynomial terms may lead to memory allocation problems. If you are sure that this will not happen, add the `[all]`-tag to the `terms`-argument, as described in the message:

```{r message = TRUE}
ggpredict(m3, c("var_cont [all]", "var_binom")) %>% plot()
```

The above plot produces much smoother curves.

## Logistic Mixed Effects Model with Three-Way Interaction

The last model does not produce very nice plots, but for the sake of demonstration, we fit a model with three interaction terms, including polynomial and spline terms.

```{r message = FALSE}
set.seed(321)
dat <- data.frame(
  outcome = rbinom(n = 100, size = 1, prob = 0.35),
  var_binom = rbinom(n = 100, size = 1, prob = 0.5),
  var_cont = rnorm(n = 100, mean = 10, sd = 7),
  var_cont2 = rnorm(n = 100, mean = 5, sd = 2),
  group = sample(letters[1:4], size = 100, replace = TRUE)
)

m4 <- glmer(
  outcome ~ var_binom * poly(var_cont, degree = 2) * ns(var_cont2, df = 3) + (1 | group),
  data = dat,
  family = binomial(link = "logit")
)
```

Since we have marginal effects for *var_cont* at the levels of *var_cont2* and *var_binom*, we not only have groups, but also facets to plot all three "dimensions". Three-way interactions are plotted simply by specifying all terms in question in the `terms`-argument.
```{r message = TRUE}
ggpredict(m4, c("var_cont [all]", "var_cont2", "var_binom")) %>% plot()
```
ggeffects/vignettes/technical_stata.Rmd0000644000176200001440000000603513614010353020011 0ustar liggesusers---
title: "Technical Details: Different Output between Stata and ggeffects"
author: "Daniel Lüdecke"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Technical Details: Different Output between Stata and ggeffects}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r set-options, echo = FALSE}
knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 5, message = FALSE, warning = FALSE)
if (!requireNamespace("magrittr", quietly = TRUE)) {
  knitr::opts_chunk$set(eval = FALSE)
}
```

# Why is the output from Stata different from the output from ggeffects?

Stata's equivalent to the marginal effects produced by _ggeffects_ is the `margins`-command. However, the results are not always identical. For models from non-gaussian families, point estimates for the marginal effects are identical, but the confidence intervals differ. Here is an explanation of why they differ.

First, we fit a logistic regression model.

```{r}
library(magrittr)
set.seed(5)

data <- data.frame(
  outcome = rbinom(100, 1, 0.5),
  var1 = rbinom(100, 1, 0.1),
  var2 = rnorm(100, 10, 7)
)

m <- glm(
  outcome ~ var1 * var2,
  data = data,
  family = binomial(link = "logit")
)
```

## Example with graphical output

### The Stata plot

This is the code in Stata to produce a marginal effects plot.

```{r eval=FALSE}
use data.dta, clear
quietly logit outcome c.var1##c.var2
quietly margins, at(var2 = (-8(0.5)28) var1 = (0 1))
marginsplot
```

The resulting image looks like this.

```{r out.width="100%", echo=FALSE}
knitr::include_graphics("vignette-stata-1.png", dpi = 72)
```

### The ggeffects plot

When we use _ggeffects_, the plot slightly differs.
```{r}
library(ggeffects)
ggpredict(m, c("var2", "var1")) %>% plot()
```

As we can see, the confidence intervals in the Stata plot are outside the plausible range of `[0, 1]`, which means that the confidence intervals from the Stata output include probabilities above 1 and below 0, while `ggpredict()` computes confidence intervals _within_ the possible range.

## Conclusion

It seems like Stata is getting the confidence intervals wrong. Predictions and standard errors returned in Stata are on the (transformed) response scale. Obviously, the confidence intervals are then computed by `estimate +/- 1.96 * standard error`, which may lead to confidence intervals that are out of reasonable bounds (e.g. above 1 or below 0 for predicted probabilities).

The _transformed estimate_ (on the response scale) is always between 0 and 1, and the same is true for the _transformed standard errors_. However, adding or subtracting approx. 2 * _transformed_ SE to the _transformed_ estimate no longer ensures that the confidence intervals are within the correct range.

The more precise way to do the calculation is to calculate estimates, standard errors and confidence intervals on the (untransformed) scale of the linear predictor and then back-transform.
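This "link scale first, then back-transform" approach can be sketched in base R for the model above (the same data is re-simulated so the snippet is self-contained):

```r
# Compute intervals on the scale of the linear predictor, then map them
# through the inverse link (plogis) - the bounds then stay within [0, 1].
set.seed(5)
data <- data.frame(
  outcome = rbinom(100, 1, 0.5),
  var1 = rbinom(100, 1, 0.1),
  var2 = rnorm(100, 10, 7)
)
m <- glm(outcome ~ var1 * var2, data = data, family = binomial(link = "logit"))

nd <- data.frame(var1 = 1, var2 = c(-8, 10, 28))
pr <- predict(m, newdata = nd, type = "link", se.fit = TRUE)

conf.low  <- plogis(pr$fit - 1.96 * pr$se.fit)
conf.high <- plogis(pr$fit + 1.96 * pr$se.fit)
```

Because `plogis()` maps the whole real line into `(0, 1)`, no interval bound can fall outside the range of predicted probabilities.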
ggeffects/vignettes/technical_differencepredictemmeans.Rmd0000644000176200001440000000532413614010324023706 0ustar liggesusers---
title: "Technical Details: Difference between ggpredict() and ggemmeans()"
author: "Daniel Lüdecke"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Technical Details: Difference between ggpredict() and ggemmeans()}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r set-options, echo = FALSE}
knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 3.5, message = FALSE, warning = FALSE)
options(width = 800)
if (!requireNamespace("magrittr", quietly = TRUE) ||
    !requireNamespace("sjmisc", quietly = TRUE)) {
  knitr::opts_chunk$set(eval = FALSE)
}
```

`ggpredict()` and `ggemmeans()` compute predicted values for all possible levels or values from a model's predictors. Basically, `ggpredict()` wraps the `predict()`-method for the related model, while `ggemmeans()` wraps the `emmeans()`-method from the **emmeans**-package. Both `ggpredict()` and `ggemmeans()` do some data-preparation to bring the data in shape for the `newdata`-argument (`predict()`) resp. the `at`-argument (`emmeans()`). It is recommended to read the [general introduction](ggeffects.html) first, if you haven't done this yet.

For models without categorical predictors, the results from `ggpredict()` and `ggemmeans()` are identical (except some _slight_ differences in the associated confidence intervals, which are, however, negligible).

```{r}
library(magrittr)
library(ggeffects)
data(efc)
fit <- lm(barthtot ~ c12hour + neg_c_7, data = efc)

ggpredict(fit, terms = "c12hour")
ggemmeans(fit, terms = "c12hour")
```

As can be seen, the continuous predictor `neg_c_7` is held constant at its mean value, 11.83. For categorical predictors, `ggpredict()` and `ggemmeans()` behave differently.
While `ggpredict()` uses the reference level of each categorical predictor to hold it constant, `ggemmeans()` - like `ggeffect()` - averages over the proportions of the categories of factors.

```{r}
library(sjmisc)
data(efc)
efc$e42dep <- to_label(efc$e42dep)
fit <- lm(barthtot ~ c12hour + neg_c_7 + e42dep, data = efc)

ggpredict(fit, terms = "c12hour")
ggemmeans(fit, terms = "c12hour")
```

In this case, one would obtain the same results for `ggpredict()` and `ggemmeans()` again, if `condition` is used to define specific levels at which variables, in our case the factor `e42dep`, should be held constant.

```{r}
ggpredict(fit, terms = "c12hour")
ggemmeans(fit, terms = "c12hour", condition = c(e42dep = "independent"))
```

Creating plots is as simple as described in the vignette [Plotting Marginal Effects](introduction_plotmethod.html).

```{r}
ggemmeans(fit, terms = c("c12hour", "e42dep")) %>% plot()
```
ggeffects/vignettes/practical_robustestimation.Rmd0000644000176200001440000000573113614010221022314 0ustar liggesusers---
title: "Practical example: (Cluster) Robust Standard Errors"
author: "Daniel Lüdecke"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Practical example: (Cluster) Robust Standard Errors}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r set-options, echo = FALSE}
knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 5, message = FALSE, warning = FALSE)
if (!requireNamespace("sandwich", quietly = TRUE) ||
    !requireNamespace("clubSandwich", quietly = TRUE)) {
  knitr::opts_chunk$set(eval = FALSE)
}
```

This vignette demonstrates how to compute confidence intervals based on (cluster) robust variance-covariance matrices for standard errors. First, we load the required packages and simulate a sample data set with clustered observations, following the example from `?clubSandwich::vcovCR`, and fit a linear model.
```{r} library(ggeffects) set.seed(123) # example taken from "?clubSandwich::vcovCR" m <- 8 cluster <- factor(rep(LETTERS[1:m], 3 + rpois(m, 5))) n <- length(cluster) X <- matrix(rnorm(3 * n), n, 3) nu <- rnorm(m)[cluster] e <- rnorm(n) y <- X %*% c(.4, .3, -.3) + nu + e dat <- data.frame(y, X, cluster, row = 1:n) # fit linear model model <- lm(y ~ X1 + X2 + X3, data = dat) ``` ## Predictions with normal standard errors In this example, we use the normal standard errors, as returned by `predict()`, to compute confidence intervals. ```{r message = TRUE} ggpredict(model, "X1") ``` ```{r message = FALSE} me <- ggpredict(model, "X1") plot(me) ``` ## Predictions with HC-estimated standard errors Now, we use `sandwich::vcovHC()` to estimate heteroskedasticity-consistent standard errors. To do so, first the function name, `vcovHC()`, must be supplied to the `vcov.fun`-argument. `sandwich::vcovHC()`, in turn, has different types of estimation. This must be specified in `vcov.type`. ```{r message = TRUE} ggpredict(model, "X1", vcov.fun = "vcovHC", vcov.type = "HC0") ``` ```{r message = FALSE} me <- ggpredict(model, "X1", vcov.fun = "vcovHC", vcov.type = "HC0") plot(me) ``` ## Predictions with cluster-robust standard errors The last example shows how to define cluster-robust standard errors. These are based on `clubSandwich::vcovCR()`. Thus, `vcov.fun = "vcovCR"` is always required when estimating cluster robust standard errors. `clubSandwich::vcovCR()` has also different estimation types, which must be specified in `vcov.type`. 
Furthermore, `clubSandwich::vcovCR()` *requires* the `cluster`-argument, which must be specified in `vcov.args`: ```{r message = TRUE} ggpredict( model, "X1", vcov.fun = "vcovCR", vcov.type = "CR0", vcov.args = list(cluster = dat$cluster) ) ``` ```{r message = FALSE} me <- ggpredict( model, "X1", vcov.fun = "vcovCR", vcov.type = "CR0", vcov.args = list(cluster = dat$cluster) ) plot(me) ``` ggeffects/vignettes/vignette-stata-1.png0000644000176200001440000016217613451124203020032 0ustar liggesusersPNG  IHDRXW pHYs  ~ IDATxXչOgggͺnJV7))""b\)i)4I4TiJ#4R1p O$H"MiH늸l62;}8ϸ? ,>3gί9ynG@ T@ n&DaEC!@ V4DaEC!@ V4DaEC!@ V4DaEC!@ V4DaEC!@ V4DaEC!@ V4DaEC!@ V4DaEC!@ V4&333qEQbxΐ!EDIf- qΌ|B/MיUԡŰ(YZreXH$GŲ,EQ9?~p]Vbv:a%~@X,L&]333l_|qcB:swy M!pkںsGy]_yNo>X?q``H"\O?==s233sRO;~ڵ1a@@Q{_4/\PWWwk׾555wyp|}o[uuu~~]wݵ$nnn޷o߭ gT!{رcV5"""77KoFKK\.Bv@竨hoo߳gֹE"~o}[._\^^>99q\HHH^^T*a-j',!TDDB(99l6 ò,˲v=---,,bJe4q9⇁Ɯy!3%"""55n!N\"ad2ٱc<ɪ,BZE &P{{pfy߾}GӧOHד J8N(Wyy9MP'OB`dUsmyy9T> !!EQڭ[T~Lv9ZbGRq4|AB&ĉJanSo/233Z-±AOo_em+N!uΉl/Y5aaaCCC BR<.i"c}jj!vxxX*VTTy.+Isֲ]:]{{;EQ BXݹsgxxd9--J/]TQQAQԶm陜rСCv(|V5%%%??e޽{7BHV744 VUUd2Roڴi۶m.#df䔗a-Vu333,3p }(,h\߿~XYݾ};B(22ԩS===7o(n_xQTfffB'ȯPXXgN<R###P]fǎv^d###V^(tpp1B(88㽽iiiEA`s"$P(z=>S__pqqT* N|pN[,xFFF222fsmmFFGG8Y7b$''uvv[MQ3DD@~$$$`3Lحm6FBR ‚F!>v---Vluر'NߔDR}}=MBzEL`ZV+\Ahw {ݻ711! (x}ckh,g(ꂫV511aK.g%4ª*:;;!?}}}4MBQYY ZZZ i>9t0(<Lw^1дdr9'77WBѲ Ecc#$0---v^PNO: nnP*~;H!111"k8[ZZ'pV {zzᆱWvvv&%%Z TAAA 8BH!SO?tvv6jժ˗/dd8*ܲd2h2`(8B*H$ڼys=wN7>>RT2<<сfMX|B/h6m܌eơt|+B0<5gjuuup\>66+szHHHEt) YRR/ӧiu'@ѣj?H$iii8x0t_"h۶mǎ y׮]J299. H0_WqY'OJ0f5>|866688\?ǯYL'@TJ4t>JKKj/"œknn޽{uXxJ1b111?jkk)@vRGm6J` `+ *xPLL ,GQ5<<|iܹnzF100mɩ)ɔK.] 
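As a rough base-R sketch of what happens under the hood: the prediction standard errors are `sqrt(diag(X V X'))`, so supplying a different variance-covariance matrix `V` (e.g. from **sandwich** or **clubSandwich**) changes the width of the intervals. The model and data below are simple stand-ins, not this vignette's example:

```r
# Prediction SEs from a variance-covariance matrix V: se = sqrt(diag(X V X')).
# vcov(fit) is used here; a robust matrix could be plugged in instead.
# mtcars is only stand-in data.
fit <- lm(mpg ~ hp + wt, data = mtcars)
nd <- data.frame(hp = c(100, 150), wt = mean(mtcars$wt))

X <- model.matrix(~ hp + wt, data = nd)  # design matrix for the new data
V <- vcov(fit)                           # <- replace by a robust estimate
se <- sqrt(diag(X %*% V %*% t(X)))

predicted <- as.vector(X %*% coef(fit))
conf.low  <- predicted - 1.96 * se
conf.high <- predicted + 1.96 * se
```

With the model's own `vcov()`, these standard errors reproduce what `predict(fit, nd, se.fit = TRUE)` returns; only the choice of `V` differs for the robust intervals.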
has_facets <- .obj_has_name(x, "facet") && length(unique(x$facet)) > 1 has_panel <- .obj_has_name(x, "panel") && length(unique(x$panel)) > 1 has_response <- .obj_has_name(x, "response.level") && length(unique(x$response.level)) > 1 has_se <- .obj_has_name(x, "std.error") cat("\n") lab <- attr(x, "title", exact = TRUE) if (!is.null(lab)) insight::print_color(paste0(sprintf("# %s", lab), "\n", collapse = ""), "blue") lab <- attr(x, "x.title", exact = TRUE) if (!is.null(lab)) insight::print_color(paste0(sprintf("# x = %s", lab), "\n", collapse = ""), "blue") consv <- attr(x, "constant.values") terms <- attr(x, "terms") ci.lvl <- attr(x, "ci.lvl") # fix terms for survival models a1 <- attr(x, "fitfun", exact = TRUE) a2 <- attr(x, "y.title", exact = TRUE) if (!is.null(a1) && !is.null(a2) && a1 == "coxph" && !(a2 == "Risk Score")) terms <- c("time", terms) x <- .round_numeric(x, digits = digits) # if we have groups, show n rows per group .n <- 1 # justify terms tl <- length(terms) if (tl > 2) terms[2:tl] <- format(terms[2:tl], justify = "right") if (has_groups) { .n <- .n_distinct(x$group) if (!is.null(terms) && length(terms) >= 2) { vals <- sprintf("%s = %s", terms[2], as.character(x$group)) lvls <- unique(vals) x$group <- factor(vals, levels = lvls) } } if (has_facets) { .n <- .n * .n_distinct(x$facet) if (!is.null(terms) && length(terms) >= 3) { x$facet <- sprintf("%s = %s", terms[3], as.character(x$facet)) } } if (has_panel) { .n <- .n * .n_distinct(x$panel) if (!is.null(terms) && length(terms) >= 4) { x$panel <- sprintf("%s = %s", terms[4], as.character(x$panel)) } } if (has_response) { .n <- .n * .n_distinct(x$response.level) vals <- sprintf("Response Level = %s", as.character(x$response.level)) lvls <- unique(vals) x$response.level <- ordered(vals, levels = lvls) } # make sure that by default not too many rows are printed if (missing(n)) { n <- if (.n >= 6) 4 else if (.n >= 4 & .n < 6) 5 else if (.n >= 2 & .n < 4) 6 else 8 } if (!has_groups) { if
(!has_response) { cat("\n") if (.obj_has_name(x, "group")) x <- .remove_column(x, "group") # print.data.frame(x[.get_sample_rows(x, n), ], ..., row.names = FALSE, quote = FALSE) .print_block(x, n, digits, ci.lvl, ...) } else { x$.nest <- tapply(x$predicted, list(x$response.level), NULL) xx <- split(x, x$.nest) for (i in xx) { insight::print_color(sprintf("\n# %s\n\n", i$response.level[1]), "red") .print_block(i, n, digits, ci.lvl, ...) } } } else if (has_groups && !has_facets) { if (!has_response) { x$.nest <- tapply(x$predicted, list(x$group), NULL) xx <- split(x, x$.nest) for (i in xx) { insight::print_color(sprintf("\n# %s\n\n", i$group[1]), "red") .print_block(i, n, digits, ci.lvl, ...) } } else { x$.nest <- tapply(x$predicted, list(x$response.level, x$group), NULL) xx <- split(x, x$.nest) for (i in xx) { insight::print_color(sprintf("\n# %s\n# %s\n\n", i$response.level[1], i$group[1]), "red") .print_block(i, n, digits, ci.lvl, ...) } } } else if (has_groups && has_facets && !has_panel) { if (!has_response) { x$.nest <- tapply(x$predicted, list(x$group, x$facet), NULL) xx <- split(x, x$.nest) for (i in xx) { insight::print_color(sprintf("\n# %s\n# %s\n\n", i$group[1], i$facet[1]), "red") .print_block(i, n, digits, ci.lvl, ...) } } else { x$.nest <- tapply(x$predicted, list(x$response.level, x$group, x$facet), NULL) xx <- split(x, x$.nest) for (i in xx) { insight::print_color(sprintf("\n# %s\n# %s\n# %s\n\n", i$response.level[1], i$group[1], i$facet[1]), "red") .print_block(i, n, digits, ci.lvl, ...) } } } else { if (!has_response) { x$.nest <- tapply(x$predicted, list(x$group, x$facet, x$panel), NULL) xx <- split(x, x$.nest) for (i in xx) { insight::print_color(sprintf("\n# %s\n# %s\n# %s\n\n", i$group[1], i$facet[1], i$panel[1]), "red") .print_block(i, n, digits, ci.lvl, ...) 
} } else { x$.nest <- tapply(x$predicted, list(x$response.level, x$group, x$facet, x$panel), NULL) xx <- split(x, x$.nest) for (i in xx) { insight::print_color(sprintf("\n# %s\n# %s\n# %s\n# %s\n\n", i$response.level[1], i$group[1], i$facet[1], i$panel[1]), "red") .print_block(i, n, digits, ci.lvl, ...) } } } cv <- lapply( consv, function(.x) { if (is.numeric(.x)) sprintf("%.2f", .x) else as.character(.x) }) if (!.is_empty(cv)) { cv.names <- names(cv) cv.space <- max(nchar(cv.names)) # ignore this string when determining the maximum length poplev <- which(cv %in% c("NA (population-level)", "0 (population-level)")) if (!.is_empty(poplev)) mcv <- cv[-poplev] else mcv <- cv if (!.is_empty(mcv)) cv.space2 <- max(nchar(mcv)) else cv.space2 <- 0 insight::print_color(paste0( "\nAdjusted for:\n", paste0(sprintf("* %*s = %*s", cv.space, cv.names, cv.space2, cv), collapse = "\n") ), "blue") cat("\n") } cat("\n") fitfun <- attr(x, "fitfun", exact = TRUE) if (has_se && !is.null(fitfun) && fitfun != "lm") { message("Standard errors are on link-scale (untransformed).") } predint <- attr(x, "prediction.interval", exact = TRUE) if (!is.null(predint) && isTRUE(predint)) { message("Intervals are prediction intervals.") } } .get_sample_rows <- function(x, n) { nr.of.rows <- seq_len(nrow(x)) if (n < length(nr.of.rows)) { sample.rows <- round(c( min(nr.of.rows), stats::quantile(nr.of.rows, seq_len(n - 2) / n), max(nr.of.rows) )) } else { sample.rows <- nr.of.rows } sample.rows } #' @importFrom insight format_table format_ci .print_block <- function(i, n, digits, ci.lvl, ...)
{ i <- i[setdiff(colnames(i), c("group", "facet", "panel", "response.level", ".nest"))] # print.data.frame(, ..., row.names = FALSE, quote = FALSE) dd <- i[.get_sample_rows(i, n), ] if ("conf.low" %in% colnames(dd) && "conf.high" %in% colnames(dd)) { dd$CI <- insight::format_ci(dd$conf.low, dd$conf.high, digits = digits, width = "auto") dd$CI <- gsub("95% CI ", "", dd$CI, fixed = TRUE) if (is.null(ci.lvl)) ci.lvl <- .95 colnames(dd)[which(colnames(dd) == "CI")] <- sprintf("%g%% CI", 100 * ci.lvl) dd$conf.low <- NULL dd$conf.high <- NULL } if ("std.error" %in% colnames(dd)) { colnames(dd)[which(colnames(dd) == "std.error")] <- "SE" } colnames(dd)[which(colnames(dd) == "predicted")] <- "Predicted" cat(insight::format_table(dd, digits = digits, protect_integers = TRUE)) # print.data.frame(dd, ..., quote = FALSE, row.names = FALSE) } ggeffects/R/get_predictions_gee.R0000644000176200001440000000056413523224123016537 0ustar liggesusers#' @importFrom insight get_data get_predictions_gee <- function(model, terms, ...) { prdat <- stats::predict( model, type = "response", ... ) mf <- insight::get_data(model)[, terms, drop = FALSE] # copy predictions mf$predicted <- as.vector(prdat) # No CI mf$conf.low <- NA mf$conf.high <- NA unique(mf) } ggeffects/R/get_predictions_gam.R0000644000176200001440000000707613545114160016553 0ustar liggesusersget_predictions_gam <- function(model, fitfram, ci.lvl, linv, type, ...) { se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 mi <- insight::model_info(model) if (!mi$is_zero_inflated && type %in% c("fe.zi", "re.zi")) { type <- "fe" message(sprintf("Model has no zero-inflation part. 
Changing prediction-type to \"%s\".", type)) prdat <- stats::predict( model, newdata = fitfram, type = "link", se.fit = se ) if (type == "fe.zi") { # check if number of simulations was provided add.args <- lapply(match.call(expand.dots = F)$`...`, function(x) x) if ("nsim" %in% names(add.args)) nsim <- eval(add.args[["nsim"]]) else nsim <- 1000 # simulate predictions, for standard errors / CI prdat.sim <- .get_zeroinfl_gam_predictions(model = model, newdata = fitfram, nsim = nsim) # make sure we have only predicted values as vector, no SE if (.obj_has_name(prdat, "fit")) { prdat <- list( cond = as.vector(prdat$fit[, 1]), zi = as.vector(prdat$fit[, 2]) ) } else { prdat <- list( cond = as.vector(prdat[, 1]), zi = as.vector(prdat[, 2]) ) } prdat <- as.vector(exp(prdat$cond) * (1 - stats::plogis(prdat$zi))) # success? if (is.null(prdat.sim) || inherits(prdat.sim, c("error", "simpleError"))) { insight::print_color("Error: Confidence intervals could not be computed.\n", "red") fitfram$predicted <- prdat fitfram$conf.low <- NA fitfram$conf.high <- NA } else { sims <- exp(prdat.sim$cond) * (1 - stats::plogis(prdat.sim$zi)) fitfram$predicted <- prdat fitfram$std.error <- apply(sims, 1, stats::sd) conf.low <- apply(sims, 1, stats::quantile, probs = 1 - ci) conf.high <- apply(sims, 1, stats::quantile, probs = ci) ci.range <- (conf.high - conf.low) / 2 # fix negative CI ci.low <- fitfram$predicted - ci.range neg.ci <- ci.low < 0 if (any(neg.ci)) { ci.range[neg.ci] <- ci.range[neg.ci] - abs(ci.low[neg.ci]) - 1e-05 fitfram$std.error[neg.ci] <- fitfram$std.error[neg.ci] - ((abs(ci.low[neg.ci]) + 1e-05) / stats::qnorm(ci)) } fitfram$conf.low <- fitfram$predicted - ci.range fitfram$conf.high <- fitfram$predicted + ci.range if (.obj_has_name(fitfram, "std.error")) { # copy standard errors attr(fitfram, "std.error") <- fitfram$std.error fitfram <- .remove_column(fitfram, "std.error") } } } else { if (mi$is_zero_inflated) { if (.obj_has_name(prdat, "fit")) { prdat$fit <- 
as.vector(prdat$fit[, 1]) prdat$se.fit <- as.vector(prdat$se.fit[, 1]) } else { prdat <- as.vector(prdat[, 1]) } linv <- exp } # did user request standard errors? if yes, compute CI if (se) { # copy predictions fitfram$predicted <- linv(prdat$fit) # calculate CI fitfram$conf.low <- linv(prdat$fit - stats::qnorm(ci) * prdat$se.fit) fitfram$conf.high <- linv(prdat$fit + stats::qnorm(ci) * prdat$se.fit) # copy standard errors attr(fitfram, "std.error") <- prdat$se.fit } else { # copy predictions fitfram$predicted <- linv(as.vector(prdat)) # no CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } } ggeffects/R/vcov.R0000644000176200001440000002176613604103340013516 0ustar liggesusers#' @title Calculate variance-covariance matrix for marginal effects #' @name vcov #' #' @description Returns the variance-covariance matrix for the predicted values from \code{object}. #' #' @param object An object of class \code{"ggeffects"}, as returned by \code{ggpredict()}. #' @param ... Currently not used. #' @inheritParams ggpredict #' #' @return The variance-covariance matrix for the predicted values from \code{object}. #' #' @details The returned matrix has as many rows (and columns) as possible combinations #' of predicted values from the \code{ggpredict()} call. For example, if there #' are two variables in the \code{terms}-argument of \code{ggpredict()} with 3 and 4 #' levels each, there will be 3*4 combinations of predicted values, so the returned #' matrix has a 12x12 dimension. In short, \code{nrow(object)} is always equal to #' \code{nrow(vcov(object))}. See also 'Examples'. 
#' #' @examples #' data(efc) #' model <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc) #' result <- ggpredict(model, c("c12hour [meansd]", "c161sex")) #' #' vcov(result) #' #' # compare standard errors #' sqrt(diag(vcov(result))) #' as.data.frame(result) #' #' # only two predicted values, no further terms #' # vcov() returns a 2x2 matrix #' result <- ggpredict(model, "c161sex") #' vcov(result) #' #' # 2 levels for c161sex multiplied by 3 levels for c172code #' # result in 6 combinations of predicted values #' # thus vcov() returns a 6x6 matrix #' result <- ggpredict(model, c("c161sex", "c172code")) #' vcov(result) #' #' @importFrom stats model.matrix terms formula #' @importFrom insight find_random clean_names find_parameters get_varcov #' @export vcov.ggeffects <- function(object, vcov.fun = NULL, vcov.type = NULL, vcov.args = NULL, ...) { model <- tryCatch({ get(attr(object, "model.name"), envir = parent.frame()) }, error = function(e) { NULL } ) if (is.null(model)) { warning("Can't access original model to compute variance-covariance matrix of predictions.", call. = FALSE) return(NULL) } model_frame <- insight::get_data(model) # check random effect terms. 
We can't compute SE if data has # factors with only one level, however, if user conditions on # random effects and only conditions on one level, it is indeed # possible to calculate SE - so, ignore random effects for the # check of one-level-factors only random_effect_terms <- insight::find_random(model, split_nested = TRUE, flatten = TRUE) # we can't condition on categorical variables condition <- attr(object, "condition") if (!is.null(condition)) { cn <- names(condition) cn.factors <- sapply(cn, function(.x) is.factor(model_frame[[.x]]) && !(.x %in% random_effect_terms)) condition <- condition[!cn.factors] if (.is_empty(condition)) condition <- NULL } const.values <- attr(object, "constant.values") const.values <- c(condition, unlist(const.values[sapply(const.values, is.numeric)])) terms <- attr(object, "original.terms") # copy data frame with predictions newdata <- .data_grid( model, model_frame, terms, value_adjustment = "mean", factor_adjustment = FALSE, show_pretty_message = FALSE, condition = const.values ) # make sure we have enough values to compute CI nlevels_terms <- sapply( colnames(newdata), function(.x) !(.x %in% random_effect_terms) && is.factor(newdata[[.x]]) && nlevels(newdata[[.x]]) == 1 ) if (any(nlevels_terms)) { not_enough <- colnames(newdata)[which(nlevels_terms)[1]] remove_lvl <- paste0("[", gsub(pattern = "(.*)\\[(.*)\\]", replacement = "\\2", x = terms[which(.clean_terms(terms) == not_enough)]), "]", collapse = "") stop(sprintf("`%s` does not have enough factor levels. Try to remove `%s`.", not_enough, remove_lvl), call. = TRUE) } # add response to newdata. For models fitted with "glmmPQL", # the response variable is renamed internally to "zz". 
if (inherits(model, "glmmPQL")) { new_response <- 0 names(new_response) <- "zz" } else { fr <- insight::find_response(model, combine = FALSE) new_response <- rep(0, length.out = length(fr)) names(new_response) <- fr } new_response <- new_response[setdiff(names(new_response), colnames(newdata))] newdata <- cbind(as.list(new_response), newdata) # clean terms from brackets terms <- .clean_terms(terms) # sort data by grouping levels, so we have the correct order # to slice data afterwards if (length(terms) > 2) { trms <- terms[3] newdata <- newdata[order(newdata[[trms]]), ] } if (length(terms) > 1) { trms <- terms[2] newdata <- newdata[order(newdata[[trms]]), ] } trms <- terms[1] newdata <- newdata[order(newdata[[trms]]), ] # rownames were resorted as well, which causes troubles in model.matrix rownames(newdata) <- NULL .vcov_helper(model, model_frame, get_predict_function(model), newdata, vcov.fun, vcov.type, vcov.args, terms) } .vcov_helper <- function(model, model_frame, model_class, newdata, vcov.fun, vcov.type, vcov.args, terms) { # check if robust vcov-matrix is requested if (!is.null(vcov.fun)) { if (vcov.type %in% c("CR0", "CR1", "CR1p", "CR1S", "CR2", "CR3")) { if (!requireNamespace("clubSandwich", quietly = TRUE)) { stop("Package `clubSandwich` needed for this function. Please install and try again.") } robust_package <- "clubSandwich" } else { if (!requireNamespace("sandwich", quietly = TRUE)) { stop("Package `sandwich` needed for this function. 
Please install and try again.") } robust_package <- "sandwich" } # compute robust standard errors based on vcov if (robust_package == "sandwich") { vcov.fun <- get(vcov.fun, asNamespace("sandwich")) vcm <- as.matrix(do.call(vcov.fun, c(list(x = model, type = vcov.type), vcov.args))) } else { vcov.fun <- clubSandwich::vcovCR vcm <- as.matrix(do.call(vcov.fun, c(list(obj = model, type = vcov.type), vcov.args))) } } else { # get variance-covariance-matrix, depending on model type vcm <- insight::get_varcov(model, component = "conditional") } model_terms <- tryCatch({ stats::terms(model) }, error = function(e) { insight::find_formula(model)$conditional }) # code to compute se of prediction taken from # http://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#predictions-andor-confidence-or-prediction-intervals-on-predictions mm <- stats::model.matrix(model_terms, newdata) # here we need to fix some term names, so variable names match the column # names from the model matrix. NOTE that depending on the type of contrasts, # the naming column names for factors differs: for "contr.sum", column names # of factors are named "Species1", "Species2", etc., while for "contr.treatment", # column names are "Speciesversicolor", "Speciesvirginica", etc. 
contrs <- attr(mm, "contrasts") if (!.is_empty(contrs)) { # check which contrasts are actually in terms-argument, # and which terms also appear in contrasts keep.c <- names(contrs) %in% terms rem.t <- terms %in% names(contrs) # only iterate required terms and contrasts contrs <- contrs[keep.c] terms <- terms[!rem.t] add.terms <- unlist(mapply(function(.x, .y) { f <- model_frame[[.y]] if (.x %in% c("contr.sum", "contr.helmert")) sprintf("%s%s", .y, 1:(nlevels(f) - 1)) else if (.x == "contr.poly") sprintf("%s%s", .y, c(".L", ".Q", ".C")) else sprintf("%s%s", .y, levels(f)[2:nlevels(f)]) }, contrs, names(contrs), SIMPLIFY = FALSE)) terms <- c(terms, add.terms) } # we need all this intersection-stuff to reduce the model matrix and remove # duplicated entries. Else, especially for mixed models, we often run into # memory allocation problems. The problem is to find the correct rows of # the matrix that should be kept, and only take those columns of the # matrix for which terms we need standard errors. 
model_matrix_data <- as.data.frame(mm) rows_to_keep <- as.numeric(rownames(unique(model_matrix_data[intersect(colnames(model_matrix_data), terms)]))) # for poly-terms, we have no match, so fix this here if (.is_empty(rows_to_keep) || !all(terms %in% colnames(model_matrix_data))) { inters <- which(insight::clean_names(colnames(model_matrix_data)) %in% terms) rows_to_keep <- as.numeric(rownames(unique(model_matrix_data[inters]))) } mm <- mm[rows_to_keep, ] # check class of fitted model, to make sure we have just one class-attribute # (while "inherits()" may return multiple attributes) model_class <- get_predict_function(model) if (!is.null(model_class) && model_class %in% c("polr", "mixor", "multinom", "brmultinom", "bracl", "fixest")) { keep <- intersect(colnames(mm), colnames(vcm)) vcm <- vcm[keep, keep] mm <- mm[, keep] } mm %*% vcm %*% t(mm) } ggeffects/R/utils_colors.R0000644000176200001440000000241213565462052015263 0ustar liggesusers# get color palette .get_colors <- function(geom.colors, collen, continuous) { # check for correct color argument if (!is.null(geom.colors)) { geom.colors <- tolower(geom.colors) # check for color brewer palette if (geom.colors[1] %in% names(ggeffects_colors)) { geom.colors <- ggeffects_pal(palette = geom.colors[1], n = collen) } else if (geom.colors[1] == "bw") { geom.colors <- rep("black", times = collen) } else if (geom.colors[1] == "gs") { geom.colors <- ggeffects_pal(palette = "greyscale", n = collen) # do we have the correct number of colors? } else if (length(geom.colors) > 1 & continuous) { # preserve colors as is for later use in gradient scale return(geom.colors) } else if (length(geom.colors) > collen) { # shorten palette geom.colors <- geom.colors[1:collen] } else if (length(geom.colors) < collen) { # warn user about wrong color palette warning(sprintf("Insufficient length of color palette provided. %i color values needed.", collen), call.
= F) # set default palette geom.colors <- ggeffects_pal(palette = "set1", n = collen) } } else { geom.colors <- ggeffects_pal(palette = "set1", n = collen) } geom.colors } ggeffects/R/utils_ggpredict.R0000644000176200001440000000230213545114055015723 0ustar liggesusers#' @importFrom insight find_terms .back_transform_response <- function(model, mydf, back.transform) { # check if outcome is log-transformed, and if so, # back-transform predicted values to response scale rv <- insight::find_terms(model)[["response"]] if (any(grepl("log\\((.*)\\)", rv))) { if (back.transform) { # do we have log-log models? if (grepl("log\\(log\\((.*)\\)\\)", rv)) { mydf$predicted <- exp(exp(mydf$predicted)) if (.obj_has_name(mydf, "conf.low") && .obj_has_name(mydf, "conf.high")) { mydf$conf.low <- exp(exp(mydf$conf.low)) mydf$conf.high <- exp(exp(mydf$conf.high)) } } else { mydf$predicted <- exp(mydf$predicted) if (.obj_has_name(mydf, "conf.low") && .obj_has_name(mydf, "conf.high")) { mydf$conf.low <- exp(mydf$conf.low) mydf$conf.high <- exp(mydf$conf.high) } } message("Model has log-transformed response. Back-transforming predictions to original response scale. Standard errors are still on the log-scale.") } else { message("Model has log-transformed response. Predictions are on log-scale.") } } mydf } ggeffects/R/ggemmeans.R0000644000176200001440000001251713614007013014477 0ustar liggesusers#' @importFrom stats confint na.omit #' @importFrom sjlabelled get_labels as_numeric #' @importFrom insight find_response get_data model_info #' @rdname ggpredict #' @export ggemmeans <- function(model, terms, ci.lvl = .95, type = "fe", typical = "mean", condition = NULL, back.transform = TRUE, ...) { if (!requireNamespace("emmeans")) { stop("Package `emmeans` required to compute marginal effects with `ggemmeans()`.", call. 
= FALSE) } # check arguments type <- match.arg(type, choices = c("fe", "fe.zi", "re", "re.zi")) model_name <- deparse(substitute(model)) # check if terms are a formula if (!missing(terms) && !is.null(terms) && inherits(terms, "formula")) { terms <- all.vars(terms) } # for gamm/gamm4 objects, we have a list with two items, mer and gam # extract just the mer-part then if (is.gamm(model) || is.gamm4(model)) model <- model$gam # check model family, do we have count model? model_info <- .get_model_info(model) # get model frame original_model_frame <- model_frame <- insight::get_data(model) # check terms argument terms <- .check_vars(terms, model) cleaned_terms <- .clean_terms(terms) data_grid <- .data_grid( model = model, model_frame = model_frame, terms = terms, value_adjustment = typical, condition = condition, emmeans.only = TRUE ) # for zero-inflated mixed models, we need some extra handling if (model_info$is_zero_inflated && inherits(model, c("glmmTMB", "MixMod")) && type == "fe.zi") { preds <- .emmeans_mixed_zi(model, data_grid, cleaned_terms, ...) additional_dot_args <- lapply(match.call(expand.dots = F)$`...`, function(x) x) if ("nsim" %in% names(additional_dot_args)) nsim <- eval(additional_dot_args[["nsim"]]) else nsim <- 1000 prediction_data <- .ggemmeans_zi_predictions( model = model, model_frame = model_frame, preds = preds, ci.lvl = ci.lvl, terms = terms, cleaned_terms = cleaned_terms, value_adjustment = typical, condition = condition, nsim = nsim, type = type ) pmode <- "response" } else { # get prediction mode, i.e. at which scale predicted # values should be returned pmode <- .get_prediction_mode_argument(model, model_info, type) prediction_data <- .emmeans_prediction_data(model, data_grid, cleaned_terms, ci.lvl, pmode, type, model_info, ...) 
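# A rough sketch (not package code) of what ggemmeans() delegates to for the
# simple case: predictions at a reference grid can be obtained directly from
# emmeans. The model and the adjustment value below are made up for
# illustration only:
#
#   m <- lm(Sepal.Length ~ Species + Petal.Width, data = iris)
#   emmeans::emmeans(m, specs = "Species", at = list(Petal.Width = 1))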
# fix gam here if (inherits(model, "gam") && model_info$is_zero_inflated) { prediction_data$predicted <- exp(prediction_data$predicted) prediction_data$conf.low <- exp(prediction_data$conf.low) prediction_data$conf.high <- exp(prediction_data$conf.high) } } # return NULL on error if (is.null(prediction_data)) return(NULL) attr(prediction_data, "continuous.group") <- attr(data_grid, "continuous.group") if (model_info$is_ordinal | model_info$is_categorical) { colnames(prediction_data)[1] <- "response.level" } result <- .post_processing_predictions( model = model, prediction_data = prediction_data, original_model_frame = original_model_frame, cleaned_terms = cleaned_terms ) # apply link inverse function linv <- insight::link_inverse(model) if (!is.null(linv) && (inherits(model, "lrm") || pmode == "link" || (inherits(model, "MixMod") && type != "fe.zi"))) { result$predicted <- linv(result$predicted) result$conf.low <- linv(result$conf.low) result$conf.high <- linv(result$conf.high) } # check if outcome is log-transformed, and if so, # back-transform predicted values to response scale result <- .back_transform_response(model, result, back.transform) attr(result, "model.name") <- model_name # add raw data as well attr(result, "rawdata") <- .get_raw_data(model, original_model_frame, cleaned_terms) .post_processing_labels( model = model, result = result, original_model_frame = original_model_frame, data_grid = data_grid, cleaned_terms = cleaned_terms, original_terms = terms, model_info = model_info, type = type, prediction.interval = attr(prediction_data, "prediction.interval", exact = TRUE), at_list = .data_grid( model = model, model_frame = original_model_frame, terms = terms, value_adjustment = typical, condition = condition, show_pretty_message = FALSE, emmeans.only = TRUE ), ci.lvl = ci.lvl ) } .get_prediction_mode_argument <- function(model, model_info, type) { if (inherits(model, "betareg")) "response" else if (inherits(model, c("polr", "clm", "clmm", "clm2", 
"rms"))) "prob" else if (inherits(model, "lmerMod")) "asymptotic" else if (inherits(model, "MixMod")) "fixed-effects" else if (inherits(model, "gls")) "satterthwaite" else if (model_info$is_ordinal | model_info$is_categorical) "prob" else if (model_info$is_zero_inflated && type %in% c("fe", "re") && inherits(model, "glmmTMB")) "link" else if (model_info$is_zero_inflated && type %in% c("fe.zi", "re.zi")) "response" else if (model_info$is_zero_inflated && type %in% c("fe", "re")) "count" else "link" } ggeffects/R/get_predictions_glmrob_base.R0000644000176200001440000000163713451124203020253 0ustar liggesusersget_predictions_glmrob_base <- function(model, fitfram, ci.lvl, linv, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # for models from "robust"-pkg (glmRob) we need to # suppress warnings about fake models prdat <- stats::predict( model, newdata = fitfram, type = "link", se.fit = se, ... ) # get predicted values, on link-scale fitfram$predicted <- linv(prdat$fit) if (se) { fitfram$conf.low <- linv(prdat$fit - stats::qnorm(ci) * prdat$se.fit) fitfram$conf.high <- linv(prdat$fit + stats::qnorm(ci) * prdat$se.fit) # copy standard errors attr(fitfram, "std.error") <- prdat$se.fit } else { fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/get_predictions_rq.R0000644000176200001440000000106213451124203016411 0ustar liggesusersget_predictions_rq <- function(model, fitfram, ci.lvl, ...) { if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- "confidence" else ci <- "none" prdat <- stats::predict( model, newdata = fitfram, interval = ci, level = ci.lvl, ... 
) # get predicted values fitfram$predicted <- prdat[, 1] if (ci == "none") { fitfram$conf.low <- NA fitfram$conf.high <- NA } else { fitfram$conf.low <- prdat[, 2] fitfram$conf.high <- prdat[, 3] } fitfram } ggeffects/R/utils.R0000644000176200001440000001440013577127064013706 0ustar liggesusersdata_frame <- function(...) { x <- data.frame(..., stringsAsFactors = FALSE) rownames(x) <- NULL x } .check_vars <- function(terms, model) { if (missing(terms) || is.null(terms)) { stop("`terms` needs to be a character vector with at least one predictor name: one term used for the x-axis, further optional terms as grouping factors.", call. = F) } # check for correct length of vector if (length(terms) > 4) { message("`terms` must not have more than four values. Using first four values now.") terms <- terms[1:4] } if (!is.null(model)) { tryCatch( { pv <- insight::find_predictors(model, effects = "all", component = "all", flatten = TRUE) clean.terms <- .clean_terms(terms) for (i in clean.terms) { if (!(i %in% pv)) { insight::print_color(sprintf("`%s` was not found in model terms.
Maybe misspelled?\n", i), "red") } } }, error = function(x) { NULL } ) } terms } #' @importFrom stats complete.cases #' @importFrom sjlabelled as_label as_numeric .get_raw_data <- function(model, mf, terms) { # for matrix variables, don't return raw data if (any(sapply(mf, is.matrix)) && !inherits(model, c("coxph", "coxme"))) return(NULL) if (!all(insight::find_response(model, combine = FALSE) %in% colnames(mf))) return(NULL) # get response and x-value response <- insight::get_response(model) x <- sjlabelled::as_numeric(mf[[terms[1]]]) # for cox-models, modify response if (inherits(model, "coxph")) { response <- response[[2]] } # add optional grouping variable if (length(terms) > 1) { group <- sjlabelled::as_label( mf[[terms[2]]], prefix = FALSE, drop.na = TRUE, drop.levels = !is.numeric(mf[[terms[2]]]) ) } else { group <- as.factor(1) } # remove missings from model frame mf <- mf[stats::complete.cases(mf), ] # return all as data.frame tryCatch( { data_frame(response = response, x = x, group = group) }, error = function(x) { NULL }, warning = function(x) { NULL }, finally = function(x) { NULL } ) } #' @importFrom stats na.omit .prettify_data <- function(conditional_terms, original_model_frame, terms, use_all_values = FALSE, show_pretty_message = FALSE) { lapply(conditional_terms, function(.x) { pr <- original_model_frame[[terms[.x]]] if (is.numeric(pr)) { if (.x > 1 && .n_distinct(pr) >= 10) values_at(pr) else if (.n_distinct(pr) < 20 || isTRUE(use_all_values)) { sort(stats::na.omit(unique(pr))) } else { if (show_pretty_message) { message(sprintf("Data were 'prettified'. 
Consider using `terms=\"%s [all]\"` to get smooth plots.", terms[.x])) show_pretty_message <- FALSE } pretty_range(pr) } } else if (is.factor(pr)) levels(droplevels(pr)) else stats::na.omit(unique(pr)) }) } #' @importFrom insight get_variance_random n_obs find_parameters #' @importFrom stats deviance .get_random_effect_variance <- function(x) { tryCatch( { if (inherits(x, c("merMod", "rlmerMod", "lmerMod", "glmerMod", "glmmTMB", "stanreg", "MixMod"))) { re.var <- insight::get_variance_random(x) } else if (inherits(x, c("lme", "nlme"))) { re.var <- x$sigma^2 } else { re.var <- stats::deviance(x) / (insight::n_obs(x) - length(insight::find_parameters(x)[["conditional"]])) } re.var }, error = function(x) { 0 } ) } .frac_length <- function(x) { if (is.numeric(x)) { max(nchar(gsub(pattern = "(.\\.)(.*)", "\\2", sprintf("%f", abs(x) %% 1)))) } else 0 } is.whole <- function(x) { (is.numeric(x) && all(floor(x) == x, na.rm = T)) || is.character(x) || is.factor(x) } .get_poly_term <- function(x) { p <- "(.*)poly\\(([^,]*)[^)]*\\)(.*)" sub(p, "\\2", x) } .get_poly_degree <- function(x) { p <- "(.*)poly\\(([^,]*)([^)])*\\)(.*)" tryCatch( { as.numeric(sub(p, "\\3", x)) }, error = function(x) { 1 } ) } #' @importFrom stats formula is_brms_trial <- function(model) { is.trial <- FALSE if (inherits(model, "brmsfit") && is.null(stats::formula(model)$responses)) { is.trial <- tryCatch({ rv <- .safe_deparse(stats::formula(model)$formula[[2L]]) trimws(sub("(.*)\\|(.*)\\(([^,)]*).*", "\\2", rv)) %in% c("trials", "resp_trials") }, error = function(x) { FALSE } ) } is.trial } .get_model_info <- function(model) { faminfo <- insight::model_info(model) if (insight::is_multivariate(model)) faminfo <- faminfo[[1]] faminfo$is_brms_trial <- is_brms_trial(model) faminfo } .compact_list <- function(x) x[!sapply(x, function(i) length(i) == 0 || is.null(i) || any(i == "NULL"))] .safe_deparse <- function(string) { paste0(sapply(deparse(string, width.cutoff = 500), trimws, simplify = TRUE), collapse = 
" ") } is.gamm <- function(x) { inherits(x, c("list", "gamm")) && all(names(x) %in% c("lme", "gam")) } is.gamm4 <- function(x) { inherits(x, "list") && all(names(x) %in% c("mer", "gam")) } .n_distinct <- function(x, na.rm = TRUE) { if (na.rm) x <- x[!is.na(x)] length(unique(x)) } # select rows where values in "variable" match "value" .select_rows <- function(data, variable, value) { data[which(data[[variable]] == value), ] } # remove column .remove_column <- function(data, variables) { a <- attributes(data) if (!length(variables) || is.null(variables)) return(data) if (is.numeric(variables)) variables <- colnames(data)[variables] data <- data[, -which(colnames(data) %in% variables), drop = FALSE] remaining <- setdiff(names(a), names(attributes(data))) if (length(remaining)) attributes(data) <- c(attributes(data), a[remaining]) data } .convert_numeric_factors <- function(x) { num_facs <- sapply(x, .is_numeric_factor) if (any(num_facs)) { x[num_facs] <- lapply(x[num_facs], function(i) as.numeric(as.character(i))) } x } .is_numeric_factor <- function(x) { is.factor(x) && !anyNA(suppressWarnings(as.numeric(levels(x)))) } ggeffects/R/get_predictions_lme.R0000644000176200001440000000403013565464751016567 0ustar liggesusers#' @importFrom stats model.matrix formula vcov get_predictions_lme <- function(model, fitfram, ci.lvl, linv, type, terms, value_adjustment, model_class, vcov.fun, vcov.type, vcov.args, condition, ...) { # does user want standard errors? se <- (!is.null(ci.lvl) && !is.na(ci.lvl)) || !is.null(vcov.fun) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 if (inherits(model, "glmmPQL")) pr.type <- "link" else pr.type <- "response" prdat <- stats::predict( model, newdata = fitfram, type = pr.type, level = 0, ... ) # copy predictions fitfram$predicted <- as.vector(prdat) # did user request standard errors? 
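# Side note, illustrating the numeric-factor conversion helpers above (not
# package code): calling as.numeric() directly on a factor returns the
# internal level codes, not the level values, hence the as.character()
# round-trip used in .convert_numeric_factors():
#
#   f <- factor(c(10, 20, 30))
#   as.numeric(f)                #> 1 2 3
#   as.numeric(as.character(f))  #> 10 20 30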
if yes, compute CI if (se) { se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, terms = terms, model_class = model_class, type = type, vcov.fun = vcov.fun, vcov.type = vcov.type, vcov.args = vcov.args, condition = condition ) if (!is.null(se.pred)) { se.fit <- se.pred$se.fit fitfram <- se.pred$prediction_data # calculate CI fitfram$conf.low <- fitfram$predicted - stats::qnorm(ci) * se.fit fitfram$conf.high <- fitfram$predicted + stats::qnorm(ci) * se.fit # copy standard errors attr(fitfram, "std.error") <- se.fit attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval") } else { # No CI fitfram$conf.low <- NA fitfram$conf.high <- NA } } else { # No CI fitfram$conf.low <- NA fitfram$conf.high <- NA } # for glmmPQL, we need to back-transform using link-inverse if (inherits(model, "glmmPQL")) { fitfram$predicted <- linv(fitfram$predicted) fitfram$conf.low <- linv(fitfram$conf.low) fitfram$conf.high <- linv(fitfram$conf.high) } fitfram } ggeffects/R/ggemmeans_add_confint.R0000644000176200001440000000336113577126731017045 0ustar liggesusers#' @importFrom stats confint .ggemmeans_add_confint <- function(model, tmp, ci.lvl, type = "fe", pmode) { # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 if (type %in% c("re", "re.zi")) { fitfram <- suppressWarnings( .var_rename( as.data.frame(tmp), SE = "std.error", emmean = "predicted", lower.CL = "conf.low", upper.CL = "conf.high", prob = "predicted", asymp.LCL = "conf.low", asymp.UCL = "conf.high", lower.HPD = "conf.low", upper.HPD = "conf.high" ) ) revar <- .get_random_effect_variance(model) # get link-function and back-transform fitted values # to original scale, so we compute proper CI if (pmode %in% c("prob", "count")) { lf <- insight::link_function(model) fitfram$conf.low <- exp(lf(fitfram$conf.low) - stats::qnorm(ci) * sqrt(revar)) fitfram$conf.high <- exp(lf(fitfram$conf.high) + 
stats::qnorm(ci) * sqrt(revar)) } else { fitfram$conf.low <- fitfram$conf.low - stats::qnorm(ci) * sqrt(revar) fitfram$conf.high <- fitfram$conf.high + stats::qnorm(ci) * sqrt(revar) } fitfram$std.error <- sqrt(fitfram$std.error^2 + revar) fitfram } else { suppressWarnings( .var_rename( as.data.frame(stats::confint(tmp, level = ci.lvl)), SE = "std.error", emmean = "predicted", lower.CL = "conf.low", upper.CL = "conf.high", prob = "predicted", asymp.LCL = "conf.low", asymp.UCL = "conf.high", lower.HPD = "conf.low", upper.HPD = "conf.high" ) ) } } ggeffects/R/get_predictions_glimML.R0000644000176200001440000000203313466756545017201 0ustar liggesusers#' @importFrom stats plogis qnorm get_predictions_glimML <- function(model, fitfram, ci.lvl, linv, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 if (!requireNamespace("aod", quietly = TRUE)) { stop("Package 'aod' is required. Please install it.") } prdat <- aod::predict( model, newdata = fitfram, type = "link", se.fit = se, ... ) # copy predictions fitfram$predicted <- linv(prdat$fit) # did user request standard errors? if yes, compute CI if (se) { # calculate CI fitfram$conf.low <- linv(prdat$fit - stats::qnorm(ci) * prdat$se.fit) fitfram$conf.high <- linv(prdat$fit + stats::qnorm(ci) * prdat$se.fit) # copy standard errors attr(fitfram, "std.error") <- prdat$se.fit } else { # No CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/pretty_range.R0000644000176200001440000000403213524252352015240 0ustar liggesusers#' @title Create a pretty sequence over a range of a vector #' @name pretty_range #' #' @description Creates an evenly spaced, pretty sequence of numbers for a #' range of a vector. #' #' @param x A numeric vector. #' @param n Numeric value, indicating the size of how many values are used to #' create a pretty sequence. 
If \code{x} has a large value range (> 100), #' \code{n} could be something between 1 to 5. If \code{x} has a rather #' small amount of unique values, \code{n} could be something between #' 10 to 20. If \code{n = NULL}, \code{pretty_range()} automatically #' tries to find a pretty sequence. #' @param length Integer value, as alternative to \code{n}, defines the number of #' intervals to be returned. #' #' @return A numeric vector with a range corresponding to the minimum and maximum #' values of \code{x}. #' #' @examples #' library(sjmisc) #' data(efc) #' #' x <- std(efc$c12hour) #' x #' # pretty range for vectors with decimal points #' pretty_range(x) #' #' # pretty range for large range, increasing by 50 #' pretty_range(1:1000) #' #' # increasing by 20 #' pretty_range(1:1000, n = 7) #' #' # return 10 intervals #' pretty_range(1:1000, length = 10) #' #' # same result #' pretty_range(1:1000, n = 2.5) #' #' @export pretty_range <- function(x, n = NULL, length = NULL) { ra.min <- min(x, na.rm = TRUE) ra.max <- max(x, na.rm = TRUE) ra <- seq(ra.min, ra.max, sqrt(ra.max - ra.min) / 10) if (!is.null(length)) { pretty(ra, n = length) } else { if (!is.null(n)) pr <- n else if (.n_distinct(x) > 100) pr <- 3 else if (.n_distinct(x) > 50) pr <- 5 else pr <- 10 pr <- pr^(floor(log10(length(ra)))) p1 <- pretty(ra, n = pr) p2 <- pretty(ra, n = ceiling(pr * 1.5)) p3 <- pretty(ra, n = 2 * pr) if (length(p1) >= .n_distinct(x)) p1 else if (length(p1) < 10 && length(p2) < 25) p2 else if (length(p2) < 15 && length(p3) < 25) p3 else p1 } } ggeffects/R/get_predictions_stan.R0000644000176200001440000001114613614012561016744 0ustar liggesusers#' @importFrom stats median formula get_predictions_stan <- function(model, fitfram, ci.lvl, type, model_info, ppd, terms = NULL, ...) { # check if pkg is available if (!requireNamespace("rstantools", quietly = TRUE)) { stop("Package `rstantools` is required to compute predictions.", call. = F) } # does user want standard errors? 
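# Throughout the package, a two-sided confidence level is converted to an
# upper-tail probability via ci <- (1 + ci.lvl) / 2, which then feeds
# stats::qnorm() when building Wald-type intervals. A quick illustration
# (not package code):
#
#   ci.lvl <- 0.95
#   ci <- (1 + ci.lvl) / 2   #> 0.975
#   stats::qnorm(ci)         #> 1.959964  (the familiar ~1.96 multiplier)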
se <- !is.null(ci.lvl) && !is.na(ci.lvl) # check whether predictions should be conditioned # on random effects (grouping level) or not. if (type != "fe") ref <- NULL else ref <- NA # check if we have terms that are ordinal, and if so, # convert factors to ordinal in "newdata" to allow # predictions for monotonic models if (!is.null(terms)) { mf <- insight::get_data(model) vo <- names(which(sapply(mf, is.ordered))) fac2ord <- which(terms %in% vo) if (!.is_empty(fac2ord)) { for (i in fac2ord) fitfram[[terms[i]]] <- as.ordered(fitfram[[terms[i]]]) } } # compute posterior predictions if (ppd) { # for binomial models, "newdata" also needs a response # value. we take the value for a successful event if (model_info$is_binomial) { resp.name <- insight::find_response(model) # successful events fitfram[[resp.name]] <- factor(1) } prdat2 <- prdat <- rstantools::posterior_predict( model, newdata = fitfram, re.form = ref, ... ) } else { # get posterior distribution of the linear predictor # note that these are not best practice for inferences, # because they don't take the measurement error into account prdat <- rstantools::posterior_linpred( model, newdata = fitfram, transform = TRUE, re.form = ref, re_formula = ref, ... ) if (model_info$is_mixed) { # tell user message("Note: uncertainty of error terms is not taken into account. 
You may want to use `rstantools::posterior_predict()`.") } } # we have a list of 4000 samples, so we need to coerce to data frame prdat <- as.data.frame(prdat) # handle cumulative link models if (inherits(model, "brmsfit") && model_info$family %in% c("cumulative", "categorical")) { tmp <- as.data.frame(lapply(prdat, stats::median)) tmp <- .gather(tmp, names_to = "grp", values_to = "predicted") rownames(tmp) <- NULL tmp$grp <- gsub("X", "", tmp$grp, fixed = TRUE) resp.vals <- levels(insight::get_response(model)[[1]]) term.cats <- nrow(fitfram) fitfram <- do.call(rbind, rep(list(fitfram), times = length(resp.vals))) fitfram$response.level <- rep(unique(resp.vals), each = term.cats) fitfram$predicted <- tmp$predicted } else if (insight::is_multivariate(model)) { # handle multivariate response models tmp <- as.data.frame(lapply(prdat, stats::median)) tmp <- .gather(tmp, names_to = "grp", values_to = "predicted") rownames(tmp) <- NULL tmp$grp <- gsub("X", "", tmp$grp, fixed = TRUE) resp.vars <- insight::find_response(model, combine = FALSE) fitfram <- do.call(rbind, rep(list(fitfram), times = length(resp.vars))) fitfram$response.level <- "" for (i in resp.vars) { pos <- string_ends_with(pattern = i, x = tmp$grp) if (.is_empty(pos)) { i <- gsub(pattern = "[\\_\\.]", replacement = "", x = i) # same as # i <- gsub(pattern = "(\\_|\\.)", replacement = "", x = i) pos <- string_ends_with(pattern = i, x = tmp$grp) } fitfram$response.level[pos] <- i } fitfram$predicted <- tmp$predicted } else { # compute median, as "most probable estimate" fitfram$predicted <- sapply(prdat, stats::median) } # for posterior predictive distributions, we compute # the predictive intervals if (ppd) { # for multivariate response models, we have an array # instead of a matrix - get CIs for each response if (inherits(prdat2, "array")) { tmp <- do.call(rbind, lapply(1:dim(prdat2)[3], function(.x) { as.data.frame(rstantools::predictive_interval(as.matrix(prdat2[, , .x]), prob = ci.lvl)) })) } else { tmp <- 
rstantools::predictive_interval(prdat2, prob = ci.lvl) } } else { tmp <- rstantools::predictive_interval(as.matrix(prdat), prob = ci.lvl) } predint <- list( tmp[, 1], tmp[, 2] ) if (se) { # bind predictive intervals int fitfram$conf.low <- predint[[1]] fitfram$conf.high <- predint[[2]] } else { # no CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/utils_check_transformations.R0000644000176200001440000000234513575640521020355 0ustar liggesusers.has_splines <- function(model) { form <- .get_pasted_formula(model) if (is.null(form)) return(FALSE) any( grepl("s\\(([^,)]*)", form) | grepl("bs\\(([^,)]*)", form) | grepl("ns\\(([^,)]*)", form) | grepl("pspline\\(([^,)]*)", form) | grepl("poly\\(([^,)]*)", form) ) } .has_poly <- function(model) { form <- .get_pasted_formula(model) if (is.null(form)) return(FALSE) any(grepl("I\\(.*?\\^.*?\\)", form) | grepl("poly\\(([^,)]*)", form)) } .has_log <- function(model) { any(.get_log_terms(model)) } .get_log_terms <- function(model) { form <- .get_pasted_formula(model) if (is.null(form)) return(FALSE) grepl("log\\(([^,)]*).*", form) } #' @importFrom insight find_terms .get_pasted_formula <- function(model) { tryCatch( { unlist(.compact_list(insight::find_terms(model)[c("conditional", "random", "instruments")])) }, error = function(x) { NULL } ) } .has_poly_term <- function(x) { any(grepl("poly\\(([^,)]*)", x)) } .uses_all_tag <- function(terms) { tags <- unlist(regmatches( terms, gregexpr( pattern = "\\[(.*)\\]", text = terms, perl = T ) )) "[all]" %in% tags } ggeffects/R/predict_zero_inflation.R0000644000176200001440000004114313577125046017304 0ustar liggesusers#' @importFrom stats quantile sd .join_simulations <- function(prediction_data, newdata, prdat, sims, ci, clean_terms) { # after "bootstrapping" confidence intervals by simulating from the # multivariate normal distribution, we need to prepare the data and # calculate "bootstrapped" estimates and CIs. 
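# A minimal sketch (not package code) of the simulation summary performed in
# .join_simulations(): with a matrix of simulated predictions (rows =
# observations, columns = draws), point estimates and interval bounds come
# from row-wise means and quantiles:
#
#   sims <- matrix(stats::rnorm(5 * 1000), nrow = 5)  # toy draws
#   ci <- (1 + 0.95) / 2
#   est <- apply(sims, 1, mean)
#   conf.low <- apply(sims, 1, stats::quantile, probs = 1 - ci)
#   conf.high <- apply(sims, 1, stats::quantile, probs = ci)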
prediction_data$sort__id <- 1:nrow(prediction_data) column_matches <- sapply(colnames(prediction_data), function(.x) any(unique(prediction_data[[.x]]) %in% newdata[[.x]])) # we need two data grids here: one for all combinations of levels from the # model predictors ("newdata"), and one with the current combinations only # for the terms in question ("prediction_data"). "sims" always has the same # number of rows as "newdata", but "prediction_data" might be shorter. So we # merge "prediction_data" and "newdata", add mean and quantiles from "sims" # as new variables, and then later only keep the original observations # from "prediction_data" - by this, we avoid unequal row-lengths. join_by <- colnames(prediction_data)[column_matches] prediction_data <- merge(newdata, prediction_data, by = join_by, all = TRUE, sort = FALSE) prediction_data$predicted <- apply(sims, 1, mean) prediction_data$conf.low <- apply(sims, 1, stats::quantile, probs = 1 - ci) prediction_data$conf.high <- apply(sims, 1, stats::quantile, probs = ci) prediction_data$std.error <- apply(sims, 1, stats::sd) # group_by() changes the order of rows / variables in "prediction_data", however # we later add back the original predictions "prdat" (see below), which # correspond to the *current* sorting of prediction_data. So we add a dummy-ID, # which we use to restore the original sorting of prediction_data later... # The following code is a replacement for dplyr when grouping and # summarizing data. The previous code was: # # grp <- rlang::syms(clean_terms) # prediction_data <- prediction_data %>% # dplyr::filter(!is.na(.data$sort__id)) %>% # dplyr::group_by(!!! grp) %>% # dplyr::summarize( # predicted = mean(.data$predicted), # conf.low = mean(.data$conf.low), # conf.high = mean(.data$conf.high), # std.error = mean(.data$std.error), # id = .data$sort__id # ) %>% # dplyr::ungroup() # # The new code is a bit longer, but reduces package dependencies... 
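# A self-contained illustration (not package code) of the tapply()-based
# aggregation pattern used here in place of dplyr's group_by()/summarize():
#
#   means <- tapply(mtcars$mpg, list(cyl = mtcars$cyl, am = mtcars$am), mean)
#   grid <- expand.grid(dimnames(means))
#   grid$mpg <- as.vector(means)  # column-major order matches expand.grid()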
prediction_data <- prediction_data[!is.na(prediction_data$sort__id), ] means_predicted <- tapply( prediction_data$predicted, lapply(clean_terms, function(i) prediction_data[[i]]), function(j) mean(j, na.rm = TRUE), simplify = FALSE ) means_conf_low <- tapply( prediction_data$conf.low, lapply(clean_terms, function(i) prediction_data[[i]]), function(j) mean(j, na.rm = TRUE), simplify = FALSE ) means_conf_high <- tapply( prediction_data$conf.high, lapply(clean_terms, function(i) prediction_data[[i]]), function(j) mean(j, na.rm = TRUE), simplify = FALSE ) means_se <- tapply( prediction_data$std.error, lapply(clean_terms, function(i) prediction_data[[i]]), function(j) mean(j, na.rm = TRUE), simplify = FALSE ) terms_df <- data.frame(expand.grid(attributes(means_predicted)$dimnames), stringsAsFactors = FALSE) colnames(terms_df) <- clean_terms terms_df <- .convert_numeric_factors(terms_df) prediction_data <- cbind( terms_df, predicted = unlist(lapply(means_predicted, function(i) if (is.null(i)) NA else i)), conf.low = unlist(lapply(means_conf_low, function(i) if (is.null(i)) NA else i)), conf.high = unlist(lapply(means_conf_high, function(i) if (is.null(i)) NA else i)), std.error = unlist(lapply(means_se, function(i) if (is.null(i)) NA else i)), id = prediction_data$sort__id ) rownames(prediction_data) <- NULL if (length(clean_terms) == 1) { prediction_data <- prediction_data[order(prediction_data[[1]]), ] } else if (length(clean_terms) == 2) { prediction_data <- prediction_data[order(prediction_data[[1]], prediction_data[[2]]), ] } else if (length(clean_terms) == 3) { prediction_data <- prediction_data[order(prediction_data[[1]], prediction_data[[2]], prediction_data[[3]]), ] } else if (length(clean_terms) == 4) { prediction_data <- prediction_data[order(prediction_data[[1]], prediction_data[[2]], prediction_data[[3]], prediction_data[[4]]), ] } # we use the predicted values from "predict(type = "response")", but the 
"predict(type = "response")" works as intended, but the related standard errors are not reliable (due # to the combination of the conditional and zero-inflated model), so we need # the simulated standard errors and CIs - but can use the "correct" predictions. # in order to make CI and predictions match, we take the simulated CI-range # and use the original predicted values as "center" for those CI-ranges. if (length(prdat) == nrow(prediction_data)) { prediction_data <- prediction_data[order(prediction_data$id), ] ci.range <- (prediction_data$conf.high - prediction_data$conf.low) / 2 prediction_data$predicted <- prdat # fix negative CI ci.low <- prediction_data$predicted - ci.range neg.ci <- ci.low < 0 if (any(neg.ci)) { ci.range[neg.ci] <- ci.range[neg.ci] - abs(ci.low[neg.ci]) - 1e-05 prediction_data$std.error[neg.ci] <- prediction_data$std.error[neg.ci] - ((abs(ci.low[neg.ci]) + 1e-05) / stats::qnorm(ci)) } prediction_data$conf.low <- prediction_data$predicted - ci.range prediction_data$conf.high <- prediction_data$predicted + ci.range prediction_data <- .remove_column(prediction_data, "id") } prediction_data } .simulate_predictions <- function(model, newdata, nsim = 1000, terms = NULL, value_adjustment = NULL, condition = NULL) { # Since the zero inflation and the conditional model are working in "opposite # directions", confidence intervals cannot be derived directly from the # "predict()"-function. Thus, confidence intervals for type = "fe.zi" are # based on quantiles of simulated draws from a multivariate normal distribution # (see also _Brooks et al. 2017, pp.391-392_ for details). 
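# A generic sketch (not package code) of the simulation approach described in
# the comment above, here for a plain glm `m` (made up for illustration); the
# package applies the same idea separately to the conditional and the
# zero-inflation components:
#
#   m <- glm(am ~ hp, data = mtcars, family = binomial)
#   draws <- MASS::mvrnorm(n = 1000, mu = coef(m), Sigma = vcov(m))
#   eta <- model.matrix(m) %*% t(draws)  # observations x simulations
#   ci <- apply(plogis(eta), 1, stats::quantile, probs = c(0.025, 0.975))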
  if (inherits(model, "glmmTMB")) {
    .simulate_predictions_glmmTMB(model, newdata, nsim, terms, value_adjustment, condition)
  } else if (inherits(model, "MixMod")) {
    .simulate_predictions_MixMod(model, newdata, nsim, terms, value_adjustment, condition)
  } else {
    .simulate_predictions_zeroinfl(model, newdata, nsim, terms, value_adjustment, condition)
  }
}


#' @importFrom MASS mvrnorm
#' @importFrom stats model.matrix formula
#' @importFrom insight get_varcov
.simulate_predictions_glmmTMB <- function(model, newdata, nsim, terms = NULL, value_adjustment = NULL, condition = NULL) {
  if (!requireNamespace("lme4", quietly = TRUE)) {
    stop("You need to install package `lme4` first to compute marginal effects.", call. = FALSE)
  }

  # Since the zero-inflation and the conditional model are working in "opposite
  # directions", confidence intervals cannot be derived directly from the
  # "predict()"-function. Thus, confidence intervals for type = "fe.zi" are
  # based on quantiles of simulated draws from a multivariate normal distribution
  # (see also _Brooks et al. 2017, pp. 391-392_ for details).
  tryCatch(
    {
      condformula <- lme4::nobars(stats::formula(model)[-2])
      ziformula <- lme4::nobars(stats::formula(model$modelInfo$allForm$ziformula))

      # if the formula has a polynomial term, and this term is one that is held
      # constant, model.matrix() with "newdata" will throw an error - so we
      # re-build the newdata-argument by including all values for poly-terms, if
      # these are held constant.
fixes <- .rows_to_keep(model, newdata, condformula, ziformula, terms, value_adjustment, condition) if (!is.null(fixes)) { keep <- fixes$keep newdata <- fixes$newdata } else { keep <- NULL } x.cond <- stats::model.matrix(condformula, newdata) beta.cond <- lme4::fixef(model)$cond x.zi <- stats::model.matrix(ziformula, newdata) beta.zi <- lme4::fixef(model)$zi cond.varcov <- insight::get_varcov(model, component = "conditional") zi.varcov <- insight::get_varcov(model, component = "zero_inflated") pred.condpar.psim <- MASS::mvrnorm(n = nsim, mu = beta.cond, Sigma = cond.varcov) pred.cond.psim <- x.cond %*% t(pred.condpar.psim) pred.zipar.psim <- MASS::mvrnorm(n = nsim, mu = beta.zi, Sigma = zi.varcov) pred.zi.psim <- x.zi %*% t(pred.zipar.psim) if (!.is_empty(keep)) { pred.cond.psim <- pred.cond.psim[keep, ] pred.zi.psim <- pred.zi.psim[keep, ] } list(cond = pred.cond.psim, zi = pred.zi.psim) }, error = function(x) { x }, warning = function(x) { NULL }, finally = function(x) { NULL } ) } #' @importFrom MASS mvrnorm #' @importFrom stats model.matrix formula .simulate_predictions_MixMod <- function(model, newdata, nsim, terms = NULL, value_adjustment = NULL, condition = NULL) { if (!requireNamespace("lme4", quietly = TRUE)) { stop("You need to install package `lme4` first to compute marginal effects.", call. = FALSE) } tryCatch( { condformula <- stats::formula(model, type = "fixed") ziformula <- stats::formula(model, type = "zi_fixed") # if formula has a polynomial term, and this term is one that is held # constant, model.matrix() with "newdata" will throw an error - so we # re-build the newdata-argument by including all values for poly-terms, if # these are hold constant. 
fixes <- .rows_to_keep(model, newdata, condformula, ziformula, terms, value_adjustment, condition) if (!is.null(fixes)) { keep <- fixes$keep newdata <- fixes$newdata } else { keep <- NULL } x.cond <- stats::model.matrix(condformula, newdata) beta.cond <- lme4::fixef(model, sub_model = "main") x.zi <- stats::model.matrix(ziformula, newdata) beta.zi <- lme4::fixef(model, sub_model = "zero_part") cond.varcov <- insight::get_varcov(model, component = "conditional") zi.varcov <- insight::get_varcov(model, component = "zero_inflated") pred.condpar.psim <- MASS::mvrnorm(n = nsim, mu = beta.cond, Sigma = cond.varcov) pred.cond.psim <- x.cond %*% t(pred.condpar.psim) pred.zipar.psim <- MASS::mvrnorm(n = nsim, mu = beta.zi, Sigma = zi.varcov) pred.zi.psim <- x.zi %*% t(pred.zipar.psim) if (!.is_empty(keep)) { pred.cond.psim <- pred.cond.psim[keep, ] pred.zi.psim <- pred.zi.psim[keep, ] } list(cond = pred.cond.psim, zi = pred.zi.psim) }, error = function(x) { x }, warning = function(x) { NULL }, finally = function(x) { NULL } ) } #' @importFrom stats model.matrix coef formula as.formula #' @importFrom MASS mvrnorm .simulate_predictions_zeroinfl <- function(model, newdata, nsim = 1000, terms = NULL, value_adjustment = NULL, condition = NULL) { tryCatch( { condformula <- stats::as.formula(paste0("~", .safe_deparse(stats::formula(model)[[3]][[2]]))) ziformula <- stats::as.formula(paste0("~", .safe_deparse(stats::formula(model)[[3]][[3]]))) # if formula has a polynomial term, and this term is one that is held # constant, model.matrix() with "newdata" will throw an error - so we # re-build the newdata-argument by including all values for poly-terms, if # these are hold constant. 
fixes <- .rows_to_keep(model, newdata, condformula, ziformula, terms, value_adjustment, condition) if (!is.null(fixes)) { keep <- fixes$keep newdata <- fixes$newdata } else { keep <- NULL } x.cond <- stats::model.matrix(condformula, model = "count", data = newdata) beta.cond <- stats::coef(model, model = "count") x.zi <- stats::model.matrix(ziformula, model = "zero", data = newdata) beta.zi <- stats::coef(model, model = "zero") cond.varcov <- insight::get_varcov(model, component = "conditional") zi.varcov <- insight::get_varcov(model, component = "zero_inflated") pred.condpar.psim <- MASS::mvrnorm(nsim, mu = beta.cond, Sigma = cond.varcov) pred.cond.psim <- x.cond %*% t(pred.condpar.psim) pred.zipar.psim <- MASS::mvrnorm(nsim, mu = beta.zi, Sigma = zi.varcov) pred.zi.psim <- x.zi %*% t(pred.zipar.psim) if (!.is_empty(keep)) { pred.cond.psim <- pred.cond.psim[keep, ] pred.zi.psim <- pred.zi.psim[keep, ] } list(cond = pred.cond.psim, zi = pred.zi.psim) }, error = function(x) { x }, warning = function(x) { NULL }, finally = function(x) { NULL } ) } #' @importFrom insight get_data #' @importFrom stats quantile .rows_to_keep <- function(model, newdata, condformula, ziformula, terms, value_adjustment, condition) { # if formula has a polynomial term, and this term is one that is held # constant, model.matrix() with "newdata" will throw an error - so we # re-build the newdata-argument by including all values for poly-terms, if # these are hold constant. 
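  # Why this workaround is needed - a minimal reproduction (hypothetical
  # variable name "x"): when a poly()-term is held constant at a single
  # value, building the model matrix fails with an error like:
  #
  #   model.matrix(~ poly(x, 2), data.frame(x = 3))
  #   # Error: 'degree' must be less than number of unique points
  #
  # so we expand such terms to several quantile-based values and later keep
  # only the rows closest to the typical (held-constant) value.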
const.values <- attr(newdata, "constant.values") condformula_string <- .safe_deparse(condformula) ziformula_string <- .safe_deparse(ziformula) keep <- NULL if (.has_poly_term(condformula_string) || .has_poly_term(ziformula_string)) { model_frame <- insight::get_data(model) polycondcheck <- NULL polyzicheck <- NULL if (.has_poly_term(condformula_string)) { polyterm <- .get_poly_term(condformula_string) if (polyterm %in% names(const.values)) { polycondcheck <- polyterm polydg <- .get_poly_degree(condformula_string) polyvals <- paste0( stats::quantile(model_frame[[polyterm]], probs = seq_len(polydg + 1) / (polydg + 2)), collapse = "," ) terms <- c(terms, sprintf("%s [%s]", polyterm, polyvals)) } } if (.has_poly_term(ziformula_string)) { polyterm <- .get_poly_term(ziformula_string) if (polyterm %in% names(const.values)) { polyzicheck <- polyterm polydg <- .get_poly_degree(ziformula_string) polyvals <- paste0( stats::quantile(model_frame[[polyterm]], probs = seq_len(polydg + 1) / (polydg + 2)), collapse = "," ) terms <- c(terms, sprintf("%s [%s]", polyterm, polyvals)) } } newdata <- .data_grid( model = model, model_frame = model_frame, terms = terms, value_adjustment = value_adjustment, factor_adjustment = FALSE, show_pretty_message = FALSE, condition = condition ) keep.cond <- vector("numeric") keep.zi <- vector("numeric") if (!is.null(polycondcheck)) { keep.cond <- unlist(lapply(polycondcheck, function(.x) { wm <- newdata[[.x]][which.min(abs(newdata[[.x]] - .typical_value(newdata[[.x]], fun = value_adjustment)))] as.vector(which(newdata[[.x]] == wm)) })) } if (!is.null(polyzicheck)) { keep.zi <- unlist(lapply(polyzicheck, function(.x) { wm <- newdata[[.x]][which.min(abs(newdata[[.x]] - .typical_value(newdata[[.x]], fun = value_adjustment)))] as.vector(which(newdata[[.x]] == wm)) })) } keep <- intersect(keep.cond, keep.zi) if (.is_empty(keep)) keep <- unique(c(keep.cond, keep.zi)) } if (.is_empty(keep)) return(NULL) list(keep = keep, newdata = newdata) } #' @importFrom 
stats model.matrix coef formula as.formula
#' @importFrom MASS mvrnorm
.get_zeroinfl_gam_predictions <- function(model, newdata, nsim = 1000) {
  tryCatch(
    {
      mm <- stats::model.matrix(model, data = newdata)
      linpred <- attr(mm, "lpi", exact = TRUE)

      cond <- linpred[[1]]
      zi <- linpred[[2]]

      x.cond <- mm[, cond]
      x.zi <- mm[, zi]

      beta.cond <- stats::coef(model)[cond]
      beta.zi <- stats::coef(model)[zi]

      varcov.cond <- stats::vcov(model)[cond, cond]
      varcov.zi <- stats::vcov(model)[zi, zi]

      psim.cond <- MASS::mvrnorm(nsim, mu = beta.cond, Sigma = varcov.cond)
      pred.cond <- x.cond %*% t(psim.cond)

      psim.zi <- MASS::mvrnorm(nsim, mu = beta.zi, Sigma = varcov.zi)
      pred.zi <- x.zi %*% t(psim.zi)

      list(cond = pred.cond, zi = pred.zi)
    },
    error = function(x) { x },
    warning = function(x) { NULL },
    finally = function(x) { NULL }
  )
}
ggeffects/R/get_predictions_clm2.R0000644000176200001440000000334213577110542016641 0ustar liggesusers#' @importFrom insight get_response find_response
get_predictions_clm2 <- function(model, fitfram, ci.lvl, linv, ...) {
  stop("`ggpredict()` currently does not support clm2-models.", call. = FALSE)

  # does user want standard errors?
  se <- !is.null(ci.lvl) && !is.na(ci.lvl)

  # compute ci, two-ways
  if (!is.null(ci.lvl) && !is.na(ci.lvl))
    ci <- (1 + ci.lvl) / 2
  else
    ci <- .975

  fitfram <- cbind(data.frame(as.factor(insight::get_response(model))), fitfram)
  colnames(fitfram)[1] <- insight::find_response(model)

  # prediction, with CI
  prdat <- stats::predict(
    model,
    newdata = fitfram,
    type = "prob",
    interval = se,
    level = ci,
    ...
  )

  # convert to data frame.
  prdat <- as.data.frame(prdat)

  # bind predictions to model frame
  fitfram <- cbind(prdat, fitfram)

  # get levels of response
  lv <- levels(insight::get_response(model))

  # for proportional ordinal logistic regression (see ordinal::clm),
  # we have predicted values for each response category. Hence,
  # gather columns. Since we also have conf. int.
for each response # category, we need to gather multiple columns at once if (isTRUE(se)) { # length of each variable block l <- seq_len(ncol(prdat) / 3) colnames(fitfram)[l] <- lv fitfram <- .multiple_gather( fitfram, names_to = "response.level", values_to = c("predicted", "conf.low", "conf.high"), columns = list(l, l + length(l), l + 2 * length(l)) ) } else { fitfram <- .gather(fitfram, names_to = "response.level", values_to = "predicted", colnames(prdat)) # No CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/utils_set_attr.R0000644000176200001440000000323613575640521015614 0ustar liggesusers.set_attributes_and_class <- function(data, model, t.title, x.title, y.title, l.title, legend.labels, x.axis.labels, model_info, constant.values = NULL, terms = NULL, original_terms = NULL, at_list = NULL, n.trials = NULL, prediction.interval = NULL, condition = NULL, ci.lvl = .95) { # check correct labels if (!is.null(x.axis.labels) && length(x.axis.labels) != length(stats::na.omit(unique(data$x)))) x.axis.labels <- as.vector(sort(stats::na.omit(unique(data$x)))) rownames(data) <- NULL if (!is.null(at_list) && !is.null(terms)) at_list <- at_list[names(at_list) %in% terms] # add attributes attr(data, "title") <- t.title attr(data, "x.title") <- x.title attr(data, "y.title") <- y.title attr(data, "legend.title") <- l.title attr(data, "legend.labels") <- legend.labels attr(data, "x.axis.labels") <- x.axis.labels attr(data, "constant.values") <- constant.values attr(data, "terms") <- terms attr(data, "original.terms") <- original_terms attr(data, "at.list") <- at_list attr(data, "prediction.interval") <- prediction.interval attr(data, "condition") <- condition attr(data, "ci.lvl") <- ci.lvl # remember fit family attr(data, "family") <- model_info$family attr(data, "link") <- model_info$link_function attr(data, "logistic") <- ifelse(model_info$is_binomial || model_info$is_ordinal, "1", "0") attr(data, "is.trial") <- ifelse(model_info$is_trial && 
inherits(model, "brmsfit"), "1", "0") attr(data, "n.trials") <- n.trials # and model-function attr(data, "fitfun") <- .get_model_function(model) # add class attribute class(data) <- c("ggeffects", class(data)) data } ggeffects/R/get_predictions_generic.R0000644000176200001440000000076613577116047017435 0ustar liggesusersget_predictions_generic <- function(model, fitfram, linv, ...) { if (!requireNamespace("prediction", quietly = TRUE)) { stop("You need to install package `prediction` first to compute marginal effects.", call. = FALSE) } prdat <- prediction::prediction( model, data = fitfram, type = "response", ... ) # copy predictions fitfram$predicted <- prdat$fitted # No CI fitfram$conf.low <- NA fitfram$conf.high <- NA fitfram } ggeffects/R/new_data.R0000644000176200001440000000265313565013677014340 0ustar liggesusers#' @title Create a data frame from all combinations of predictor values #' @name new_data #' #' @description Create a data frame for the "newdata"-argument that contains #' all combinations of values from the terms in questions. Similar to #' \code{\link{expand.grid}}. The \code{terms}-argument accepts all shortcuts #' for representative values as in \code{ggpredict()}. #' #' @param model A fitted model object. #' @param terms Character vector with the names of those terms from #' \code{model} for which all combinations of values should be created. #' #' @inheritParams ggpredict #' #' @return A data frame containing one row for each combination of values of the #' supplied variables. 
#' #' @examples #' data(efc) #' fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc) #' new_data(fit, c("c12hour [meansd]", "c161sex")) #' #' nd <- new_data(fit, c("c12hour [meansd]", "c161sex")) #' pr <- predict(fit, type = "response", newdata = nd) #' nd$predicted <- pr #' nd #' #' # compare to #' ggpredict(fit, c("c12hour [meansd]", "c161sex")) #' #' @export new_data <- function(model, terms, typical = "mean", condition = NULL) { .data_grid( model = model, model_frame = insight::get_data(model), terms = terms, value_adjustment = typical, factor_adjustment = TRUE, show_pretty_message = TRUE, condition = condition, emmeans.only = FALSE ) } ggeffects/R/get_predictions_glmRob.R0000644000176200001440000000127513565167350017237 0ustar liggesusersget_predictions_glmRob <- function(model, fitfram, ci.lvl, linv, value_adjustment, model_class, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) && is.null(vcov.fun) # for models from "robust"-pkg (glmRob) we need to # suppress warnings about fake models prdat <- suppressWarnings(stats::predict( model, newdata = fitfram, type = "link", se.fit = se, ... 
)) # copy predictions .generic_prediction_data(model, fitfram, linv, prdat, se, ci.lvl, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval) } ggeffects/R/post_processing_labels.R0000644000176200001440000000224113575640521017305 0ustar liggesusers.post_processing_labels <- function(model, result, original_model_frame, data_grid, cleaned_terms, original_terms, model_info, type, prediction.interval, at_list, condition = NULL, ci.lvl = .95) { # get axis titles and labels all.labels <- .get_axis_titles_and_labels( original_model_frame = original_model_frame, terms = cleaned_terms, fun = .get_model_function(model), model_info = model_info, no.transform = FALSE, type = type ) # set attributes with necessary information .set_attributes_and_class( data = result, model = model, t.title = all.labels$t.title, x.title = all.labels$x.title, y.title = all.labels$y.title, l.title = all.labels$l.title, legend.labels = attr(result, "legend.labels"), x.axis.labels = all.labels$axis.labels, model_info = model_info, constant.values = attr(data_grid, "constant.values", exact = TRUE), terms = cleaned_terms, original_terms = original_terms, at_list = at_list, n.trials = attr(data_grid, "n.trials", exact = TRUE), prediction.interval = prediction.interval, condition = condition, ci.lvl = ci.lvl ) } ggeffects/R/get_predictions_MixMod.R0000644000176200001440000001142613567451645017216 0ustar liggesusers#' @importFrom insight find_response get_predictions_MixMod <- function(model, data_grid, ci.lvl, linv, type, terms, value_adjustment, condition, ...) { # does user want standard errors? 
se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # get info about model model_info <- insight::model_info(model) # copy object predicted_data <- data_grid if (!model_info$is_zero_inflated && type %in% c("fe.zi", "re.zi")) { if (type == "fe.zi") type <- "fe" else type <- "re" message(sprintf("Model has no zero-inflation part. Changing prediction-type to \"%s\".", type)) } if (model_info$is_zero_inflated && type %in% c("fe", "re")) { if (type == "fe") type <- "fe.zi" else type <- "re.zi" message(sprintf("Model has zero-inflation part, predicted values can only be conditioned on zero-inflation part. Changing prediction-type to \"%s\".", type)) } response_name <- insight::find_response(model) if (is.null(condition) || !(response_name %in% names(condition))) { warning(sprintf("Results for MixMod-objects may vary depending on which value the response is conditioned on. Make sure to choose a sensible value for '%s' using the 'condition'-argument.", response_name), call. = FALSE) } prtype <- switch( type, "fe" = , "fe.zi" = "mean_subject", "re" = , "re.zi" = "subject_specific", "mean_subject" ) prdat <- stats::predict( model, newdata = data_grid, type = prtype, type_pred = "response", se.fit = se, level = ci.lvl, ... 
  )

  if (!is.list(prdat)) {
    prdat <- list(pred = prdat)
  }

  predicted_data$predicted <- prdat$pred

  if (model_info$is_zero_inflated && prtype == "mean_subject") {
    add.args <- lapply(match.call(expand.dots = FALSE)$`...`, function(x) x)

    if ("nsim" %in% names(add.args))
      nsim <- eval(add.args[["nsim"]])
    else
      nsim <- 1000

    model_frame <- insight::get_data(model)
    clean_terms <- .clean_terms(terms)

    newdata <- .data_grid(
      model = model,
      model_frame = model_frame,
      terms = terms,
      value_adjustment = value_adjustment,
      factor_adjustment = FALSE,
      show_pretty_message = FALSE,
      condition = condition
    )

    # Since the zero-inflation and the conditional model are working in "opposite
    # directions", confidence intervals cannot be derived directly from the
    # "predict()"-function. Thus, confidence intervals for type = "fe.zi" are
    # based on quantiles of simulated draws from a multivariate normal distribution
    # (see also _Brooks et al. 2017, pp. 391-392_ for details).
    prdat.sim <- .simulate_predictions(model, newdata, nsim, terms, value_adjustment, condition)

    if (is.null(prdat.sim) || inherits(prdat.sim, c("error", "simpleError"))) {
      insight::print_color("Error: Confidence intervals could not be computed.\n", "red")
      if (inherits(prdat.sim, c("error", "simpleError"))) {
        cat(sprintf("* Reason: %s\n", .safe_deparse(prdat.sim[[1]])))
        cat(sprintf("* Source: %s\n", .safe_deparse(prdat.sim[[2]])))
      }
      predicted_data$conf.low <- NA
      predicted_data$conf.high <- NA
    } else {
      # we need two data grids here: one for all combinations of levels from the
      # model predictors ("newdata"), and one with the current combinations only
      # for the terms in question ("data_grid"). "sims" always has the same
      # number of rows as "newdata", but "data_grid" might be shorter. So we
      # merge "data_grid" and "newdata", add mean and quantiles from "sims"
      # as new variables, and then later only keep the original observations
      # from "data_grid" - by this, we avoid unequal row-lengths.
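      # Numerical illustration of the expression below (hypothetical draws for
      # a single observation): the conditional part is on the log scale, the
      # zero-inflation part on the logit scale, so the expected response per
      # draw is exp(cond) * (1 - plogis(zi)); e.g. for cond = 1.0, zi = -2.0:
      #
      #   exp(1) * (1 - stats::plogis(-2))   # ~ 2.718 * 0.881 ~ 2.39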
sims <- exp(prdat.sim$cond) * (1 - stats::plogis(prdat.sim$zi)) predicted_data <- .join_simulations(data_grid, newdata, prdat, sims, ci, clean_terms) } } else { if (.obj_has_name(prdat, "upp")) { predicted_data$conf.low <- prdat$low predicted_data$conf.high <- prdat$upp } else if (!is.null(prdat$se.fit)) { lf <- insight::link_function(model) if (is.null(lf)) lf <- function(x) x predicted_data$conf.low <- linv(lf(predicted_data$predicted) - stats::qnorm(ci) * prdat$se.fit) predicted_data$conf.high <- linv(lf(predicted_data$predicted) + stats::qnorm(ci) * prdat$se.fit) } else { predicted_data$conf.low <- NA predicted_data$conf.high <- NA } } # copy standard errors attr(predicted_data, "std.error") <- prdat$se.fit attr(predicted_data, "prediction.interval") <- type %in% c("re", "re.zi") predicted_data } ggeffects/R/utils_get_cleaned_terms.R0000644000176200001440000000077613567465450017450 0ustar liggesusers.clean_terms <- function(x) { # get positions of variable names and see if we have # a suffix for certain values cleaned.pos <- regexpr(pattern = "(\\s|\\[)", x) # position "-1" means we only had variable name, no suffix replacers <- which(cleaned.pos == -1) # replace -1 with number of chars cleaned.pos[replacers] <- nchar(x)[replacers] # get variable names only x <- trimws(substr(x, 0, cleaned.pos)) # be sure to remove any brackets sub("[", "", x, fixed = TRUE) } ggeffects/R/predictions.R0000644000176200001440000002331313604103340015052 0ustar liggesusers# select prediction method, based on model-object #' @importFrom insight find_response get_response get_data model_info link_inverse is_multivariate select_prediction_method <- function(model_class, model, data_grid, ci.lvl, type, model_info, ppd, terms, value_adjustment, vcov.fun, vcov.type, vcov.args, condition, interval, ...) 
{ # get link-inverse-function linv <- insight::link_inverse(model) if (is.null(linv)) linv <- function(x) x if (model_class == "svyglm") { prediction_data <- get_predictions_svyglm(model, data_grid, ci.lvl, linv, ...) } else if (model_class == "svyglm.nb") { prediction_data <- get_predictions_svyglmnb(model, data_grid, ci.lvl, linv, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) } else if (model_class == "stanreg") { prediction_data <- get_predictions_stan(model, data_grid, ci.lvl, type, model_info, ppd, terms, ...) } else if (model_class == "brmsfit") { prediction_data <- get_predictions_stan(model, data_grid, ci.lvl, type, model_info, ppd, terms, ...) } else if (model_class == "coxph" && type != "surv" && type != "cumhaz") { prediction_data <- get_predictions_coxph(model, data_grid, ci.lvl, value_adjustment, model_class, vcov.fun, vcov.type, vcov.args, condition, interval, ...) } else if (model_class == "coxph" && type %in% c("surv", "cumhaz")) { prediction_data <- get_predictions_survival(model, data_grid, ci.lvl, type, terms, ...) } else if (model_class == "ols") { prediction_data <- get_predictions_ols(model, data_grid, ci.lvl, ...) } else if (model_class == "lrm") { prediction_data <- get_predictions_lrm(model, data_grid, ci.lvl, linv, ...) } else if (model_class == "glimML") { prediction_data <- get_predictions_glimML(model, data_grid, ci.lvl, linv, ...) } else if (model_class == "glmmTMB") { prediction_data <- get_predictions_glmmTMB(model, data_grid, ci.lvl, linv, type, terms, value_adjustment, condition, ...) } else if (model_class == "wbm") { prediction_data <- get_predictions_wbm(model, data_grid, ci.lvl, linv, type, terms, condition, ...) } else if (model_class %in% c("lmer", "nlmer", "glmer")) { prediction_data <- get_predictions_merMod(model, data_grid, ci.lvl, linv, type, terms, value_adjustment, condition, ...) 
} else if (model_class == "geeglm") { prediction_data <- get_predictions_geeglm(model, data_grid, ci.lvl, type, model_class, value_adjustment, terms, condition, ...) } else if (model_class == "gamlss") { prediction_data <- get_predictions_gamlss(model, data_grid, ci.lvl, terms, model_class, value_adjustment, condition, ...) } else if (model_class == "bamlss") { prediction_data <- get_predictions_bamlss(model, data_grid, linv, ...) } else if (model_class == "bayesx") { prediction_data <- get_predictions_bayesx(model, data_grid, ...) } else if (model_class == "cgam") { prediction_data <- get_predictions_cgam(model, data_grid, ci.lvl, linv, value_adjustment, model_class, terms, condition, ...) } else if (model_class == "gam") { prediction_data <- get_predictions_gam(model, data_grid, ci.lvl, linv, type, ...) } else if (model_class == "Gam") { prediction_data <- get_predictions_Gam(model, data_grid, ci.lvl, linv, value_adjustment, terms, model_class, condition, ...) # } else if (model_class == "vgam") { # prediction_data <- get_predictions_vgam(model, data_grid, ci.lvl, linv, ...) } else if (model_class == "vglm") { prediction_data <- get_predictions_vglm(model, data_grid, ci.lvl, linv, ...) } else if (model_class == "tobit") { prediction_data <- get_predictions_tobit(model, data_grid, ci.lvl, linv, ...) } else if (model_class %in% c("lme", "gls", "plm")) { prediction_data <- get_predictions_lme(model, data_grid, ci.lvl, linv, type, terms, value_adjustment, model_class, vcov.fun, vcov.type, vcov.args, condition, ...) } else if (model_class == "gee") { prediction_data <- get_predictions_gee(model, terms, ...) } else if (model_class %in% c("multinom", "bracl", "brmultinom")) { prediction_data <- get_predictions_multinom(model, data_grid, ci.lvl, linv, value_adjustment, terms, model_class, ...) } else if (model_class == "clmm") { prediction_data <- get_predictions_clmm(model, terms, value_adjustment, condition, ci.lvl, linv, ...) 
} else if (model_class == "clm") { prediction_data <- get_predictions_clm(model, data_grid, ci.lvl, linv, ...) } else if (model_class == "clm2") { prediction_data <- get_predictions_clm2(model, data_grid, ci.lvl, linv, ...) } else if (model_class == "Zelig-relogit") { prediction_data <- get_predictions_zelig(model, data_grid, ci.lvl, linv, ...) } else if (model_class == "mixor") { prediction_data <- get_predictions_mixor(model, data_grid, ci.lvl, linv, value_adjustment, terms, model_class, condition, ...) } else if (model_class == "polr") { prediction_data <- get_predictions_polr(model, data_grid, ci.lvl, linv, value_adjustment, terms, model_class, vcov.fun, vcov.type, vcov.args, condition, interval, ...) } else if (model_class %in% c("betareg", "truncreg", "ivreg", "vgam", "fixest", "feglm", "glmx")) { prediction_data <- get_predictions_generic2(model, data_grid, ci.lvl, linv, type, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) } else if (model_class %in% c("zeroinfl", "hurdle", "zerotrunc")) { prediction_data <- get_predictions_zeroinfl(model, data_grid, ci.lvl, linv, type, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, ...) } else if (model_class %in% c("glm", "glm.nb")) { prediction_data <- get_predictions_glm(model, data_grid, ci.lvl, linv, value_adjustment, model_class, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) } else if (model_class %in% c("rq")) { prediction_data <- get_predictions_rq(model, data_grid, ci.lvl, ...) } else if (model_class %in% c("lmrob")) { prediction_data <- get_predictions_lmrob_base(model, data_grid, ci.lvl, ...) } else if (model_class %in% c("glmrob")) { prediction_data <- get_predictions_glmrob_base(model, data_grid, ci.lvl, linv, ...) 
} else if (model_class %in% c("glmRob")) { prediction_data <- get_predictions_glmRob(model, data_grid, ci.lvl, linv, value_adjustment, model_class, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) } else if (model_class == "logistf") { prediction_data <- get_predictions_logistf(model, data_grid, terms, ...) } else if (model_class == "lm") { prediction_data <- get_predictions_lm(model, data_grid, ci.lvl, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) } else if (model_class == "MixMod") { prediction_data <- get_predictions_MixMod(model, data_grid, ci.lvl, linv, type, terms, value_adjustment, condition, ...) } else if (model_class == "MCMCglmm") { prediction_data <- get_predictions_MCMCglmm(model, data_grid, ci.lvl, interval, ...) } else { prediction_data <- get_predictions_generic(model, data_grid, linv, ...) } prediction_data } .generic_prediction_data <- function(model, fitfram, linv, prdat, se, ci.lvl, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition = NULL, interval = NULL) { # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # copy predictions if (typeof(prdat) == "double") .predicted <- prdat else .predicted <- prdat$fit # get standard errors, if computed if (.obj_has_name(prdat, "se.fit")) { se.fit <- prdat$se.fit # reset interval, since we have normal confidence intervals already here if (interval == "confidence") interval <- NULL } else { se.fit <- NULL } # get predicted values, on link-scale fitfram$predicted <- .predicted # did user request robust standard errors? 
  if (!is.null(vcov.fun) || (!is.null(interval) && se)) {
    se.pred <- .standard_error_predictions(
      model = model,
      prediction_data = fitfram,
      value_adjustment = value_adjustment,
      terms = terms,
      model_class = model_class,
      vcov.fun = vcov.fun,
      vcov.type = vcov.type,
      vcov.args = vcov.args,
      condition = condition,
      interval = interval
    )

    if (!is.null(se.pred)) {
      fitfram <- se.pred$prediction_data
      se.fit <- se.pred$se.fit
      se <- TRUE
    } else {
      se.fit <- NULL
      se <- FALSE
    }
  } else {
    se.pred <- NULL
  }

  # did user request standard errors? if yes, compute CI
  if (se && !is.null(se.fit)) {
    fitfram$conf.low <- linv(fitfram$predicted - stats::qnorm(ci) * se.fit)
    fitfram$conf.high <- linv(fitfram$predicted + stats::qnorm(ci) * se.fit)

    # copy standard errors
    attr(fitfram, "std.error") <- se.fit
    if (!is.null(se.pred))
      attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval")
  } else {
    # No CI
    fitfram$conf.low <- NA
    fitfram$conf.high <- NA
  }

  # transform predicted values
  fitfram$predicted <- linv(fitfram$predicted)

  fitfram
}
ggeffects/R/moderator_pattern.R0000644000176200001440000001023713533466546016302 0ustar liggesusers#' @title Calculate representative values of a vector
#' @name values_at
#'
#' @description This function calculates representative values of a vector,
#'   like minimum/maximum values or lower, median and upper quartile etc.,
#'   which can be used for numeric vectors to plot marginal effects at these
#'   representative values.
#'
#' @param x A numeric vector.
#' @param values Character vector, naming a pattern for which representative values
#'   should be calculated.
#' \describe{ #' \item{\code{"minmax"}}{(default) minimum and maximum values (lower and upper bounds) of the moderator are used to plot the interaction between independent variable and moderator.} #' \item{\code{"meansd"}}{uses the mean value of the moderator as well as one standard deviation below and above mean value to plot the effect of the moderator on the independent variable.} #' \item{\code{"zeromax"}}{is similar to the \code{"minmax"} option, however, \code{0} is always used as minimum value for the moderator. This may be useful for predictors that don't have an empirical zero-value, but absence of moderation should be simulated by using 0 as minimum.} #' \item{\code{"quart"}}{calculates and uses the quartiles (lower, median and upper) of the moderator value, \emph{including} minimum and maximum value.} #' \item{\code{"quart2"}}{calculates and uses the quartiles (lower, median and upper) of the moderator value, \emph{excluding} minimum and maximum value.} #' \item{\code{"all"}}{uses all values of the moderator variable. Note that this option only applies to \code{type = "eff"}, for numeric moderator values.} #' } #' #' @return A numeric vector of length two or three, representing the required #' values from \code{x}, like minimum/maximum value or mean and +/- 1 SD. #' #' @examples #' data(efc) #' values_at(efc$c12hour) #' values_at(efc$c12hour, "quart2") #' #' @importFrom stats sd quantile #' @export values_at <- function(x, values = "meansd") { # check if representative value is possible to compute # e.g. for quantiles, if we have at least three values values <- check_rv(values, x) # we have more than two values, so re-calculate effects, just using # min and max value of moderator. if (values == "minmax") { # retrieve min and max values mv.min <- min(x, na.rm = T) mv.max <- max(x, na.rm = T) # re-compute effects, prepare xlevels xl <- c(mv.min, mv.max) # we have more than two values, so re-calculate effects, just using # 0 and max value of moderator. 
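  # For illustration (hypothetical input), with x <- c(1, 2, 3, 4, 100):
  #   values_at(x, "minmax")  returns c(1, 100)
  #   values_at(x, "meansd")  returns mean(x) and mean(x) +/- sd(x),
  #                           i.e. roughly c(-21.6, 22, 65.6)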
} else if (values == "zeromax") { # retrieve max values mv.max <- max(x, na.rm = T) # re-compute effects, prepare xlevels xl <- c(0, mv.max) # compute mean +/- sd } else if (values == "meansd") { # retrieve mean and sd mv.mean <- mean(x, na.rm = T) mv.sd <- stats::sd(x, na.rm = T) # re-compute effects, prepare xlevels xl <- c(mv.mean - mv.sd, mv.mean, mv.mean + mv.sd) } else if (values == "all") { # re-compute effects, prepare xlevels xl <- as.vector(unique(sort(x, na.last = NA))) } else if (values == "quart") { # re-compute effects, prepare xlevels xl <- as.vector(stats::quantile(x, na.rm = T)) } else if (values == "quart2") { # re-compute effects, prepare xlevels xl <- as.vector(stats::quantile(x, na.rm = T))[2:4] } if (is.whole(x)) { rv <- round(xl, 1) if (length(unique(rv)) < length(rv)) rv <- unique(round(xl, 2)) } else rv <- round(xl, 2) if (length(unique(rv)) < length(rv)) { rv <- unique(round(xl, 3)) if (length(unique(rv)) < length(rv)) rv <- unique(round(xl, 4)) } rv } #' @importFrom stats quantile check_rv <- function(values, x) { mvc <- length(unique(as.vector(stats::quantile(x, na.rm = T)))) if (values %in% c("quart", "quart2") && mvc < 3) { # tell user that quart won't work message("Could not compute quartiles, too small range of variable. Defaulting `values` to `minmax`.") values <- "minmax" } values } #' @rdname values_at #' @export representative_values <- values_at ggeffects/R/efc.R0000644000176200001440000000113613451124203013264 0ustar liggesusers#' @docType data #' @title Sample dataset from the EUROFAMCARE project #' @name efc #' @aliases efc_test #' @keywords data #' #' @description A SPSS sample data set, imported with the \code{\link[sjlabelled]{read_spss}} function. 
#' #' @examples #' # Attach EFC-data #' data(efc) #' #' # Show structure #' str(efc) #' #' # show first rows #' head(efc) #' #' # show variables #' \dontrun{ #' library(sjmisc) #' library(sjPlot) #' view_df(efc) #' #' # show variable labels #' get_label(efc) #' #' # plot efc-data frame summary #' sjt.df(efc, alternateRowColor = TRUE)} #' NULL ggeffects/R/get_predictions_coxph.R0000644000176200001440000000410313565167350017127 0ustar liggesusersget_predictions_coxph <- function(model, fitfram, ci.lvl, value_adjustment, model_class, vcov.fun, vcov.type, vcov.args, condition, interval, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- stats::predict( model, newdata = fitfram, type = "lp", se.fit = se, ... ) # did user request standard errors? if yes, compute CI if (!is.null(vcov.fun) || (!is.null(interval) && interval == "prediction")) { # copy predictions fitfram$predicted <- exp(prdat$fit) se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, terms = terms, model_class = model_class, vcov.fun = vcov.fun, vcov.type = vcov.type, vcov.args = vcov.args, condition = condition, interval = interval ) if (!is.null(se.pred)) { se.fit <- se.pred$se.fit fitfram <- se.pred$prediction_data # CI fitfram$conf.low <- fitfram$predicted - stats::qnorm(ci) * se.fit fitfram$conf.high <- fitfram$predicted + stats::qnorm(ci) * se.fit # copy standard errors attr(fitfram, "std.error") <- se.fit attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval") } else { # CI fitfram$conf.low <- NA fitfram$conf.high <- NA } } else if (se) { # copy predictions fitfram$predicted <- exp(prdat$fit) # calculate CI fitfram$conf.low <- exp(prdat$fit - stats::qnorm(ci) * prdat$se.fit) fitfram$conf.high <- exp(prdat$fit + stats::qnorm(ci) * prdat$se.fit) # copy standard errors 
    attr(fitfram, "std.error") <- prdat$se.fit
  } else {
    # copy predictions
    fitfram$predicted <- exp(as.vector(prdat))

    # no CI
    fitfram$conf.low <- NA
    fitfram$conf.high <- NA
  }

  fitfram
}
ggeffects/R/get_predictions_zelig.R0000644000176200001440000000104513451124203017102 0ustar liggesusersget_predictions_zelig <- function(model, fitfram, ci.lvl, linv, ...) {
  stop("`ggpredict()` currently does not support Zelig models.", call. = FALSE)

  # does user want standard errors?
  se <- !is.null(ci.lvl) && !is.na(ci.lvl)

  # compute ci, two-ways
  if (!is.null(ci.lvl) && !is.na(ci.lvl))
    ci <- (1 + ci.lvl) / 2
  else
    ci <- .975

  # prediction, with CI
  # prdat <-
  #   Zelig::predict(
  #     model,
  #     newdata = fitfram,
  #     interval = se,
  #     level = ci,
  #     ...
  #   )

  NULL
}
ggeffects/R/get_predictions_cgam.R0000644000176200001440000000363413604103340016704 0ustar liggesusersget_predictions_cgam <- function(model,
                                 fitfram,
                                 ci.lvl,
                                 linv,
                                 value_adjustment,
                                 model_class,
                                 terms,
                                 condition,
                                 ...) {
  # does user want standard errors?
  se <- !is.null(ci.lvl) && !is.na(ci.lvl)

  # if so, standard errors are computed separately later; predict()
  # itself is not asked for a confidence interval
  if (se)
    interval <- "confidence"
  else
    interval <- "none"

  # compute ci, two-ways
  if (!is.null(ci.lvl) && !is.na(ci.lvl))
    ci <- (1 + ci.lvl) / 2
  else
    ci <- .975

  prdat <-
    stats::predict(
      model,
      newData = fitfram,
      type = "link",
      interval = "none",
      ...
) # copy predictions if (typeof(prdat) == "double") .predicted <- prdat else .predicted <- prdat$fit # get standard errors, if computed # get predicted values, on link-scale fitfram$predicted <- .predicted if (se) { se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, terms = terms, model_class = model_class, vcov.fun = NULL, vcov.type = NULL, vcov.args = NULL, condition = condition, interval = interval ) if (!is.null(se.pred)) { fitfram <- se.pred$prediction_data se.fit <- se.pred$se.fit se <- TRUE } else { se.fit <- NULL se <- FALSE } } else { se.pred <- NULL } if (se) { fitfram$conf.low <- linv(fitfram$predicted - stats::qnorm(ci) * se.fit) fitfram$conf.high <- linv(fitfram$predicted + stats::qnorm(ci) * se.fit) # copy standard errors attr(fitfram, "std.error") <- se.fit if (!is.null(se.pred)) attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval") } else { # No CI fitfram$conf.low <- NA fitfram$conf.high <- NA } # transform predicted values fitfram$predicted <- linv(fitfram$predicted) fitfram } ggeffects/R/get_predictions_vgam.R0000644000176200001440000000041313451124203016720 0ustar liggesusersget_predictions_vgam <- function(model, fitfram, ci.lvl, linv, ...) { prdat <- stats::predict( model, newdata = fitfram, type = "link", se.fit = FALSE ) # copy predictions fitfram$predicted <- linv(as.vector(prdat)) fitfram } ggeffects/R/get_predictions_lm.R0000644000176200001440000000510513577107525016422 0ustar liggesusersget_predictions_lm <- function(model, fitfram, ci.lvl, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) && is.null(vcov.fun) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- stats::predict( model, newdata = fitfram, type = "response", se.fit = se, ... 
) # did user request standard errors? if yes, compute CI if (!is.null(vcov.fun) || (!is.null(interval) && interval == "prediction")) { # copy predictions if ("fit" %in% names(prdat)) fitfram$predicted <- as.vector(prdat$fit) else fitfram$predicted <- as.vector(prdat) se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, terms = terms, model_class = model_class, vcov.fun = vcov.fun, vcov.type = vcov.type, vcov.args = vcov.args, condition = condition, interval = interval ) if (!is.null(se.pred)) { se.fit <- se.pred$se.fit fitfram <- se.pred$prediction_data # CI fitfram$conf.low <- fitfram$predicted - stats::qnorm(ci) * se.fit fitfram$conf.high <- fitfram$predicted + stats::qnorm(ci) * se.fit # copy standard errors attr(fitfram, "std.error") <- se.fit attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval") } else { # CI fitfram$conf.low <- NA fitfram$conf.high <- NA } } else if (se) { # copy predictions fitfram$predicted <- prdat$fit # calculate CI fitfram$conf.low <- prdat$fit - stats::qnorm(ci) * prdat$se.fit fitfram$conf.high <- prdat$fit + stats::qnorm(ci) * prdat$se.fit # copy standard errors attr(fitfram, "std.error") <- prdat$se.fit } else { # check if we have a multivariate response model pdim <- dim(prdat) if (!is.null(pdim) && pdim[2] > 1) { tmp <- cbind(fitfram, as.data.frame(prdat)) gather.vars <- (ncol(fitfram) + 1):ncol(tmp) fitfram <- .gather( tmp, names_to = "response.level", values_to = "predicted", colnames(tmp)[gather.vars] ) } else { # copy predictions fitfram$predicted <- as.vector(prdat) } # no CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/get_predictions_svyglm.R0000644000176200001440000000260513451124203017314 0ustar liggesusersget_predictions_svyglm <- function(model, fitfram, ci.lvl, linv, ...) { # does user want standard errors? 
  se <- !is.null(ci.lvl) && !is.na(ci.lvl)

  # compute ci, two-ways
  if (!is.null(ci.lvl) && !is.na(ci.lvl))
    ci <- (1 + ci.lvl) / 2
  else
    ci <- .975

  # get predictions
  prdat <- stats::predict(
    model,
    newdata = fitfram,
    type = "link",
    se.fit = se,
    ...
  )

  # check if user wants standard errors
  if (se) {
    # get variance matrix for standard errors. "survey" stores the
    # information somewhat differently from the classical predict function
    vv <- attr(prdat, "var")

    # compute standard errors
    if (is.matrix(vv))
      prdat <- as.data.frame(cbind(prdat, sqrt(diag(vv))))
    else
      prdat <- as.data.frame(cbind(prdat, sqrt(vv)))

    # consistent column names
    colnames(prdat) <- c("fit", "se.fit")

    # copy predictions
    fitfram$predicted <- linv(prdat$fit)

    # calculate CI
    fitfram$conf.low <- linv(prdat$fit - stats::qnorm(ci) * prdat$se.fit)
    fitfram$conf.high <- linv(prdat$fit + stats::qnorm(ci) * prdat$se.fit)

    # copy standard errors
    attr(fitfram, "std.error") <- prdat$se.fit
  } else {
    # copy predictions
    fitfram$predicted <- linv(as.vector(prdat))

    # no CI
    fitfram$conf.low <- NA
    fitfram$conf.high <- NA
  }

  fitfram
}
ggeffects/R/get_predictions_polr.R0000644000176200001440000000410513577107561016765 0ustar liggesusersget_predictions_polr <- function(model,
                                 fitfram,
                                 ci.lvl,
                                 linv,
                                 value_adjustment,
                                 terms,
                                 model_class,
                                 vcov.fun,
                                 vcov.type,
                                 vcov.args,
                                 condition,
                                 interval,
                                 ...) {
  se <- (!is.null(ci.lvl) && !is.na(ci.lvl)) || !is.null(vcov.fun)

  # compute ci, two-ways
  if (!is.null(ci.lvl) && !is.na(ci.lvl))
    ci <- (1 + ci.lvl) / 2
  else
    ci <- .975

  prdat <- stats::predict(
    model,
    newdata = fitfram,
    type = "probs",
    ...
  )

  prdat <- as.data.frame(prdat)

  # usually, predictions and model frame have the same number of rows.
  # this is, however, not true when calling the "emm()" function. in
  # that case, just return the predictions
  if (nrow(prdat) > nrow(fitfram) && ncol(prdat) == 1) {
    colnames(prdat)[1] <- "predicted"
    return(.rownames_as_column(prdat, var = "response.level"))
  }

  # bind predictions to model frame
  fitfram <- cbind(prdat, fitfram)

  # for proportional ordinal logistic regression (see MASS::polr),
  # we have predicted values for each response category. Hence,
  # gather columns
  fitfram <- .gather(fitfram, names_to = "response.level", values_to = "predicted", colnames(prdat))

  se.pred <-
    .standard_error_predictions(
      model = model,
      prediction_data = fitfram,
      value_adjustment = value_adjustment,
      terms = terms,
      model_class = model_class,
      vcov.fun = vcov.fun,
      vcov.type = vcov.type,
      vcov.args = vcov.args,
      condition = condition,
      interval = interval
    )

  if (!is.null(se.pred) && isTRUE(se)) {
    se.fit <- se.pred$se.fit
    fitfram <- se.pred$prediction_data

    # CI
    fitfram$conf.low <- linv(stats::qlogis(fitfram$predicted) - stats::qnorm(ci) * se.fit)
    fitfram$conf.high <- linv(stats::qlogis(fitfram$predicted) + stats::qnorm(ci) * se.fit)

    # copy standard errors
    attr(fitfram, "std.error") <- se.fit
    attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval")
  } else {
    # no CI
    fitfram$conf.low <- NA
    fitfram$conf.high <- NA
  }

  fitfram
}
ggeffects/R/get_predictions_geeglm.R0000644000176200001440000000256213567445614017251 0ustar liggesusersget_predictions_geeglm <- function(model,
                                   fitfram,
                                   ci.lvl,
                                   type,
                                   model_class,
                                   value_adjustment,
                                   terms,
                                   condition,
                                   ...) {
  se <- (!is.null(ci.lvl) && !is.na(ci.lvl))

  # compute ci, two-ways
  if (!is.null(ci.lvl) && !is.na(ci.lvl))
    ci <- (1 + ci.lvl) / 2
  else
    ci <- .975

  # get predictions
  prdat <- stats::predict(
    model,
    newdata = fitfram,
    ...
) fitfram$predicted <- as.vector(prdat) # get standard errors from variance-covariance matrix se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, type = type, terms = terms, model_class = model_class, vcov.fun = NULL, vcov.type = NULL, vcov.args = NULL, condition = condition, interval = NULL ) if (!is.null(se.pred) && isTRUE(se)) { se.fit <- se.pred$se.fit fitfram <- se.pred$prediction_data # CI fitfram$conf.low <- fitfram$predicted - stats::qnorm(ci) * se.fit fitfram$conf.high <- fitfram$predicted + stats::qnorm(ci) * se.fit # copy standard errors attr(fitfram, "std.error") <- se.fit attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval") } else { # CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/utils_is_empty.R0000644000176200001440000000120213567464647015625 0ustar liggesusers.is_empty <- function(x) { # do we have a valid vector? if (!is.null(x)) { # if it's a character, check if we have only one element in that vector if (is.character(x)) { # characters may also be of length 0 if (length(x) == 0) return(TRUE) # else, check first elements of x zero_len <- isTRUE((nchar(x) == 0)[1]) if (length(x) > 0) x <- x[1] # we have a non-character vector here. check for length } else if (is.list(x)) { x <- .compact_list(x) zero_len <- length(x) == 0 } else { zero_len <- length(x) == 0 } } any(is.null(x) || zero_len || all(is.na(x))) } ggeffects/R/get_predictions_ols.R0000644000176200001440000000166513541155270016605 0ustar liggesusersget_predictions_ols <- function(model, fitfram, ci.lvl, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- stats::predict( model, newdata = fitfram, type = "lp", se.fit = se, ... 
) if (se) { # copy predictions fitfram$predicted <- prdat$linear.predictors # calculate CI fitfram$conf.low <- prdat$linear.predictors - stats::qnorm(ci) * prdat$se.fit fitfram$conf.high <- prdat$linear.predictors + stats::qnorm(ci) * prdat$se.fit # copy standard errors attr(fitfram, "std.error") <- prdat$se.fit } else { # copy predictions fitfram$predicted <- as.vector(prdat$linear.predictors) # no CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/get_predictions_merMod.R0000644000176200001440000000465613565167350017246 0ustar liggesusersget_predictions_merMod <- function(model, fitfram, ci.lvl, linv, type, terms, value_adjustment, condition, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # check whether predictions should be conditioned # on random effects (grouping level) or not. if (type == "fe") ref <- NA else ref <- NULL clean_terms <- .clean_terms(terms) if (type == "sim") { add.args <- lapply(match.call(expand.dots = F)$`...`, function(x) x) if ("nsim" %in% names(add.args)) nsim <- eval(add.args[["nsim"]]) else nsim <- 1000 fitfram <- simulate_predictions(model, nsim, clean_terms, ci) } else { fitfram$predicted <- suppressWarnings(stats::predict( model, newdata = fitfram, type = "response", re.form = ref, allow.new.levels = TRUE, ... 
)) if (se) { # get standard errors from variance-covariance matrix se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, terms = terms, type = type, condition = condition ) if (!is.null(se.pred)) { se.fit <- se.pred$se.fit fitfram <- se.pred$prediction_data if (is.null(linv)) { # calculate CI for linear mixed models fitfram$conf.low <- fitfram$predicted - stats::qnorm(ci) * se.fit fitfram$conf.high <- fitfram$predicted + stats::qnorm(ci) * se.fit } else { # get link-function and back-transform fitted values # to original scale, so we compute proper CI lf <- insight::link_function(model) # calculate CI for glmm fitfram$conf.low <- linv(lf(fitfram$predicted) - stats::qnorm(ci) * se.fit) fitfram$conf.high <- linv(lf(fitfram$predicted) + stats::qnorm(ci) * se.fit) } # copy standard errors attr(fitfram, "std.error") <- se.fit attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval") } else { fitfram$conf.low <- NA fitfram$conf.high <- NA } } else { fitfram$conf.low <- NA fitfram$conf.high <- NA } } fitfram } ggeffects/R/post_processing_predictions.R0000644000176200001440000000614513577113327020376 0ustar liggesusers.post_processing_predictions <- function(model, prediction_data, original_model_frame, cleaned_terms) { # check for correct terms specification if (!all(cleaned_terms %in% colnames(prediction_data))) { stop("At least one term specified in `terms` is no valid model term.", call. 
= FALSE)
  }

  # copy standard errors
  if (!.obj_has_name(prediction_data, "std.error")) {
    prediction_data$std.error <- attr(prediction_data, "std.error")
  } else {
    attr(prediction_data, "std.error") <- prediction_data$std.error
  }

  # now select only relevant variables: the predictors on the x-axis,
  # the predictions and the original response vector (needed for scatter plot)
  columns_to_keep <- c(cleaned_terms, "predicted", "std.error", "conf.low", "conf.high", "response.level")
  result <- prediction_data[, intersect(columns_to_keep, colnames(prediction_data))]

  # name and sort columns, depending on groups, facet and panel
  result <- .prepare_columns(result, cleaned_terms)

  # grouping variable may not be labelled
  # do this here, so we convert to labelled factor later
  result <- .add_labels_to_groupvariable(result, original_model_frame, cleaned_terms)

  # convert grouping variable to factor, for proper legend
  result <- .groupvariable_to_labelled_factor(result)

  # check if we have legend labels
  legend.labels <- sjlabelled::get_labels(result$group)

  # if we had a numeric variable w/o labels, these still might be numeric.
  # make sure we have factors here for our grouping and facet variables
  if (is.numeric(result$group)) {
    result$group <- as.factor(result$group)
  }

  # remember if x was a factor
  x.is.factor <- ifelse(is.factor(result$x), "1", "0")

  # sort values
  result <- result[order(result$x, result$group), ]

  empty_columns <- which(colSums(is.na(result)) == nrow(result))
  if (length(empty_columns)) result <- result[, -empty_columns]

  if (.obj_has_name(result, "facet") && is.numeric(result$facet)) {
    result$facet <- as.factor(result$facet)
    attr(result, "numeric.facet") <- TRUE
  }

  attr(result, "legend.labels") <- legend.labels
  attr(result, "x.is.factor") <- x.is.factor
  attr(result, "continuous.group") <- attr(prediction_data, "continuous.group") & is.null(attr(original_model_frame[[cleaned_terms[2]]], "labels"))

  result
}


# name and sort columns, depending on groups, facet and panel
.prepare_columns <- function(result, cleaned_terms) {
  columns <- c("x", "predicted", "std.error", "conf.low", "conf.high", "response.level", "group", "facet", "panel")

  # with or w/o grouping factor?
  if (length(cleaned_terms) == 1) {
    colnames(result)[1] <- "x"
    # convert to factor for proper legend
    result$group <- as.factor(1)
  } else if (length(cleaned_terms) == 2) {
    colnames(result)[1:2] <- c("x", "group")
  } else if (length(cleaned_terms) == 3) {
    colnames(result)[1:3] <- c("x", "group", "facet")
  } else if (length(cleaned_terms) == 4) {
    colnames(result)[1:4] <- c("x", "group", "facet", "panel")
  }

  # sort columns
  result[, columns[columns %in% colnames(result)]]
}
ggeffects/R/ggeffect.R0000644000176200001440000002363213611270610014310 0ustar liggesusers#' @rdname ggpredict
#'
#' @importFrom stats na.omit
#' @importFrom sjlabelled as_numeric
#' @importFrom insight find_predictors link_inverse print_color
#' @export
ggeffect <- function(model, terms, ci.lvl = .95, ...) {
  if (!requireNamespace("effects", quietly = TRUE)) {
    message("Package `effects` is not available, but needed for `ggeffect()`. Either install package `effects`, or use `ggpredict()`. Calling `ggpredict()` now.")
    return(ggpredict(model = model, terms = terms, ci.lvl = ci.lvl))
  }

  # check if terms are a formula
  if (!missing(terms) && !is.null(terms) && inherits(terms, "formula")) {
    terms <- all.vars(terms)
  }

  if (inherits(model, "list") && !inherits(model, c("bamlss", "maxLik"))) {
    res <- lapply(model, function(.x) ggeffect_helper(.x, terms, ci.lvl, ...))
  } else {
    if (missing(terms) || is.null(terms)) {
      predictors <- insight::find_predictors(model, effects = "fixed", component = "conditional", flatten = TRUE)
      res <- lapply(
        predictors,
        function(.x) {
          tmp <- ggeffect_helper(model, terms = .x, ci.lvl, ...)
          if (!is.null(tmp)) tmp$group <- .x
          tmp
        }
      )

      no_results <- sapply(res, is.null)
      res <- .compact_list(res)

      if (!is.null(res) && !.is_empty(res)) {
        names(res) <- predictors[!no_results]
        class(res) <- c("ggalleffects", class(res))
      } else {
        res <- NULL
      }
    } else {
      res <- ggeffect_helper(model, terms, ci.lvl, ...)
    }
  }

  res
}


ggeffect_helper <- function(model, terms, ci.lvl, ...) {
  # check terms argument
  original_terms <- terms <- .check_vars(terms, model)
  cleaned_terms <- .clean_terms(terms)

  # get model frame
  original_model_frame <- insight::get_data(model)

  # get model family
  model_info <- .get_model_info(model)

  # check whether we have an argument "transformation" for the effects()-function.
  # in this case, we need another default title, since we have
  # non-transformed effects
  additional_dot_args <- lapply(match.call(expand.dots = FALSE)$`...`, function(x) x)

  # check whether we have a "transformation" argument
  t.add <- which(names(additional_dot_args) == "transformation")

  # if we have a "transformation" argument, and it's NULL,
  # no transformation of scale
  no.transform <- !.is_empty(t.add) && is.null(eval(additional_dot_args[[t.add]]))

  # check if we have specific levels in square brackets
  at_values <- .get_representative_values(terms, original_model_frame)

  # clear argument from brackets
  terms <- .clean_terms(terms)

  # check for character vectors, transform to factor
  is_char <- sapply(terms, function(.i) is.character(original_model_frame[[.i]]))
  if (any(is_char)) {
    for (.i in terms[is_char]) {
      original_model_frame[[.i]] <- as.factor(original_model_frame[[.i]])
    }
  }

  # fix remaining x-levels
  conditional_terms <- which(!(terms %in% names(at_values)))
  if (!.is_empty(conditional_terms)) {
    xl <- .prettify_data(conditional_terms, original_model_frame, terms)
    names(xl) <- terms[conditional_terms]
    at_values <- c(at_values, xl)
  }

  # restore initial order of focal predictors
  at_values <- at_values[match(terms, names(at_values))]

  # compute marginal effects for each model term
  eff <- tryCatch(
    {
suppressWarnings( effects::Effect( focal.predictors = terms, mod = model, xlevels = at_values, confidence.level = ci.lvl, ... ) ) }, error = function(e) { insight::print_color("Can't compute marginal effects, 'effects::Effect()' returned an error.\n\n", "red") cat(sprintf("Reason: %s\n", e$message)) cat("You may try 'ggpredict()' or 'ggemmeans()'.\n\n") NULL } ) # return NULL on error if (is.null(eff)) return(NULL) # build data frame, with raw values # predicted response and lower/upper ci if (inherits(model, c("polr", "clm", "clm2", "clmm", "clmm2", "multinom"))) { # for categorical outcomes, we need to gather the data # from effects to get a single data frame eff.logits <- as.data.frame(eff$logit, stringsAsFactors = FALSE) tmp <- cbind(eff$x, eff.logits) ft <- (ncol(tmp) - ncol(eff.logits) + 1):ncol(tmp) tmp <- .gather(tmp, names_to = "response.level", values_to = "predicted", colnames(tmp)[ft]) fx.term <- eff$term colnames(tmp)[1] <- "x" if (length(terms) > 1) colnames(tmp)[2] <- "group" if (length(terms) > 2) colnames(tmp)[3] <- "facet" if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- 1 - ((1 - ci.lvl) / 2) else ci <- .975 # same for standard errors. we need to gather all data frames together, # compute CI manually and then also fix column names. 
eff.se.logits <- as.data.frame(eff$se.logit) tmp2 <- .gather(eff.se.logits, names_to = "response.level", values_to = "se", colnames(eff.se.logits)) tmp2$conf.low <- tmp$predicted - stats::qnorm(ci) * tmp2$se tmp2$conf.high <- tmp$predicted + stats::qnorm(ci) * tmp2$se tmp2$std.error <- tmp2$se tmp <- cbind(tmp, tmp2[, c("std.error", "conf.low", "conf.high")]) tmp$response.level <- substr(tmp$response.level, 7, max(nchar(tmp$response.level))) } else { # check for multi response .ne <- names(eff) .mv <- insight::find_response(model, combine = FALSE) if (length(.ne) == length(.mv) && all.equal(.ne, .mv)) { l <- lapply(names(eff), function(.x) { tmpl <- data.frame( x = eff[[.x]]$x[[terms[1]]], predicted = eff[[.x]]$fit, std.error = eff[[.x]]$se, conf.low = eff[[.x]]$lower, conf.high = eff[[.x]]$upper, response.level = .x, stringsAsFactors = FALSE ) .create_eff_group(tmpl, terms, eff, sub = .x) }) tmp <- do.call(rbind, l) fx.term <- eff[[1]]$term } else { tmp <- data.frame( x = eff$x[[terms[1]]], predicted = eff$fit, std.error = eff$se, conf.low = eff$lower, conf.high = eff$upper, stringsAsFactors = FALSE ) tmp <- .create_eff_group(tmp, terms, eff, sub = NULL) # effects-package keeps the order of numeric value as they are # returned by "unique()", so we want to sort the data frame # in the order of ascending values if (is.numeric(eff$data[[terms[1]]])) tmp <- tmp[order(tmp$x), ] fx.term <- eff$term } } if (!no.transform) { linv <- insight::link_inverse(model) tmp$predicted <- linv(tmp$predicted) tmp$conf.low <- linv(tmp$conf.low) tmp$conf.high <- linv(tmp$conf.high) } # init legend labels legend.labels <- NULL # get axis titles and labels all.labels <- .get_axis_titles_and_labels( original_model_frame, terms, .get_model_function(model), model_info = model_info, no.transform, type = NULL ) # slice data, only select observations that have specified # levels for the grouping variables # for numeric values with many decimal places, we need to round if (.frac_length(tmp$x) > 
5)
    filter.keep <- round(tmp$x, 5) %in% round(at_values[[1]], 5)
  else
    filter.keep <- tmp$x %in% at_values[[1]]

  tmp <- tmp[filter.keep, , drop = FALSE]

  # slice data, only select observations that have specified
  # levels for the grouping variable
  if (length(at_values) > 1) {
    filter.keep <- tmp$group %in% at_values[[2]]
    tmp <- tmp[filter.keep, , drop = FALSE]
  }

  # slice data, only select observations that have specified
  # levels for the facet variable
  if (length(at_values) > 2) {
    filter.keep <- tmp$facet %in% at_values[[3]]
    tmp <- tmp[filter.keep, , drop = FALSE]
  }

  # label grouping variables, for axis and legend labels in plot
  if (length(terms) > 1) {
    # grouping variable may not be labelled
    # do this here, so we convert to labelled factor later
    tmp <- .add_labels_to_groupvariable(tmp, original_model_frame, terms)

    # convert to factor for proper legend
    tmp <- .groupvariable_to_labelled_factor(tmp)

    # check if we have legend labels
    legend.labels <- sjlabelled::get_labels(tmp$group, attr.only = FALSE, drop.unused = TRUE)
  }

  # convert to data frame
  result <- as.data.frame(tmp, stringsAsFactors = FALSE)

  if (length(terms) > 1) {
    attr(result, "continuous.group") <- is.numeric(original_model_frame[[terms[2]]]) & is.null(attr(original_model_frame[[terms[2]]], "labels"))
  } else {
    attr(result, "continuous.group") <- FALSE
  }

  # add raw data as well
  attr(result, "rawdata") <- .get_raw_data(model, original_model_frame, terms)

  x_v <- original_model_frame[[fx.term]]
  if (is.null(x_v))
    xif <- ifelse(is.factor(tmp$x), "1", "0")
  else
    xif <- ifelse(is.factor(x_v), "1", "0")

  attr(result, "x.is.factor") <- xif

  # set attributes with necessary information
  .set_attributes_and_class(
    data = result,
    model = model,
    t.title = all.labels$t.title,
    x.title = all.labels$x.title,
    y.title = all.labels$y.title,
    l.title = all.labels$l.title,
    legend.labels = legend.labels,
    x.axis.labels = all.labels$axis.labels,
    model_info = model_info,
    terms = cleaned_terms,
    original_terms = original_terms,
    ci.lvl = ci.lvl
  )
}
.create_eff_group <- function(tmp, terms, eff, sub) { if (!is.null(sub)) { fx <- eff[[sub]] } else { fx <- eff } # with or w/o grouping factor? if (length(terms) == 1) { # convert to factor for proper legend tmp$group <- as.factor(1) } else if (length(terms) == 2) { tmp$group <- as.factor(fx$x[[terms[2]]]) } else { tmp$group <- as.factor(fx$x[[terms[2]]]) tmp$facet <- as.factor(fx$x[[terms[3]]]) } tmp } ggeffects/R/utils_reshape.R0000644000176200001440000000374113577126075015424 0ustar liggesusers#' @importFrom stats reshape #' @keywords internal .gather <- function(x, names_to = "key", values_to = "value", columns = colnames(x)) { if (is.numeric(columns)) columns <- colnames(x)[columns] dat <- stats::reshape( x, idvar = "id", ids = row.names(x), times = columns, timevar = names_to, v.names = values_to, varying = list(columns), direction = "long" ) if (is.factor(dat[[values_to]])) dat[[values_to]] <- as.character(dat[[values_to]]) dat[, 1:(ncol(dat) - 1), drop = FALSE] } #' @importFrom stats reshape #' @keywords internal .multiple_gather <- function(x, names_to = "key", values_to = "value", columns = colnames(x), numeric_timvar = FALSE, id = "id") { ## TODO make timevar numeric? variable_attr <- lapply(x, attributes) if (is.numeric(columns)) columns <- colnames(x)[columns] if (!is.list(columns)) columns <- list(columns) dat <- stats::reshape( x, idvar = id, times = columns[[1]], timevar = names_to, v.names = values_to, varying = columns, direction = "long" ) if (numeric_timvar) { f <- as.factor(dat[[names_to]]) levels(f) <- 1:nlevels(f) dat[[names_to]] <- as.numeric(as.character(f)) } for (i in colnames(dat)) { attributes(dat[[i]]) <- variable_attr[[i]] } dat[[id]] <- NULL rownames(dat) <- NULL dat } .var_rename <- function(x, ...) { .dots <- unlist(match.call(expand.dots = FALSE)$...) 
old_names <- names(.dots) new_names <- unname(.dots) non.match <- which(!(old_names %in% colnames(x))) if (length(non.match)) { # remove invalid names old_names <- old_names[-non.match] new_names <- new_names[-non.match] } name_pos <- match(old_names, colnames(x)) colnames(x)[name_pos] <- new_names x } .round_numeric <- function(x, digits = 2) { x[] <- lapply(x, function(.i) { if (is.numeric(.i)) round(.i, digits = digits) else .i }) x } ggeffects/R/get_predictions_logistf.R0000644000176200001440000000133413575640521017455 0ustar liggesusers#' @importFrom insight get_data get_predictions_logistf <- function(model, fitfram, terms, ...) { prdat <- data.frame( predictions = model$predict, insight::get_data(model) ) grp_means <- tapply( prdat$predictions, lapply(terms, function(i) prdat[[i]]), function(j) mean(j, na.rm = TRUE), simplify = FALSE ) terms_df <- data.frame(expand.grid(attributes(grp_means)$dimnames), stringsAsFactors = FALSE) colnames(terms_df) <- terms terms_df <- .convert_numeric_factors(terms_df) pv <- cbind(terms_df, predicted = unlist(grp_means)) rownames(pv) <- NULL fitfram <- merge(fitfram, pv) # CI fitfram$conf.low <- NA fitfram$conf.high <- NA fitfram } ggeffects/R/emmeans_prediction_data.R0000644000176200001440000000656013577124762017416 0ustar liggesusers.emmeans_mixed_zi <- function(model, data_grid, cleaned_terms, ...) { if (inherits(model, "glmmTMB")) { .ggemmeans_glmmTMB(model, data_grid, cleaned_terms, ...) } else { .ggemmeans_MixMod(model, data_grid, cleaned_terms, ...) } } .emmeans_prediction_data <- function(model, data_grid, cleaned_terms, ci.lvl, pmode, type, model_info, ...) { if (model_info$is_ordinal | model_info$is_categorical) { prediction_data <- .ggemmeans_predict_ordinal(model, data_grid, cleaned_terms, ci.lvl, type, ...) } else if (inherits(model, "MCMCglmm")) { prediction_data <- .ggemmeans_predict_MCMCglmm(model, data_grid, cleaned_terms, ci.lvl, pmode, type, ...) 
  } else {
    prediction_data <- .ggemmeans_predict_generic(model, data_grid, cleaned_terms, ci.lvl, pmode, type, ...)
  }
}


#' @importFrom stats formula
.ggemmeans_MixMod <- function(model, data_grid, cleaned_terms, ...) {
  if (!requireNamespace("emmeans", quietly = TRUE)) {
    stop("Package `emmeans` required to compute marginal effects for MixMod-models.", call. = FALSE)
  }
  x1 <- as.data.frame(suppressWarnings(emmeans::emmeans(
    model,
    specs = cleaned_terms,
    at = data_grid,
    ...
  )))

  x2 <- as.data.frame(suppressWarnings(emmeans::emmeans(
    model,
    specs = all.vars(stats::formula(model, type = "zi_fixed")),
    at = data_grid,
    mode = "zero_part",
    ...
  )))

  list(x1 = x1, x2 = x2)
}


.ggemmeans_glmmTMB <- function(model, data_grid, cleaned_terms, ...) {
  if (!requireNamespace("emmeans", quietly = TRUE)) {
    stop("Package `emmeans` required to compute marginal effects for glmmTMB-models.", call. = FALSE)
  }
  x1 <- as.data.frame(suppressWarnings(emmeans::emmeans(
    model,
    specs = cleaned_terms,
    at = data_grid,
    component = "cond",
    ...
  )))

  x2 <- as.data.frame(suppressWarnings(emmeans::emmeans(
    model,
    specs = cleaned_terms,
    at = data_grid,
    component = "zi",
    ...
  )))

  list(x1 = x1, x2 = x2)
}


.ggemmeans_predict_ordinal <- function(model, data_grid, cleaned_terms, ci.lvl, type, ...) {
  tmp <- emmeans::emmeans(
    model,
    specs = c(insight::find_response(model, combine = FALSE), cleaned_terms),
    at = data_grid,
    mode = "prob",
    ...
  )

  .ggemmeans_add_confint(model, tmp, ci.lvl, type, pmode = "prob")
}


.ggemmeans_predict_MCMCglmm <- function(model, data_grid, cleaned_terms, ci.lvl, pmode, type, ...) {
  tmp <- emmeans::emmeans(
    model,
    specs = cleaned_terms,
    at = data_grid,
    mode = pmode,
    data = insight::get_data(model),
    ...
  )

  .ggemmeans_add_confint(model, tmp, ci.lvl, type, pmode)
}


.ggemmeans_predict_generic <- function(model, data_grid, cleaned_terms, ci.lvl, pmode, type, ...) {
  tmp <- tryCatch(
    {
      suppressWarnings(
        emmeans::emmeans(
          model,
          specs = cleaned_terms,
          at = data_grid,
          mode = pmode,
          ...
) ) }, error = function(e) { insight::print_color("Can't compute marginal effects, 'emmeans::emmeans()' returned an error.\n\n", "red") cat(sprintf("Reason: %s\n", e$message)) cat("You may try 'ggpredict()' or 'ggeffect()'.\n\n") NULL } ) if (!is.null(tmp)) .ggemmeans_add_confint(model, tmp, ci.lvl, type, pmode) else NULL } ggeffects/R/utils_select.R0000644000176200001440000000156613567446145015261 0ustar liggesusers#' @keywords internal string_contains <- function(pattern, x) { pattern <- paste0("\\Q", pattern, "\\E") grep(pattern, x, perl = TRUE) } #' @keywords internal string_ends_with <- function(pattern, x) { pattern <- paste0("\\Q", pattern, "\\E$") grep(pattern, x, perl = TRUE) } #' @keywords internal string_one_of <- function(pattern, x) { m <- unlist(lapply(pattern, function(.x) grep(.x, x, fixed = TRUE, useBytes = TRUE))) x[m] } #' @keywords internal .rownames_as_column <- function(x, var = "rowname") { rn <- data.frame(rn = rownames(x), stringsAsFactors = FALSE) x <- cbind(rn, x) colnames(x)[1] <- var rownames(x) <- NULL x } #' @keywords internal .obj_has_name <- function(x, name) { name %in% names(x) } #' @keywords internal obj_has_rownames <- function(x) { !identical(as.character(1:nrow(x)), rownames(x)) } ggeffects/R/get_predictions_MCMCglmm.R0000644000176200001440000000112513466500233017372 0ustar liggesusersget_predictions_MCMCglmm <- function(model, fitfram, ci.lvl, interval, ...) { if (!(interval %in% c("confidence", "prediction"))) { interval <- "confidence" } prdat <- stats::predict( model, newdata = fitfram, type = "response", interval = interval, level = ci.lvl, ... 
) fitfram$predicted <- prdat[, 1] fitfram$conf.low <- prdat[, 2] fitfram$conf.high <- prdat[, 3] # no standard errors available for these predictions attr(fitfram, "std.error") <- NULL attr(fitfram, "prediction.interval") <- interval == "prediction" fitfram } ggeffects/R/simulate_predictions.R0000644000176200001440000000475113575640521017000 0ustar liggesusers#' @importFrom stats simulate quantile sd complete.cases simulate_predictions <- function(model, nsim, clean_terms, ci) { fitfram <- insight::get_data(model) fam <- insight::model_info(model) if (fam$is_binomial | fam$is_ordinal | fam$is_categorical) stop("Can't simulate predictions from models with binary, categorical or ordinal outcome. Please use another option for argument `type`.", call. = FALSE) sims <- stats::simulate(model, nsim = nsim, re.form = NULL) fitfram$predicted <- apply(sims, 1, mean) fitfram$conf.low <- apply(sims, 1, stats::quantile, probs = 1 - ci) fitfram$conf.high <- apply(sims, 1, stats::quantile, probs = ci) fitfram$std.error <- apply(sims, 1, stats::sd) means_predicted <- tapply( fitfram$predicted, lapply(clean_terms, function(i) fitfram[[i]]), function(j) mean(j, na.rm = TRUE), simplify = FALSE ) means_conf_low <- tapply( fitfram$conf.low, lapply(clean_terms, function(i) fitfram[[i]]), function(j) mean(j, na.rm = TRUE), simplify = FALSE ) means_conf_high <- tapply( fitfram$conf.high, lapply(clean_terms, function(i) fitfram[[i]]), function(j) mean(j, na.rm = TRUE), simplify = FALSE ) means_se <- tapply( fitfram$std.error, lapply(clean_terms, function(i) fitfram[[i]]), function(j) mean(j, na.rm = TRUE), simplify = FALSE ) terms_df <- data.frame(expand.grid(attributes(means_predicted)$dimnames), stringsAsFactors = FALSE) colnames(terms_df) <- clean_terms terms_df <- .convert_numeric_factors(terms_df) fitfram <- cbind( terms_df, predicted = unlist(lapply(means_predicted, function(i) if (is.null(i)) NA else i)), conf.low = unlist(lapply(means_conf_low, function(i) if (is.null(i)) NA else i)), conf.high =
unlist(lapply(means_conf_high, function(i) if (is.null(i)) NA else i)), std.error = unlist(lapply(means_se, function(i) if (is.null(i)) NA else i)) ) rownames(fitfram) <- NULL fitfram <- fitfram[stats::complete.cases(fitfram), ] if (length(clean_terms) == 1) { fitfram <- fitfram[order(fitfram[[1]]), ] } else if (length(clean_terms) == 2) { fitfram <- fitfram[order(fitfram[[1]], fitfram[[2]]), ] } else if (length(clean_terms) == 3) { fitfram <- fitfram[order(fitfram[[1]], fitfram[[2]], fitfram[[3]]), ] } else if (length(clean_terms) == 4) { fitfram <- fitfram[order(fitfram[[1]], fitfram[[2]], fitfram[[3]], fitfram[[4]]), ] } fitfram } ggeffects/R/utils_handle_labels.R0000644000176200001440000001364413577115754016557 0ustar liggesusers# add labels to grouping and facet variables, if these # variables come from labelled data #' @importFrom sjlabelled get_labels set_labels #' @importFrom stats na.omit .add_labels_to_groupvariable <- function(mydf, original_model_frame, terms) { grp.lbl <- sjlabelled::get_labels( original_model_frame[[terms[2]]], non.labelled = TRUE, values = "n", drop.unused = TRUE ) # no new labels for labelled factors if (is.factor(mydf$group) && !.is_numeric_factor(mydf$group)) grp.lbl <- NULL # drop levels, if necessary if (is.factor(mydf$group) && .n_distinct(mydf$group) < nlevels(mydf$group)) mydf$group <- droplevels(mydf$group) # check if vector has any labels if (!is.null(grp.lbl) && !is.null(names(grp.lbl))) { # get unique levels, and match levels with group labels # might be necessary, if user only wants to calculate effects # for specific factor levels - unused labels must be removed then values <- as.numeric(as.vector(unique(stats::na.omit(mydf$group)))) if (min(values) < 1) values <- round(.recode_to(values, lowest = 1)) grp.lbl <- grp.lbl[values] mydf$group <- sjlabelled::set_labels(mydf$group, labels = grp.lbl) # make sure values of labels match actual values in vector if (!all(mydf$group %in% sjlabelled::get_values(mydf$group))) 
attr(mydf$group, "labels") <- NULL } if (.obj_has_name(mydf, "facet")) { facet.lbl <- sjlabelled::get_labels( original_model_frame[[terms[3]]], non.labelled = TRUE, values = "n", drop.unused = TRUE ) # no new labels for labelled factors if (is.factor(mydf$facet) && !.is_numeric_factor(mydf$facet)) facet.lbl <- NULL # drop levels, if necessary if (is.factor(mydf$facet) && .n_distinct(mydf$facet) < nlevels(mydf$facet)) mydf$facet <- droplevels(mydf$facet) # check if vector has any labels if (!is.null(facet.lbl) && !is.null(names(facet.lbl))) { # get unique levels, and match levels with group labels # might be necessary, if user only wants to calculate effects # for specific factor levels - unused labels must be removed then values <- as.numeric(as.vector(unique(stats::na.omit(mydf$facet)))) if (min(values) < 1) values <- .recode_to(values, lowest = 1) facet.lbl <- facet.lbl[values] mydf$facet <- sjlabelled::set_labels(mydf$facet, labels = facet.lbl) # make sure values of labels match actual values in vector if (!all(mydf$facet %in% sjlabelled::get_values(mydf$facet))) attr(mydf$facet, "labels") <- NULL } } mydf } # this method converts labelled group variables # into factors with labelled levels #' @importFrom sjlabelled as_label .groupvariable_to_labelled_factor <- function(mydf) { mydf$group <- sjlabelled::as_label( mydf$group, prefix = FALSE, drop.na = TRUE, drop.levels = !is.numeric(mydf$group) ) # make sure we have a facet-column at all if (.obj_has_name(mydf, "facet")) { # convert to factor mydf$facet <- sjlabelled::as_label( mydf$facet, prefix = TRUE, drop.na = TRUE, drop.levels = !is.numeric(mydf$facet) ) } mydf } # get labels from labelled data for axis titles and labels #' @importFrom sjlabelled get_label .get_axis_titles_and_labels <- function(original_model_frame, terms, fun, model_info, no.transform, type) { # Retrieve response for automatic title resp.col <- colnames(original_model_frame)[1] # check for family, and set appropriate scale-title # if we
have transformation through effects-package, # check if data is on original or transformed scale ysc <- get_title_labels(fun, model_info, no.transform, type) # set plot-title t.title <- paste(sprintf("Predicted %s of", ysc), sjlabelled::get_label(original_model_frame[[1]], def.value = resp.col)) # axis titles x.title <- sjlabelled::get_label(original_model_frame[[terms[1]]], def.value = terms[1]) y.title <- sjlabelled::get_label(original_model_frame[[1]], def.value = resp.col) if (fun == "coxph") { if (!is.null(type) && type == "surv") { t.title <- y.title <- "Probability of Survival" } else if (!is.null(type) && type == "cumhaz") { t.title <- y.title <- "Cumulative Hazard" } else { t.title <- "Predicted risk scores" y.title <- "Risk Score" } } # legend title l.title <- sjlabelled::get_label(original_model_frame[[terms[2]]], def.value = terms[2]) # check if we have a categorical variable with value # labels at the x-axis. axis.labels <- sjlabelled::get_labels( original_model_frame[[terms[1]]], non.labelled = TRUE, drop.unused = TRUE ) list( t.title = t.title, x.title = x.title, y.title = y.title, l.title = l.title, axis.labels = axis.labels ) } get_title_labels <- function(fun, model_info, no.transform, type) { ysc <- "values" if (fun == "glm") { if (model_info$is_brms_trial) ysc <- "successes" else if (model_info$is_binomial || model_info$is_ordinal) ysc <- ifelse(isTRUE(no.transform), "log-odds", "probabilities") else if (model_info$is_count) ysc <- ifelse(isTRUE(no.transform), "log-mean", "counts") } else if (model_info$is_beta) { ysc <- "proportion" } else if (fun == "coxph") { if (!is.null(type) && type == "surv") ysc <- "survival probabilities" else if (!is.null(type) && type == "cumhaz") ysc <- "cumulative hazard" else ysc <- "risk scores" } ysc } .recode_to <- function(x, lowest, highest = -1) { if (is.factor(x)) { x <- as.numeric(as.character(x)) } minval <- min(x, na.rm = TRUE) downsize <- minval - lowest x <- sapply(x, function(y) y - downsize) if 
(highest > lowest) x[x > highest] <- NA x } ggeffects/R/get_predictions_vglm.R0000644000176200001440000000627013577107657016771 0ustar liggesusers#' @importFrom insight model_info get_response get_predictions_vglm <- function(model, fitfram, ci.lvl, linv, ...) { if (!requireNamespace("VGAM", quietly = TRUE)) { stop("Package `VGAM` needed to calculate marginal effects for a vector generalized linear model.", call. = FALSE) } se <- !is.null(ci.lvl) && !is.na(ci.lvl) mi <- insight::model_info(model) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 if (mi$is_ordinal && !isTRUE(se)) { type <- "response" } else { type <- "link" } prdat <- VGAM::predictvglm( model, newdata = fitfram, type = type, se.fit = se, ... ) if (mi$is_ordinal) { # start here with cumulative link models resp <- insight::get_response(model) if (is.data.frame(resp)) resp.names <- colnames(resp) else resp.names <- levels(resp) if (se) { dat <- data.frame(predicted = prdat$fitted.values) resp.names <- resp.names[-1] } else { dat <- data.frame(predicted = prdat) linv <- function(mu) mu } colnames(dat) <- resp.names fitfram <- cbind(dat, fitfram) # for cumulative link models, we have predicted values for each response # category. 
Hence, gather columns fitfram <- .gather(fitfram, names_to = "response.level", values_to = "predicted", resp.names) fitfram$predicted <- linv(fitfram$predicted) if (is.matrix(fitfram$predicted)) fitfram$predicted <- as.vector(fitfram$predicted[, 2]) if (se) { d1 <- data.frame(ci.low = prdat$fitted.values - stats::qnorm(ci) * prdat$se.fit) d2 <- data.frame(ci.high = prdat$fitted.values + stats::qnorm(ci) * prdat$se.fit) d3 <- data.frame(se = prdat$se.fit) colnames(d1) <- sprintf("ci_low_%s", resp.names) colnames(d2) <- sprintf("ci_high_%s", resp.names) colnames(d3) <- sprintf("se_%s", resp.names) dat1 <- .gather(d1, names_to = "response.level", values_to = "conf.low") dat2 <- .gather(d2, names_to = "response.level", values_to = "conf.high") dat3 <- .gather(d3, names_to = "response.level", values_to = "se") fitfram$conf.low <- linv(dat1$conf.low) fitfram$conf.high <- linv(dat2$conf.high) if (is.matrix(fitfram$conf.low)) fitfram$conf.low <- as.vector(fitfram$conf.low[, 2]) if (is.matrix(fitfram$conf.high)) fitfram$conf.high <- as.vector(fitfram$conf.high[, 2]) attr(fitfram, "std.error") <- dat3$se fitfram$response.level <- sprintf("P[Y >= %s]", fitfram$response.level) } } else { # start here for other models prdat$fitted.values <- as.vector(prdat$fitted.values) fitfram$predicted <- suppressWarnings(linv(prdat$fitted.values)) # did user request standard errors? if yes, compute CI if (se) { # calculate CI fitfram$conf.low <- suppressWarnings(linv(prdat$fitted.values - stats::qnorm(ci) * prdat$se.fit)) fitfram$conf.high <- suppressWarnings(linv(prdat$fitted.values + stats::qnorm(ci) * prdat$se.fit)) } else { # no CI fitfram$conf.low <- NA fitfram$conf.high <- NA } } fitfram } ggeffects/R/get_predictions_svyglmnb.R0000644000176200001440000000107113565167350017650 0ustar liggesusersget_predictions_svyglmnb <- function(model, fitfram, ci.lvl, linv, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) 
{ # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) prdat <- stats::predict( model, newdata = fitfram, type = "link", se.fit = se, ... ) # copy predictions .generic_prediction_data(model, fitfram, linv, prdat, se, ci.lvl, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval) } ggeffects/R/plot.R0000644000176200001440000006705713577130563013542 0ustar liggesusers#' @title Plot ggeffects-objects #' @name plot #' #' @description A generic plot-method for \code{ggeffects}-objects. #' #' @param x An object of class \code{ggeffects}, as returned by the functions #' from this package. #' @param ci Logical, if \code{TRUE}, confidence bands (for continuous variables #' at x-axis) resp. error bars (for factors at x-axis) are plotted. #' @param ci.style Character vector, indicating the style of the confidence #' bands. May be either \code{"ribbon"}, \code{"errorbar"}, \code{"dash"} or #' \code{"dot"}, to plot a ribbon, error bars, or dashed or dotted lines as #' confidence bands. #' @param facets,grid Logical, defaults to \code{TRUE}, if \code{x} has a column named #' \code{facet}, and defaults to \code{FALSE}, if \code{x} has no such #' column. Set \code{facets = TRUE} to wrap the plot into facets even #' for grouping variables (see 'Examples'). \code{grid} is an alias for #' \code{facets}. #' @param add.data,rawdata Logical, if \code{TRUE}, a layer with raw data from response by #' predictor on the x-axis, plotted as point-geoms, is added to the plot. #' @param colors Character vector with color values in hex-format, valid #' color value names (see \code{demo("colors")}) or a name of a #' ggeffects-color-palette. #' Following options are valid for \code{colors}: #' \itemize{ #' \item If not specified, the color brewer palette "Set1" will be used. #' \item If \code{"gs"}, a greyscale will be used. #' \item If \code{"bw"}, the plot is black/white and uses different line types to distinguish groups. 
#' \item There are some pre-defined color-palettes in this package that can be used, e.g. \code{colors = "metro"}. See \code{\link[=show_pals]{show_pals()}} to show all available palettes. #' \item Else specify own color values or names as vector (e.g. \code{colors = c("#f00000", "#00ff00")}). #' } #' @param alpha Alpha value for the confidence bands. #' @param line.size Numeric, size of the line geoms. #' @param dot.size Numeric, size of the point geoms. #' @param dodge Value for offsetting or shifting error bars, to avoid overlapping. #' Only applies, if a factor is plotted at the x-axis (in such cases, the #' confidence bands are replaced by error bars automatically), or if #' \code{ci.style = "errorbar"}. #' @param use.theme Logical, if \code{TRUE}, a slightly tweaked version of ggplot's #' minimal-theme, \code{theme_ggeffects()}, is applied to the plot. If #' \code{FALSE}, no theme-modifications are applied. #' @param dot.alpha Alpha value for data points, when \code{add.data = TRUE}. #' @param jitter Numeric, between 0 and 1. If not \code{NULL} and #' \code{add.data = TRUE}, adds a small amount of random variation to #' the location of data points, to avoid overplotting. Hence the #' points don't reflect exact values in the data. May also be a numeric #' vector of length two, to add different horizontal and vertical jittering. #' For binary outcomes, raw data is not jittered by default to avoid that #' data points exceed the axis limits. #' @param log.y Logical, if \code{TRUE}, the y-axis scale is log-transformed. #' This might be useful for binomial models with predicted probabilities on #' the y-axis. #' @param show.legend Logical, shows or hides the plot legend. #' @param show.title Logical, shows or hides the plot title. #' @param show.x.title Logical, shows or hides the plot title for the x-axis. #' @param show.y.title Logical, shows or hides the plot title for the y-axis.
#' @param connect.lines Logical, if \code{TRUE} and plot has point-geoms with #' error bars (this is usually the case when the x-axis is discrete), points #' of same groups will be connected with a line. #' @param one.plot Logical, if \code{TRUE} and \code{x} has a \code{panel} column #' (i.e. when four \code{terms} were used), a single, integrated plot is produced. #' @param base_size Base font size. #' @param base_family Base font family. #' @param ... Further arguments passed down to \code{ggplot::scale_y*()}, to #' control the appearance of the y-axis. #' #' @inheritParams get_title #' #' @return A ggplot2-object. #' #' @note Load \code{library(ggplot2)} and use \code{theme_set(theme_ggeffects())} #' to set the \pkg{ggeffects}-theme as default plotting theme. You can then use #' further plot-modifiers from \pkg{sjPlot}, like \code{legend_style()} or #' \code{font_size()} without losing the theme-modifications. #' \cr \cr #' There are pre-defined colour palettes in this package. Use #' \code{show_pals()} to show all available colour palettes. #' #' @details For proportional odds logistic regression (see \code{\link[MASS]{polr}}) #' or cumulative link models in general, plots are automatically facetted #' by \code{response.level}, which indicates the grouping of predictions #' based on the level of the model's response. 
#' #' @examples #' library(sjlabelled) #' data(efc) #' efc$c172code <- as_label(efc$c172code) #' fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc) #' #' dat <- ggpredict(fit, terms = "c12hour") #' plot(dat) #' #' \donttest{ #' # facet by group, use pre-defined color palette #' dat <- ggpredict(fit, terms = c("c12hour", "c172code")) #' plot(dat, facet = TRUE, colors = "hero") #' #' # don't use facets, b/w figure, w/o confidence bands #' dat <- ggpredict(fit, terms = c("c12hour", "c172code")) #' plot(dat, colors = "bw", ci = FALSE) #' #' # factor at x axis, plot exact data points and error bars #' dat <- ggpredict(fit, terms = c("c172code", "c161sex")) #' plot(dat) #' #' # for three variables, automatic facetting #' dat <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex")) #' plot(dat)} #' #' # show all color palettes #' show_pals() #' #' @importFrom stats binomial poisson gaussian Gamma inverse.gaussian quasi quasibinomial quasipoisson #' @importFrom sjlabelled as_numeric #' @export plot.ggeffects <- function(x, ci = TRUE, ci.style = c("ribbon", "errorbar", "dash", "dot"), facets, add.data = FALSE, colors = "Set1", alpha = .15, dodge = .25, use.theme = TRUE, dot.alpha = .35, jitter = .2, log.y = FALSE, case = NULL, show.legend = TRUE, show.title = TRUE, show.x.title = TRUE, show.y.title = TRUE, dot.size = NULL, line.size = NULL, connect.lines = FALSE, grid, one.plot = TRUE, rawdata, ...) { if (!requireNamespace("ggplot2", quietly = FALSE)) { stop("Package `ggplot2` needed to produce marginal effects plots. Please install it by typing `install.packages(\"ggplot2\", dependencies = TRUE)` into the console.", call. 
= FALSE) } # check alias if (missing(rawdata)) rawdata <- add.data # set some defaults jitter.miss <- missing(jitter) if (isTRUE(jitter)) jitter <- .2 else if (is.logical(jitter) && length(jitter) == 1L && !is.na(jitter) && !jitter) jitter <- NULL # make sure we have two values, one for horizontal and one for vertical jittering if (!is.null(jitter) && length(jitter) == 1 && is.numeric(jitter)) { jitter <- c(jitter, jitter) } y.breaks <- NULL y.limits <- NULL # is x a factor? xif <- attr(x, "x.is.factor", exact = TRUE) x_is_factor <- !is.null(xif) && xif == "1" if (is.null(dot.size)) dot.size <- 2 if (is.null(line.size)) line.size <- .7 if (!missing(grid)) facets <- grid if (missing(ci.style) && x_is_factor) ci.style <- "errorbar" ci.style <- match.arg(ci.style) add.args <- lapply(match.call(expand.dots = F)$`...`, function(x) x) if (!("breaks" %in% names(add.args)) && isTRUE(log.y)) { y.breaks <- unique(round(log2(pretty(c(min(x$conf.low), max(x$conf.high)))))) y.breaks[is.nan(y.breaks)] <- NA y.breaks[is.infinite(y.breaks)] <- NA y.breaks <- 2^y.breaks[!is.na(y.breaks)] y.limits <- c(min(y.breaks), max(y.breaks)) # this is a REALLY sloppy hack to avoid that axis limits are not 0 for # log-scale, and that axis limits cover the range of the plotted geoms # I think there's a more elegant solution, so please let me know... if (y.limits[1] > min(x$conf.low)) y.limits[1] <- y.limits[1] / 2 if (y.limits[2] < max(x$conf.high)) y.limits[2] <- y.limits[2] * 2 if (y.limits[1] > min(x$conf.low)) y.limits[1] <- y.limits[1] / 2 if (y.limits[2] < max(x$conf.high)) y.limits[2] <- y.limits[2] * 2 } # do we have groups and facets? 
has_groups <- .obj_has_name(x, "group") && length(unique(x$group)) > 1 has_facets <- .obj_has_name(x, "facet") && length(unique(x$facet)) > 1 has_panel <- .obj_has_name(x, "panel") && length(unique(x$panel)) > 1 # convert x back to numeric if (!is.numeric(x$x)) { if (x_is_factor && .is_numeric_factor(x$x)) levels(x$x) <- seq_len(nlevels(x$x)) x$x <- sjlabelled::as_numeric(x$x) } # special solution for polr facet_polr <- FALSE if (.obj_has_name(x, "response.level") && length(unique(x$response.level)) > 1) { has_facets <- TRUE facet_polr <- TRUE } # remember if we have a b/w plot is_black_white <- colors[1] == "bw" # set default, if argument not specified if (has_facets) facets <- TRUE else if (missing(facets) || is.null(facets)) facets <- has_facets # facets, but only groups? here the user wants to # plot facets for the grouping variable facets_grp <- facets && !has_facets # set CI to false if we don't have SE and CI if ("conf.low" %in% names(which(colSums(is.na(x)) == nrow(x))) || !.obj_has_name(x, "conf.low")) ci <- FALSE # if we have a numeric variable as facet, also add variable name for more # intuitive labelling if (facets) { if (is.numeric(x$facet) || isTRUE(attr(x, "numeric.facet", exact = TRUE))) { x$facet <- sprintf( "%s = %g", attr(x, "terms", exact = TRUE)[3], sjlabelled::as_numeric(x$facet) ) } } if (!has_panel) one.plot <- FALSE if (one.plot && !requireNamespace("see", quietly = TRUE)) { warning("Package `see` needed to plot multiple panels in one integrated figure. Please install it by typing `install.packages(\"see\", dependencies = TRUE)` into the console.", call. 
= FALSE) one.plot <- FALSE } if (has_panel) { panels <- unique(x$panel) p <- lapply(1:length(panels), function(.i) { .p <- panels[.i] attr(x, "panel.title") <- sprintf( "%s = %s", attr(x, "terms", exact = TRUE)[4], as.character(.p) ) if (one.plot && .i < length(panels)) { show_l <- FALSE } else { show_l <- show.legend } pl <- plot_panel( x = x[x$panel == .p, ], colors = colors, has_groups = has_groups, facets_grp = facets_grp, facets = facets, facet_polr = facet_polr, is_black_white = is_black_white, x_is_factor = x_is_factor, alpha = alpha, dot.alpha = dot.alpha, dodge = dodge, ci = ci, ci.style = ci.style, dot.size = dot.size, line.size = line.size, connect.lines = connect.lines, case = case, jitter = jitter, jitter.miss = jitter.miss, rawdata = rawdata, show.title = show.title, show.x.title = show.x.title, show.y.title = show.y.title, show.legend = show_l, log.y = log.y, y.breaks = y.breaks, y.limits = y.limits, use.theme = use.theme, ... ) if (one.plot) { if (.i < length(panels)) { pl <- pl + ggplot2::labs(x = NULL) } if (.i > 1) { pl <- pl + ggplot2::labs(title = NULL) } } pl }) } else { p <- plot_panel( x = x, colors = colors, has_groups = has_groups, facets_grp = facets_grp, facets = facets, facet_polr = facet_polr, is_black_white = is_black_white, x_is_factor = x_is_factor, alpha = alpha, dot.alpha = dot.alpha, dodge = dodge, ci = ci, ci.style = ci.style, dot.size = dot.size, line.size = line.size, connect.lines = connect.lines, case = case, jitter = jitter, jitter.miss = jitter.miss, rawdata = rawdata, show.title = show.title, show.x.title = show.x.title, show.y.title = show.y.title, show.legend = show.legend, log.y = log.y, y.breaks = y.breaks, y.limits = y.limits, use.theme = use.theme, ... 
) } if (has_panel && one.plot && requireNamespace("see", quietly = TRUE)) { do.call(see::plots, p) } else { p } } plot_panel <- function(x, colors, has_groups, facets_grp, facets, facet_polr, is_black_white, x_is_factor, alpha, dot.alpha, dodge, ci, ci.style, dot.size, line.size, connect.lines, case, jitter, jitter.miss, rawdata, show.title, show.x.title, show.y.title, show.legend, log.y, y.breaks, y.limits, use.theme, ...) { if (.obj_has_name(x, "group") && is.character(x$group)) x$group <- factor(x$group, levels = unique(x$group)) if (.obj_has_name(x, "facet") && is.character(x$facet)) x$facet <- factor(x$facet, levels = unique(x$facet)) if (.obj_has_name(x, "response.level") && is.character(x$response.level)) x$response.level <- ordered(x$response.level, levels = unique(x$response.level)) if (rawdata & isTRUE(attr(x, "continuous.group"))) { x$group_col <- as.numeric(as.character(x$group)) } else { x$group_col <- x$group } # base plot, set mappings if (has_groups && !facets_grp && is_black_white && x_is_factor) p <- ggplot2::ggplot(x, ggplot2::aes_string(x = "x", y = "predicted", colour = "group_col", fill = "group_col", shape = "group")) else if (has_groups && !facets_grp && is_black_white && !x_is_factor) p <- ggplot2::ggplot(x, ggplot2::aes_string(x = "x", y = "predicted", colour = "group_col", fill = "group_col", linetype = "group")) else if (has_groups && !facets_grp && colors[1] == "gs" && x_is_factor) p <- ggplot2::ggplot(x, ggplot2::aes_string(x = "x", y = "predicted", colour = "group_col", fill = "group_col", shape = "group")) else if (has_groups && colors[1] != "bw") p <- ggplot2::ggplot(x, ggplot2::aes_string(x = "x", y = "predicted", colour = "group_col", fill = "group_col")) else p <- ggplot2::ggplot(x, ggplot2::aes_string(x = "x", y = "predicted")) # get color values colors <- .get_colors(colors, length(unique(x$group)), isTRUE(attr(x, "continuous.group"))) # now plot the geom. 
we use a smoother for a continuous x, and # a point-geom, if x was a factor. In this case, the x-value is still # numeric, but we need to plot exact data points between categories # and no smoothing across all x-values if (x_is_factor) { # for x as factor p <- p + ggplot2::geom_point( position = ggplot2::position_dodge(width = dodge), size = dot.size ) } else { # classical line p <- p + ggplot2::geom_line(size = line.size, ggplot2::aes_string(group = "group")) } # connect dots with lines... if (x_is_factor && connect.lines) { p <- p + ggplot2::geom_line( size = line.size, position = ggplot2::position_dodge(width = dodge) ) } # CI? if (ci) { # for a factor on x-axis, use error bars if (x_is_factor) { if (ci.style == "errorbar") { p <- p + ggplot2::geom_errorbar( ggplot2::aes_string(ymin = "conf.low", ymax = "conf.high"), position = ggplot2::position_dodge(width = dodge), width = .1, size = line.size ) } else { lt <- switch( ci.style, dash = 2, dot = 3, 2 ) p <- p + ggplot2::geom_errorbar( ggplot2::aes_string(ymin = "conf.low", ymax = "conf.high", linetype = NULL), position = ggplot2::position_dodge(width = dodge), width = .1, linetype = lt, size = line.size ) } } else { if (ci.style == "ribbon") { # for continuous x, use ribbons by default p <- p + ggplot2::geom_ribbon( ggplot2::aes_string(ymin = "conf.low", ymax = "conf.high", colour = NULL, linetype = NULL, shape = NULL, group = "group"), alpha = alpha ) } else if (ci.style == "errorbar") { p <- p + ggplot2::geom_point( position = ggplot2::position_dodge(width = dodge), size = dot.size ) + ggplot2::geom_errorbar( ggplot2::aes_string(ymin = "conf.low", ymax = "conf.high", shape = NULL), position = ggplot2::position_dodge(width = dodge), size = line.size, width = 0 ) } else { lt <- switch( ci.style, dash = 2, dot = 3, 2 ) p <- p + ggplot2::geom_line( ggplot2::aes_string(y = "conf.low", linetype = NULL), linetype = lt ) + ggplot2::geom_line( ggplot2::aes_string(y = "conf.high", linetype = NULL), linetype = lt ) } } } 
# If we have x-axis-labels, use these to label the axis x_lab <- get_x_labels(x, case) if (!is.null(x_lab)) { p <- p + ggplot2::scale_x_continuous(breaks = unique(x$x), labels = x_lab) } # facets? if (facets_grp) { # facet groups p <- p + ggplot2::facet_wrap(~group, scales = "free_x") # remove legends p <- p + ggplot2::guides(colour = "none", linetype = "none", shape = "none") } else if (facet_polr) { p <- p + ggplot2::facet_wrap(~response.level, scales = "free_x") } else if (facets) { p <- p + ggplot2::facet_wrap(~facet, scales = "free_x") } # plot raw data points. we need an own aes for this if (rawdata) { # get raw data and check, if any data available rawdat <- attr(x, "rawdata", exact = TRUE) if (!is.null(rawdat)) { # make sure response is numeric rawdat$response <- sjlabelled::as_numeric(rawdat$response) # check if we have a group-variable with at least two groups if (.obj_has_name(rawdat, "group")) { if (isTRUE(attr(x, "continuous.group"))) { rawdat$group_col <- as.numeric(as.character(rawdat$group)) } else { rawdat$group_col <- rawdat$group } rawdat$group <- as.factor(rawdat$group) # levels(rawdat$group) <- unique(x$group) grps <- .n_distinct(rawdat$group) > 1 } else { grps <- FALSE } # check if we have only selected values for groups, in this case # filter raw data to match grouping colours if (grps && isFALSE(attr(x, "continuous.group")) && .n_distinct(rawdat$group) > .n_distinct(x$group)) { rawdat <- rawdat[which(rawdat$group %in% x$group), ] } # if we have groups, add colour aes, to map raw data to # grouping variable if (grps) mp <- ggplot2::aes_string(x = "x", y = "response", colour = "group_col") else mp <- ggplot2::aes_string(x = "x", y = "response") # for binary response, no jittering by default if ((attr(x, "logistic", exact = TRUE) == "1" && jitter.miss) || is.null(jitter)) { p <- p + ggplot2::geom_point( data = rawdat, mapping = mp, alpha = dot.alpha, size = dot.size, show.legend = FALSE, inherit.aes = FALSE, shape = 16 ) } else { if (ci.style 
== "errorbar") { p <- p + ggplot2::geom_point( data = rawdat, mapping = mp, alpha = dot.alpha, size = dot.size, position = ggplot2::position_jitterdodge( jitter.width = jitter[1], jitter.height = jitter[2], dodge.width = dodge ), show.legend = FALSE, inherit.aes = FALSE, shape = 16 ) } else { p <- p + ggplot2::geom_jitter( data = rawdat, mapping = mp, alpha = dot.alpha, size = dot.size, width = jitter[1], height = jitter[2], show.legend = FALSE, inherit.aes = FALSE, shape = 16 ) } } } else { message("Raw data not available.") } } # set colors if(isTRUE(rawdata) && isTRUE(attr(x, "continuous.group"))) { p <- p + ggplot2::scale_color_gradientn(colors = colors, aesthetics = c("colour", "fill"), guide = "legend", breaks = as.numeric(levels(x$group)), limits = range(c(rawdat$group_col, x$group_col))) } else { p <- p + ggplot2::scale_color_manual(values = colors, aesthetics = c("colour", "fill")) } # show/hide titles if (!show.title) attr(x, "title") <- NULL if (!show.title) attr(x, "n.trials") <- NULL if (!show.x.title) attr(x, "x.title") <- NULL if (!show.y.title) attr(x, "y.title") <- NULL # set axis titles p <- p + ggplot2::labs( title = get_title(x, case), x = get_x_title(x, case), y = get_y_title(x, case), fill = NULL, subtitle = get_sub_title(x) ) if (has_groups && show.legend) p <- p + ggplot2::labs( colour = get_legend_title(x, case), linetype = get_legend_title(x, case), shape = get_legend_title(x, case) ) # no legend for fill-aes p <- p + ggplot2::guides(fill = "none") if (is_black_white) { p <- p + ggplot2::guides(colour = "none") + ggplot2::labs(colour = NULL) } # show or hide legend? 
if (!show.legend) { p <- p + ggplot2::labs( colour = NULL, linetype = NULL, shape = NULL ) + ggplot2::guides(colour = "none", linetype = "none", shape = "none") } # for binomial family, use a percentage scale for the y-axis if (attr(x, "logistic", exact = TRUE) == "1" && attr(x, "is.trial", exact = TRUE) == "0") { if (!requireNamespace("scales", quietly = TRUE)) { warning("Package `scales` needed to use percentage values for the y-axis. Install it by typing `install.packages(\"scales\", dependencies = TRUE)` into the console.", call. = FALSE) if (log.y) { if (is.null(y.breaks)) p <- p + ggplot2::scale_y_log10(...) else p <- p + ggplot2::scale_y_log10(breaks = y.breaks, limits = y.limits, ...) } else p <- p + ggplot2::scale_y_continuous(...) } else { if (log.y) { if (is.null(y.breaks)) p <- p + ggplot2::scale_y_log10(labels = scales::percent, ...) else p <- p + ggplot2::scale_y_log10(labels = scales::percent, breaks = y.breaks, limits = y.limits, ...) } else p <- p + ggplot2::scale_y_continuous(labels = scales::percent, ...) } } else if (log.y) { if (is.null(y.breaks)) p <- p + ggplot2::scale_y_log10(...) else p <- p + ggplot2::scale_y_log10(breaks = y.breaks, limits = y.limits, ...) } else { p <- p + ggplot2::scale_y_continuous(...) } # tweak theme if (use.theme) p <- p + theme_ggeffects() p } #' @importFrom graphics plot #' @export plot.ggalleffects <- function(x, ci = TRUE, ci.style = c("ribbon", "errorbar", "dash", "dot"), facets, add.data = FALSE, colors = "Set1", alpha = .15, dodge = .25, use.theme = TRUE, dot.alpha = .5, jitter = .2, log.y = FALSE, case = NULL, show.legend = TRUE, show.title = TRUE, show.x.title = TRUE, show.y.title = TRUE, dot.size = NULL, line.size = NULL, connect.lines = FALSE, grid, one.plot = TRUE, rawdata, ...)
{ if (!missing(grid)) facets <- grid if (missing(facets)) facets <- NULL # check alias if (missing(rawdata)) rawdata <- add.data if (isTRUE(facets)) { # merge all effect-data frames into one dat <- get_complete_df(x) rawdat <- suppressWarnings( do.call(rbind, lapply(x, function(d) { tmp <- attr(d, "rawdata") tmp$group <- d$group[1] tmp })) ) # copy raw data attr(dat, "rawdata") <- rawdat # set various attributes attr(dat, "x.is.factor") <- attr(x[[1]], "x.is.factor", exact = T) attr(dat, "family") <- attr(x[[1]], "family", exact = T) attr(dat, "link") <- attr(x[[1]], "link", exact = T) attr(dat, "logistic") <- attr(x[[1]], "logistic", exact = T) attr(dat, "fitfun") <- attr(x[[1]], "fitfun", exact = T) graphics::plot( x = dat, ci = ci, ci.style = ci.style, facets = TRUE, add.data = rawdata, colors = colors, alpha = alpha, dodge = dodge, use.theme = use.theme, dot.alpha = dot.alpha, jitter = jitter, log.y = log.y, case = case, show.legend = show.legend, show.title = FALSE, show.x.title = show.x.title, show.y.title = FALSE, dot.size = dot.size, line.size = line.size, connect.lines = connect.lines, ... ) } else { lapply(x, function(.x) { graphics::plot( x = .x, ci = ci, facets = facets, rawdata = rawdata, colors = colors, alpha = alpha, dodge = dodge, use.theme = use.theme, dot.alpha = dot.alpha, jitter = jitter, log.y = log.y, case = case, show.legend = show.legend, show.title = show.title, show.x.title = show.x.title, show.y.title = show.y.title, dot.size = dot.size, line.size = line.size ) }) } } ggeffects/R/get_predictions_generic2.R0000644000176200001440000000337113573411636017510 0ustar liggesusers#' @importFrom stats qnorm predict get_predictions_generic2 <- function(model, fitfram, ci.lvl, linv, type, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) { # get prediction type. 
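# Illustrative sketch (added, comments only): the switch() below relies on
# R's fall-through for empty named alternatives, with an unnamed default:
#   switch("betareg", "betareg" = , "vgam" = , "feglm" = , "glmx" = ,
#          "fixest" = "link", "response")   # -> "link" (fall-through)
#   switch("lm",      "betareg" = , "vgam" = , "feglm" = , "glmx" = ,
#          "fixest" = "link", "response")   # -> "response" (default)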
pt <- switch( model_class, "betareg" = , "vgam" = , "feglm" = , "glmx" = , "fixest" = "link", "response" ) se <- (!is.null(ci.lvl) && !is.na(ci.lvl)) || !is.null(vcov.fun) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # get predictions prdat <- stats::predict( model, newdata = fitfram, type = pt, ... ) fitfram$predicted <- as.vector(prdat) # get standard errors from variance-covariance matrix se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, type = type, terms = terms, model_class = model_class, vcov.fun = vcov.fun, vcov.type = vcov.type, vcov.args = vcov.args, condition = condition, interval = interval ) if (!is.null(se.pred) && isTRUE(se)) { se.fit <- se.pred$se.fit fitfram <- se.pred$prediction_data # CI fitfram$conf.low <- linv(fitfram$predicted - stats::qnorm(ci) * se.fit) fitfram$conf.high <- linv(fitfram$predicted + stats::qnorm(ci) * se.fit) # copy standard errors attr(fitfram, "std.error") <- se.fit attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval") } else { # CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram$predicted <- linv(fitfram$predicted) fitfram } ggeffects/R/get_predictions_gamlss.R0000644000176200001440000000322513565167350017300 0ustar liggesusers#' @importFrom insight link_inverse get_predictions_gamlss <- function(model, fitfram, ci.lvl, terms, model_class, value_adjustment, condition, ...) { se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- suppressMessages( stats::predict( model, newdata = fitfram, type = "link", se.fit = FALSE, ... )) fitfram$predicted <- as.vector(prdat) # check whether prediction are requested for specific distribution parameter # and if so, use correct link-inverse function. 
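# E.g. (hypothetical sketch, assuming a fitted gamlss model `m`): predictions
# for another distribution parameter could be requested through the "what"
# argument passed via `...`,
#   ggpredict(m, terms = "x", what = "sigma")
# in which case the matching inverse-link is obtained below via
# insight::link_inverse(model, what = "sigma").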
add.args <- lapply(match.call(expand.dots = F)$`...`, function(x) x) if ("what" %in% names(add.args)) what <- eval(add.args[["what"]]) else what <- "mu" linv <- insight::link_inverse(model, what = what) # did user request standard errors? if yes, compute CI se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, terms = terms, model_class = model_class, condition = condition ) if (se && !is.null(se.pred)) { se.fit <- se.pred$se.fit fitfram <- se.pred$prediction_data # CI fitfram$conf.low <- linv(fitfram$predicted - stats::qnorm(ci) * se.fit) fitfram$conf.high <- linv(fitfram$predicted + stats::qnorm(ci) * se.fit) # copy standard errors attr(fitfram, "std.error") <- se.fit } else { # CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram$predicted <- linv(fitfram$predicted) fitfram } ggeffects/R/utils_typical_value.R0000644000176200001440000000475413577116121016632 0ustar liggesusers#' @importFrom stats median .typical_value <- function(x, fun = "mean", weights = NULL, ...) { # check if we have named vectors and find the requested function # for special functions for factors, convert to numeric first fnames <- names(fun) if (!is.null(fnames)) { if (is.integer(x)) { fun <- fun[which(fnames %in% c("integer", "i"))] x <- as.numeric(x) } else if (is.numeric(x)) { fun <- fun[which(fnames %in% c("numeric", "n"))] } else if (is.factor(x)) { fun <- fun[which(fnames %in% c("factor", "f"))] if (fun != "mode") x <- as.numeric(x, keep.labels = FALSE) } } # for weighted mean, check that weights are of same length as x if (fun == "weighted.mean" && !is.null(weights)) { # make sure weights and x have same length if (length(weights) != length(x)) { # if not, tell user and change function to mean warning("Vector of weights is of different length than `x`. Using `mean` as function for typical value.", call. 
= F) fun <- "mean" } # make sure weights are differen from 1 if (all(weights == 1)) { # if not, tell user and change function to mean warning("All weight values are `1`. Using `mean` as function for typical value.", call. = F) fun <- "mean" } } # no weights, than use normal mean function if (fun == "weighted.mean" && is.null(weights)) fun <- "mean" if (fun == "median") myfun <- get("median", asNamespace("stats")) else if (fun == "weighted.mean") myfun <- get("weighted.mean", asNamespace("stats")) else if (fun == "mode") myfun <- get(".mode_value", asNamespace("ggeffects")) else if (fun == "zero") return(0) else myfun <- get("mean", asNamespace("base")) if (is.integer(x)) { stats::median(x, na.rm = TRUE) } else if (is.numeric(x)) { if (fun == "weighted.mean") do.call(myfun, args = list(x = x, na.rm = TRUE, w = weights, ...)) else do.call(myfun, args = list(x = x, na.rm = TRUE, ...)) } else if (is.factor(x)) { if (fun != "mode") levels(x)[1] else .mode_value(x) } else { .mode_value(x) } } .mode_value <- function(x, ...) 
{ # create frequency table, to find most common value counts <- table(x) modus <- names(counts)[max(counts) == counts] # in case all values appear equally often, use first value if (length(modus) > 1) modus <- modus[1] # check if it's numeric if (!is.na(suppressWarnings(as.numeric(modus)))) as.numeric(modus) else modus } ggeffects/R/ggemmeans_zi_predictions.R0000644000176200001440000000527313565176502017624 0ustar liggesusers.ggemmeans_zi_predictions <- function(model, model_frame, preds, ci.lvl, terms, cleaned_terms, value_adjustment, condition, nsim = 1000, type = "fe") { prdat <- exp(preds$x1$emmean) * (1 - stats::plogis(preds$x2$emmean)) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # data grid newdata <- .data_grid( model = model, model_frame = model_frame, terms = terms, value_adjustment = value_adjustment, factor_adjustment = FALSE, show_pretty_message = FALSE, condition = condition ) # 2nd data grid, reasons see below data_grid <- .data_grid( model = model, model_frame = model_frame, terms = terms, value_adjustment = value_adjustment, show_pretty_message = FALSE, condition = condition, emmeans.only = FALSE ) # Since the zero inflation and the conditional model are working in "opposite # directions", confidence intervals can not be derived directly from the # "predict()"-function. Thus, confidence intervals for type = "fe.zi" are # based on quantiles of simulated draws from a multivariate normal distribution # (see also _Brooks et al. 2017, pp.391-392_ for details). prdat.sim <- .simulate_predictions(model, newdata, nsim, terms, value_adjustment, condition) if (is.null(prdat.sim)) { stop("Predicted values could not be computed. Try reducing number of simulation, using argument `nsim` (e.g. `nsim = 100`)", call. 
= FALSE) } # we need two data grids here: one for all combination of levels from the # model predictors ("newdata"), and one with the current combinations only # for the terms in question ("data_grid"). "sims" has always the same # number of rows as "newdata", but "data_grid" might be shorter. So we # merge "data_grid" and "newdata", add mean and quantiles from "sims" # as new variables, and then later only keep the original observations # from "data_grid" - by this, we avoid unequal row-lengths. sims <- exp(prdat.sim$cond) * (1 - stats::plogis(prdat.sim$zi)) prediction_data <- .join_simulations(data_grid, newdata, prdat, sims, ci, cleaned_terms) if (type == "re.zi") { revar <- .get_random_effect_variance(model) # get link-function and back-transform fitted values # to original scale, so we compute proper CI lf <- insight::link_function(model) prediction_data$conf.low <- exp(lf(prediction_data$conf.low) - stats::qnorm(ci) * sqrt(revar)) prediction_data$conf.high <- exp(lf(prediction_data$conf.high) + stats::qnorm(ci) * sqrt(revar)) prediction_data$std.error <- sqrt(prediction_data$std.error^2 + revar) } prediction_data } ggeffects/R/standard_error_predictions.R0000644000176200001440000001260313604103340020143 0ustar liggesusers# get standard errors of predictions from model matrix and vcov ---- .standard_error_predictions <- function( model, prediction_data, value_adjustment, terms, model_class = NULL, type = "fe", vcov.fun = NULL, vcov.type = NULL, vcov.args = NULL, condition = NULL, interval = NULL) { se <- tryCatch( { .safe_se_from_vcov( model, prediction_data, value_adjustment, terms, model_class, type, vcov.fun, vcov.type, vcov.args, condition, interval ) }, error = function(x) { x }, warning = function(x) { NULL }, finally = function(x) { NULL } ) if (is.null(se) || inherits(se, c("error", "simpleError"))) { insight::print_color("Error: Confidence intervals could not be computed.\n", "red") if (inherits(se, c("error", "simpleError"))) { cat(sprintf("* Reason: 
%s\n", .safe_deparse(se[[1]]))) err.source <- .safe_deparse(se[[2]]) if (all(grepl("^(?!(safe_se_from_vcov))", err.source, perl = TRUE))) { cat(sprintf("* Source: %s\n", err.source)) } } se <- NULL } se } #' @importFrom stats model.matrix terms formula #' @importFrom insight find_random clean_names find_parameters get_varcov find_formula .safe_se_from_vcov <- function(model, prediction_data, value_adjustment, terms, model_class, type, vcov.fun, vcov.type, vcov.args, condition, interval) { model_frame <- insight::get_data(model) # check random effect terms. We can't compute SE if data has # factors with only one level, however, if user conditions on # random effects and only conditions on one level, it is indeed # possible to calculate SE - so, ignore random effects for the # check of one-level-factors only re.terms <- insight::find_random(model, split_nested = TRUE, flatten = TRUE) # we can't condition on categorical variables if (!is.null(condition)) { cn <- names(condition) cn.factors <- sapply(cn, function(.x) is.factor(model_frame[[.x]]) && !(.x %in% re.terms)) condition <- condition[!cn.factors] if (.is_empty(condition)) condition <- NULL } # copy data frame with predictions newdata <- .data_grid( model, model_frame, terms, value_adjustment = value_adjustment, factor_adjustment = FALSE, show_pretty_message = FALSE, condition = condition ) # make sure we have enough values to compute CI nlevels_terms <- sapply( colnames(newdata), function(.x) !(.x %in% re.terms) && is.factor(newdata[[.x]]) && nlevels(newdata[[.x]]) == 1 ) if (any(nlevels_terms)) { not_enough <- colnames(newdata)[which(nlevels_terms)[1]] remove_lvl <- paste0("[", gsub(pattern = "(.*)\\[(.*)\\]", replacement = "\\2", x = terms[which(.clean_terms(terms) == not_enough)]), "]", collapse = "") stop(sprintf("`%s` does not have enough factor levels. Try to remove `%s`.", not_enough, remove_lvl), call. = TRUE) } # add response to newdata. 
For models fitted with "glmmPQL", # the response variable is renamed internally to "zz". if (inherits(model, "glmmPQL")) { new.resp <- 0 names(new.resp) <- "zz" } else { fr <- insight::find_response(model, combine = FALSE) new.resp <- rep(0, length.out = length(fr)) names(new.resp) <- fr } new.resp <- new.resp[setdiff(names(new.resp), colnames(newdata))] newdata <- cbind(as.list(new.resp), newdata) # clean terms from brackets terms <- .clean_terms(terms) # sort data by grouping levels, so we have the correct order # to slice data afterwards if (length(terms) > 2) { trms <- terms[3] newdata <- newdata[order(newdata[[trms]]), ] prediction_data <- prediction_data[order(prediction_data[[trms]]), ] } if (length(terms) > 1) { trms <- terms[2] newdata <- newdata[order(newdata[[trms]]), ] prediction_data <- prediction_data[order(prediction_data[[trms]]), ] } trms <- terms[1] newdata <- newdata[order(newdata[[trms]]), ] prediction_data <- prediction_data[order(prediction_data[[trms]]), ] # rownames were resorted as well, which causes troubles in model.matrix rownames(newdata) <- NULL rownames(prediction_data) <- NULL vmatrix <- .vcov_helper(model, model_frame, model_class, newdata, vcov.fun, vcov.type, vcov.args, terms) pvar <- diag(vmatrix) pr_int <- FALSE # condition on random effect variances if (type == "re" || (!is.null(interval) && interval == "prediction")) { sig <- .get_random_effect_variance(model) if (sig > 0.0001) { pvar <- pvar + sig pr_int <- TRUE } } se.fit <- sqrt(pvar) # shorten to length of prediction_data if (!is.null(model_class) && model_class %in% c("polr", "multinom", "mixor")) se.fit <- rep(se.fit, each = .n_distinct(prediction_data$response.level)) else se.fit <- se.fit[1:nrow(prediction_data)] std_error <- list(prediction_data = prediction_data, se.fit = se.fit) attr(std_error, "prediction_interval") <- pr_int std_error } ggeffects/R/get_predictions_clmm.R0000644000176200001440000000324413577125176016746 0ustar liggesusers#' @importFrom stats 
confint #' @importFrom insight find_random find_predictors print_color get_predictions_clmm <- function(model, terms, value_adjustment, condition, ci.lvl, linv, ...) { if (!requireNamespace("emmeans")) { stop("Package `emmeans` required to compute marginal effects for clmm-models.", call. = FALSE) } values.at <- .data_grid( model = model, model_frame = insight::get_data(model), terms = terms, value_adjustment = value_adjustment, condition = condition, show_pretty_message = FALSE, emmeans.only = TRUE ) # no predicted values at random terms allowed re.terms <- insight::find_random(model, split_nested = TRUE, flatten = TRUE) fe.terms <- insight::find_predictors(model, flatten = TRUE) if (any(re.terms %in% names(values.at)) && !any(re.terms %in% fe.terms)) { insight::print_color("Predicted values can't be computed for levels of random effects from 'clmm' models.\n", "red") cat(sprintf("Please remove following variables from 'terms': %s\n", paste0(re.terms[which(re.terms %in% names(values.at))], collapse = ", "))) return(NULL) } emmpred <- emmeans::emmeans( object = model, spec = c(insight::find_response(model, combine = FALSE), .clean_terms(terms)), at = values.at, mode = "prob" ) fitfram <- as.data.frame(stats::confint(emmpred, level = ci.lvl)) fitfram <- .var_rename( fitfram, prob = "predicted", SE = "std.error", asymp.LCL = "conf.low", asymp.UCL = "conf.high" ) colnames(fitfram)[1] <- "response.level" # copy standard errors attr(fitfram, "std.error") <- fitfram$std.error fitfram } ggeffects/R/get_predictions_bayesx.R0000644000176200001440000000047013564761056017306 0ustar liggesusersget_predictions_bayesx <- function(model, data_grid, ...) 
{ prdat <- suppressMessages( stats::predict( object = model, newdata = data_grid, type = "link" )) data_grid$predicted <- as.vector(prdat) data_grid$conf.low <- NA data_grid$conf.high <- NA data_grid } ggeffects/R/get_predictions_bamlss.R0000644000176200001440000000053113564761043017266 0ustar liggesusersget_predictions_bamlss <- function(model, data_grid, linv, ...) { prdat <- suppressMessages( stats::predict( object = model, newdata = data_grid, model = "mu", type = "link" )) data_grid$predicted <- linv(as.vector(prdat)) data_grid$conf.low <- NA data_grid$conf.high <- NA data_grid } ggeffects/R/get_predictions_lmrob_base.R0000644000176200001440000000114013451124203020071 0ustar liggesusersget_predictions_lmrob_base <- function(model, fitfram, ci.lvl, ...) { if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- "confidence" else ci <- "none" prdat <- stats::predict( model, newdata = fitfram, type = "response", interval = ci, level = ci.lvl, ... ) # get predicted values, on link-scale fitfram$predicted <- prdat[, "fit"] if (ci == "none") { fitfram$conf.low <- NA fitfram$conf.high <- NA } else { fitfram$conf.low <- prdat[, "lwr"] fitfram$conf.high <- prdat[, "upr"] } fitfram } ggeffects/R/get_predictions_mixor.R0000644000176200001440000000275213577107534017155 0ustar liggesusersget_predictions_mixor <- function(model, fitfram, ci.lvl, linv, value_adjustment, terms, model_class, condition, ...) { se <- (!is.null(ci.lvl) && !is.na(ci.lvl)) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- stats::predict( model, newdata = fitfram, ... ) prdat <- as.data.frame(prdat$predicted) # bind predictions to model frame fitfram <- cbind(prdat, fitfram) # for proportional ordinal logistic regression (see MASS::polr), # we have predicted values for each response category. 
Hence, # gather columns fitfram <- .gather(fitfram, names_to = "response.level", values_to = "predicted", colnames(prdat)) se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, terms = terms, model_class = model_class, condition = condition ) if (!is.null(se.pred) && isTRUE(se)) { se.fit <- se.pred$se.fit fitfram <- se.pred$prediction_data # CI fitfram$conf.low <- linv(stats::qlogis(fitfram$predicted) - stats::qnorm(ci) * se.fit) fitfram$conf.high <- linv(stats::qlogis(fitfram$predicted) + stats::qnorm(ci) * se.fit) # copy standard errors attr(fitfram, "std.error") <- se.fit attr(fitfram, "prediction.interval") <- attr(se.pred, "prediction_interval") } else { # CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/utils_get_data_grid.R0000644000176200001440000003636313577124137016555 0ustar liggesusers#' @importFrom stats terms median #' @importFrom sjlabelled as_numeric #' @importFrom insight find_predictors find_response find_random find_weights get_weights # factor_adjustment indicates if factors should be held constant or not # need to be false for computing std.error for merMod objects .data_grid <- function(model, model_frame, terms, value_adjustment, factor_adjustment = TRUE, show_pretty_message = TRUE, condition = NULL, emmeans.only = FALSE) { # special handling for coxph if (inherits(model, c("coxph", "coxme"))) { surv.var <- which(colnames(model_frame) == insight::find_response(model)) model_frame <- .remove_column(model_frame, surv.var) } model_info <- .get_model_info(model) # make sure we don't have arrays as variables model_frame[] <- lapply(model_frame, function(i) if (is.array(i)) as.data.frame(i) else i) # check for logical variables, might not work if (any(sapply(model_frame, is.logical))) { stop("Variables of type 'logical' do not work, please coerce to factor and fit the model again.", call. = FALSE) } # any weights? 
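# Illustrative sketch of the grid this function builds (added, comments only;
# `m` is a hypothetical fitted model). The exported new_data() constructs a
# comparable grid:
#   new_data(m, terms = "hp [100, 150]")
# i.e. `hp` takes the requested values while all remaining predictors are
# held constant at typical values (mean, reference level, or mode).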
w <- insight::get_weights(model) if (is.null(w) || all(w == 1)) w <- NULL # get random effects (grouping factor) random_effect_terms <- insight::find_random(model, split_nested = TRUE, flatten = TRUE) ## TODO check for other panelr models # clean variable names # if (!inherits(model, "wbm")) { colnames(model_frame) <- insight::clean_names(colnames(model_frame)) # } # get specific levels focal_terms <- .get_representative_values(terms, model_frame) # and all specified variables all_terms <- .clean_terms(terms) # check if user has any predictors with log-transformation inside # model formula, but did *not* use the back-transformation "exp". Tell the user # so they are aware of the problem tryCatch( { if (!inherits(model, "brmsfit") && show_pretty_message && .has_log(model)) { clean.term <- insight::find_predictors(model, effects = "all", component = "all", flatten = FALSE) clean.term <- unlist(clean.term[c("conditional", "random", "instruments")])[.get_log_terms(model)] exp.term <- string_ends_with(pattern = "[exp]", x = terms) if (any(.is_empty(exp.term)) || any(.clean_terms(terms)[exp.term] != clean.term)) { message(sprintf("Model has log-transformed predictors. Consider using `terms=\"%s [exp]\"` to back-transform scale.", clean.term[1])) } } }, error = function(x) { NULL }, warning = function(x) { NULL }, finally = function(x) { NULL } )
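# For example (added sketch, comments only): the exported pretty_range() is
# the kind of "prettifying" meant here -- it reduces a continuous predictor
# to a short sequence of representative values,
#   pretty_range(mtcars$hp)
# whereas `terms = "hp [all]"` keeps every observed value.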
use_all_values <- FALSE # for these models, always all values are used all_values_models <- c("Gam", "gam", "vgam", "glm", "lm", "brmsfit", "bamlss", "gamlss", "glmx", "feglm") if (.has_splines(model) && !.uses_all_tag(terms)) { if (inherits(model, all_values_models)) { use_all_values <- TRUE } else if (show_pretty_message) { message(sprintf("Model contains splines or polynomial terms. Consider using `terms=\"%s [all]\"` to get smooth plots. See also package-vignette 'Marginal Effects at Specific Values'.", all_terms[1])) show_pretty_message <- FALSE } } if (.has_poly(model) && !.uses_all_tag(terms) && !use_all_values) { if (inherits(model, all_values_models)) { use_all_values <- TRUE } else if (show_pretty_message) { message(sprintf("Model contains polynomial or cubic / quadratic terms. Consider using `terms=\"%s [all]\"` to get smooth plots. See also package-vignette 'Marginal Effects at Specific Values'.", all_terms[1])) show_pretty_message <- FALSE } } # find terms for which no specific values are given conditional_terms <- which(!(all_terms %in% names(focal_terms))) # prettify numeric vectors, get representative values constant_levels <- .prettify_data( conditional_terms = conditional_terms, original_model_frame = model_frame, terms = all_terms, use_all_values = use_all_values, show_pretty_message = show_pretty_message && model_info$is_binomial ) names(constant_levels) <- all_terms[conditional_terms] focal_terms <- c(focal_terms, constant_levels) ## TODO check for other panelr models # get names of all predictor variable # if (inherits(model, "wbm")) { # model_predictors <- colnames(model_frame) # } else { # model_predictors <- insight::find_predictors(model, effects = "all", component = "all", flatten = TRUE) # } model_predictors <- insight::find_predictors(model, effects = "all", component = "all", flatten = TRUE) if (inherits(model, "wbm")) { model_predictors <- unique(c(insight::find_response(model), model_predictors, model@call_info$id, 
model@call_info$wave)) } # get count of terms, and number of columns n_predictors <- length(model_predictors) # remove NA from values, so we don't have expanded data grid # with missing values. this causes an error with predict() if (any(sapply(focal_terms, anyNA))) { focal_terms <- lapply(focal_terms, function(.x) as.vector(stats::na.omit(.x))) } ## TODO check, it should actually no longer happen that # the values of "model_predictors" are not in the column names of # the model frame "model_frame" # names of predictor variables may vary, e.g. if log(x) # or poly(x) etc. is used. so check if we have correct # predictor names that also appear in model frame ## TODO brms does currently not support "terms()" generic if (!inherits(model, "wbm")) { if (sum(!(model_predictors %in% colnames(model_frame))) > 0 && !inherits(model, "brmsfit")) { # get terms from model directly model_predictors <- attr(stats::terms(model), "term.labels", exact = TRUE) } # 2nd check if (is.null(model_predictors) || sum(!(model_predictors %in% colnames(model_frame))) > 0) { # get terms from model frame column names model_predictors <- colnames(model_frame) # we may have more terms now, e.g. intercept. remove those now if (length(model_predictors) > n_predictors) model_predictors <- model_predictors[2:(n_predictors + 1)] } } else { model_predictors <- model_predictors[model_predictors %in% colnames(model_frame)] } # keep those, which we did not process yet model_predictors <- model_predictors[!(model_predictors %in% names(focal_terms))] # if we have weights, and typical value is mean, use weighted means # as function for the typical values if (!.is_empty(w) && length(w) == nrow(model_frame) && value_adjustment == "mean") { value_adjustment <- "weighted.mean" } if (value_adjustment == "weighted.mean" && .is_empty(w)) { value_adjustment <- "mean" } # do we have variables that should be held constant at a # specific value? 
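# E.g. (hypothetical sketch, comments only): a covariate can be fixed at a
# user-chosen value instead of its typical value via the `condition` argument,
#   ggpredict(m, terms = "x", condition = c(covariate = 10))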
if (!is.null(condition) && !is.null(names(condition))) { focal_terms <- c(focal_terms, as.list(condition)) model_predictors <- model_predictors[!(model_predictors %in% names(condition))] } # add all constant values to list. For those predictors that have to be # held constant, use "typical" values - mean/median for numeric values, # reference level for factors and most common element for character vectors if (isTRUE(emmeans.only)) { # adjust constant values, special handling for emmeans only constant_values <- lapply(model_predictors, function(.x) { x <- model_frame[[.x]] if (!is.factor(x) && !.x %in% random_effect_terms) { .typical_value(x, fun = value_adjustment, weights = w) } }) names(constant_values) <- model_predictors constant_values <- .compact_list(constant_values) } else if (factor_adjustment) { # adjust constant values, factors set to reference level constant_values <- lapply(model_frame[model_predictors], function(x) { if (is.factor(x)) x <- droplevels(x) .typical_value(x, fun = value_adjustment, weights = w) }) } else { # adjust constant values, use all factor levels re.grp <- insight::find_random(model, split_nested = TRUE, flatten = TRUE) # if factors should not be held constant (needed when computing # std.error for merMod objects), we need all factor levels, # and not just the typical value constant_values <- lapply(model_predictors, function(.x) { # get group factors from random effects is.re.grp <- !is.null(re.grp) && .x %in% re.grp x <- model_frame[[.x]] # only get levels if not random effect if (is.factor(x) && !is.re.grp) { levels(droplevels(x)) } else { if (is.factor(x)) x <- droplevels(x) .typical_value(x, fun = value_adjustment, weights = w) } }) names(constant_values) <- model_predictors } # for brms-models with additional response information, we need # also the number of trials to calculate predictions n.trials <- NULL if (!is.null(model_info) && model_info$is_trial && inherits(model, "brmsfit")) { tryCatch( { rv <- 
insight::find_response(model, combine = FALSE) # check if trials-variable is held constant at another value already if (!(rv[2] %in% names(condition))) { n.trials <- as.integer(stats::median(model_frame[[rv[2]]])) if (!.is_empty(n.trials)) { constant_values <- c(constant_values, list(n.trials)) names(constant_values)[length(constant_values)] <- rv[2] } } }, error = function(x) { NULL } ) } # for MixMod, we need mean value of response as well... if (inherits(model, c("MixMod", "MCMCglmm"))) { constant_values <- c(constant_values, .typical_value(insight::get_response(model))) names(constant_values)[length(constant_values)] <- insight::find_response(model, combine = FALSE) } # add constant values. focal_terms <- c(focal_terms, constant_values) # stop here for emmeans-objects if (isTRUE(emmeans.only)) { # remove grouping factor of RE from constant values # only applicable for MixMod objects if (inherits(model, "MixMod") && !is.null(random_effect_terms) && !.is_empty(constant_values) && any(random_effect_terms %in% names(constant_values))) { constant_values <- constant_values[!(names(constant_values) %in% random_effect_terms)] } # save names focal_term_names <- names(focal_terms) # restore original type focal_terms <- lapply(focal_term_names, function(x) { # check for consistent vector type: numeric if (is.numeric(model_frame[[x]]) && !is.numeric(focal_terms[[x]])) return(sjlabelled::as_numeric(focal_terms[[x]])) # check for consistent vector type: factor if (is.factor(model_frame[[x]]) && !is.factor(focal_terms[[x]])) return(as.character(focal_terms[[x]])) # else return original vector return(focal_terms[[x]]) }) # add back names names(focal_terms) <- focal_term_names # save constant values as attribute attr(focal_terms, "constant.values") <- constant_values attr(focal_terms, "n.trials") <- n.trials # remember if grouping "factor" is numeric. 
this is possibly required # later when plotting data points for continuous predictors that are # held constant at their mean/sd values or similar. if (length(terms) > 1) { attr(focal_terms, "continuous.group") <- is.numeric(focal_terms[[2]]) } else { attr(focal_terms, "continuous.group") <- FALSE } return(focal_terms) } # create data frame with all unique combinations dat <- as.data.frame(expand.grid(focal_terms)) # we have to check type consistency. If the user specified certain values # (e.g. "education [1,3]"), these are returned as string and coerced # to factor, even if original vector was numeric. In this case, we have # to coerce back these variables. Else, predict() complains that model # was fitted with numeric, but newdata has factor (or vice versa). datlist <- lapply(colnames(dat), function(x) { # check for consistent vector type: numeric if (is.numeric(model_frame[[x]]) && !is.numeric(dat[[x]])) return(sjlabelled::as_numeric(dat[[x]])) # check for consistent vector type: factor if (is.factor(model_frame[[x]]) && !is.factor(dat[[x]])) return(as.factor(dat[[x]])) # else return original vector return(dat[[x]]) }) # get list names. we need to remove patterns like "log()" etc. names(datlist) <- names(focal_terms) datlist <- as.data.frame(datlist) # in case we have variable names with white space, fix here if (any(names(focal_terms) != colnames(datlist))) { colnames(datlist) <- names(focal_terms) } if (inherits(model, "wbm")) { colnames(datlist) <- names(focal_terms) } # check if predictions should be conditioned on random effects, # but not on each group level. If so, set random effect to NA # which will return predictions on a population level. # See ?glmmTMB::predict if (inherits(model, c("glmmTMB", "merMod", "rlmerMod", "MixMod", "brmsfit", "lme"))) { cleaned_terms <- .clean_terms(terms) # check if we have fixed effects as grouping factor in random effects as well...
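# Sketch (added, comments only; `m` is a hypothetical mixed model): when the
# grouping factor is not among the focal `terms`, predictions are returned on
# the population level, e.g.
#   ggpredict(m, terms = "x", type = "fe")
# with the random-effect grouping variable set to NA (glmmTMB/brms/MixMod)
# or 0 (merMod/rlmerMod/lme).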
cleaned_terms <- unique(c(cleaned_terms, insight::find_predictors(model, effects = "fixed", flatten = TRUE))) # if so, remove from random-effects here random_effect_terms <- random_effect_terms[!(random_effect_terms %in% cleaned_terms)] if (!.is_empty(random_effect_terms) && !.is_empty(constant_values)) { # need to check if predictions are conditioned on a specific # value of a random effect if (inherits(model, c("glmmTMB", "brmsfit", "MixMod"))) { for (i in random_effect_terms) { if (i %in% names(constant_values)) { datlist[[i]] <- NA constant_values[i] <- "NA (population-level)" } } } else if (inherits(model, c("merMod", "rlmerMod", "lme"))) { for (i in random_effect_terms) { if (i %in% names(constant_values)) { datlist[[i]] <- 0 constant_values[i] <- "0 (population-level)" } } } } } # save constant values as attribute attr(datlist, "constant.values") <- constant_values attr(datlist, "n.trials") <- n.trials # remember if grouping "factor" is numeric. this is possibly required # later when plotting data points for continuous predictors that are # held constant at their mean/sd values or similar. if (length(terms) > 1) { attr(datlist, "continuous.group") <- is.numeric(datlist[[2]]) } else { attr(datlist, "continuous.group") <- FALSE } w <- insight::find_weights(model) if (!is.null(w) && !inherits(model, "brmsfit")) { datlist$.w <- as.numeric(NA) colnames(datlist)[ncol(datlist)] <- w } datlist } ggeffects/R/get_predictions_tobit.R0000644000176200001440000000172313451124203017114 0ustar liggesusersget_predictions_tobit <- function(model, fitfram, ci.lvl, linv, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- stats::predict( model, newdata = fitfram, type = "lp", se.fit = se, ... ) # did user request standard errors?
if yes, compute CI if (se) { # copy predictions fitfram$predicted <- linv(prdat$fit) # calculate CI fitfram$conf.low <- linv(prdat$fit - stats::qnorm(ci) * prdat$se.fit) fitfram$conf.high <- linv(prdat$fit + stats::qnorm(ci) * prdat$se.fit) # copy standard errors attr(fitfram, "std.error") <- prdat$se.fit } else { # copy predictions fitfram$predicted <- linv(as.vector(prdat)) # no CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/get_predictions_glm.R0000644000176200001440000000127613565167350016575 0ustar liggesusersget_predictions_glm <- function(model, fitfram, ci.lvl, linv, value_adjustment, model_class, terms, vcov.fun, vcov.type, vcov.args, condition, interval, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) && is.null(vcov.fun) # for models from "robust"-pkg (glmRob) we need to # suppress warnings about fake models prdat <- suppressWarnings(stats::predict.glm( model, newdata = fitfram, type = "link", se.fit = se, ... )) # copy predictions .generic_prediction_data(model, fitfram, linv, prdat, se, ci.lvl, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, interval) } ggeffects/R/get_predictions_glmmTMB.R0000644000176200001440000001420513565176417017315 0ustar liggesusers#' @importFrom stats predict qnorm plogis #' @importFrom insight link_function print_color get_predictions_glmmTMB <- function(model, data_grid, ci.lvl, linv, type, terms, value_adjustment, condition, ...) { # does user want standard errors? 
se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # copy object predicted_data <- data_grid model_info <- insight::model_info(model) clean_terms <- .clean_terms(terms) # check if we have zero-inflated model part if (!model_info$is_zero_inflated && type %in% c("fe.zi", "re.zi")) { if (type == "fe.zi") type <- "fe" else type <- "re" message(sprintf("Model has no zero-inflation part. Changing prediction-type to \"%s\".", type)) } # check whether predictions should be conditioned # on random effects (grouping level) or not. if (type %in% c("fe", "fe.zi")) ref <- NA else ref <- NULL additional_dot_args <- lapply(match.call(expand.dots = F)$`...`, function(x) x) if ("nsim" %in% names(additional_dot_args)) nsim <- eval(additional_dot_args[["nsim"]]) else nsim <- 1000 # predictions conditioned on zero-inflation component if (type %in% c("fe.zi", "re.zi")) { prdat <- as.vector(stats::predict( model, newdata = data_grid, type = "response", se.fit = FALSE, ## FIXME not implemented in glmmTMB <= 0.2.2 # re.form = ref, ... )) if (!se) { predicted_data$predicted <- prdat predicted_data$conf.low <- NA predicted_data$conf.high <- NA } else { model_frame <- insight::get_data(model) # we need a data grid with combination from *all* levels for # all model predictors, so the data grid has the same number # of rows as our simulated data from ".simulate_predictions" newdata <- .data_grid( model = model, model_frame = model_frame, terms = terms, value_adjustment = value_adjustment, factor_adjustment = FALSE, show_pretty_message = FALSE, condition = condition ) # Since the zero inflation and the conditional model are working in "opposite # directions", confidence intervals can not be derived directly from the # "predict()"-function. Thus, confidence intervals for type = "fe.zi" are # based on quantiles of simulated draws from a multivariate normal distribution # (see also _Brooks et al. 
2017, pp.391-392_ for details). prdat.sim <- .simulate_predictions(model, newdata, nsim, terms, value_adjustment, condition) if (is.null(prdat.sim) || inherits(prdat.sim, c("error", "simpleError"))) { insight::print_color("Error: Confidence intervals could not be computed.\n", "red") if (inherits(prdat.sim, c("error", "simpleError"))) { cat(sprintf("* Reason: %s\n", .safe_deparse(prdat.sim[[1]]))) cat(sprintf("* Source: %s\n", .safe_deparse(prdat.sim[[2]]))) } predicted_data$predicted <- prdat predicted_data$conf.low <- NA predicted_data$conf.high <- NA } else { # we need two data grids here: one for all combination of levels from the # model predictors ("newdata"), and one with the current combinations only # for the terms in question ("data_grid"). "sims" has always the same # number of rows as "newdata", but "data_grid" might be shorter. So we # merge "data_grid" and "newdata", add mean and quantiles from "sims" # as new variables, and then later only keep the original observations # from "data_grid" - by this, we avoid unequal row-lengths. 
sims <- exp(prdat.sim$cond) * (1 - stats::plogis(prdat.sim$zi)) predicted_data <- .join_simulations(data_grid, newdata, prdat, sims, ci, clean_terms) if (type == "re.zi") { revar <- .get_random_effect_variance(model) # get link-function and back-transform fitted values # to original scale, so we compute proper CI lf <- insight::link_function(model) predicted_data$conf.low <- exp(lf(predicted_data$conf.low) - stats::qnorm(ci) * sqrt(revar)) predicted_data$conf.high <- exp(lf(predicted_data$conf.high) + stats::qnorm(ci) * sqrt(revar)) predicted_data$std.error <- sqrt(predicted_data$std.error^2 + revar) } } } } else if (type == "sim") { # predictions conditioned on zero-inflation component and random # effects, based on simulations predicted_data <- simulate_predictions(model, nsim, clean_terms, ci) } else { # predictions conditioned on count component only prdat <- stats::predict( model, newdata = data_grid, type = "link", se.fit = se, ## FIXME not implemented in glmmTMB <= 0.2.2 ## TODO once this is fixed, update docs in ggpredict, argument type # re.form = ref, ... ) # did user request standard errors? if yes, compute CI if (se) { predicted_data$predicted <- linv(prdat$fit) # add random effect uncertainty to s.e. 
if (type %in% c("re", "re.zi")) { pvar <- prdat$se.fit^2 prdat$se.fit <- sqrt(pvar + .get_random_effect_variance(model)) } # calculate CI predicted_data$conf.low <- linv(prdat$fit - stats::qnorm(ci) * prdat$se.fit) predicted_data$conf.high <- linv(prdat$fit + stats::qnorm(ci) * prdat$se.fit) predicted_data$std.error <- prdat$se.fit } else { # copy predictions predicted_data$predicted <- linv(as.vector(prdat)) # no CI predicted_data$conf.low <- NA predicted_data$conf.high <- NA } } if (.obj_has_name(predicted_data, "std.error")) { # copy standard errors attr(predicted_data, "std.error") <- predicted_data$std.error predicted_data <- .remove_column(predicted_data, "std.error") } attr(predicted_data, "prediction.interval") <- type %in% c("re", "re.zi") predicted_data } ggeffects/R/get_predictions_lrm.R0000644000176200001440000000175413451124203016571 0ustar liggesusers#' @importFrom stats plogis qnorm get_predictions_lrm <- function(model, fitfram, ci.lvl, linv, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- stats::predict( model, newdata = fitfram, type = "lp", se.fit = se, ... ) # copy predictions fitfram$predicted <- stats::plogis(prdat$linear.predictors) # did user request standard errors? if yes, compute CI if (se) { # calculate CI fitfram$conf.low <- stats::plogis(prdat$linear.predictors - stats::qnorm(ci) * prdat$se.fit) fitfram$conf.high <- stats::plogis(prdat$linear.predictors + stats::qnorm(ci) * prdat$se.fit) # copy standard errors attr(fitfram, "std.error") <- prdat$se.fit } else { # No CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/get_predictions_multinom.R0000644000176200001440000000247513577107552017665 0ustar liggesusersget_predictions_multinom <- function(model, fitfram, ci.lvl, linv, value_adjustment, terms, model_class, ...) 
{ # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- stats::predict( model, newdata = fitfram, type = "probs", ... ) if (is.data.frame(prdat) || is.matrix(prdat)) nc <- 1:ncol(prdat) else nc <- 1 # Matrix to vector tmp <- cbind(as.data.frame(prdat), fitfram) fitfram <- .gather(tmp, names_to = "response.level", values_to = "predicted", colnames(tmp)[nc]) # se.pred <- # .standard_error_predictions( # model = model, # prediction_data = fitfram, # value_adjustment = value_adjustment, # terms = terms, # model_class = model_class # ) # # if (!is.null(se.pred)) { # se.fit <- se.pred$se.fit # fitfram <- se.pred$prediction_data # # CI # fitfram$conf.low <- linv(stats::qlogis(fitfram$predicted) - stats::qnorm(ci) * se.fit) # fitfram$conf.high <- linv(stats::qlogis(fitfram$predicted) + stats::qnorm(ci) * se.fit) # } else { # # No CI # fitfram$conf.low <- NA # fitfram$conf.high <- NA # } # No CI fitfram$conf.low <- NA fitfram$conf.high <- NA fitfram } ggeffects/R/get_predictions_clm.R0000644000176200001440000000301213577110520016545 0ustar liggesusers#' @importFrom insight get_response get_predictions_clm <- function(model, data_grid, ci.lvl, linv, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # prediction, with CI prdat <- stats::predict( model, newdata = data_grid, type = "prob", interval = se, level = ci, ... ) # convert to data frame. prdat <- as.data.frame(prdat) # bind predictions to model frame data_grid <- cbind(prdat, data_grid) # get levels of response lv <- levels(insight::get_response(model)) # for proportional ordinal logistic regression (see ordinal::clm), # we have predicted values for each response category. Hence, # gather columns. Since we also have conf. int. 
for each response # category, we need to gather multiple columns at once if (isTRUE(se)) { # length of each variable block l <- seq_len(ncol(prdat) / 3) colnames(data_grid)[l] <- lv data_grid <- .multiple_gather( data_grid, names_to = "response.level", values_to = c("predicted", "conf.low", "conf.high"), columns = list(l, l + length(l), l + 2 * length(l)) ) } else { data_grid <- .gather(data_grid, names_to = "response.level", values_to = "predicted", colnames(prdat)) # No CI data_grid$conf.low <- NA data_grid$conf.high <- NA } data_grid } ggeffects/R/get_predictions_wbm.R0000644000176200001440000000270513565167350016601 0ustar liggesusersget_predictions_wbm <- function(model, fitfram, ci.lvl, linv, type, terms, condition, ...) { # does user want standard errors? se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # check whether predictions should be conditioned # on random effects (grouping level) or not. if (type == "fe") ref <- NA else ref <- NULL clean_terms <- .clean_terms(terms) if (type == "sim") { add.args <- lapply(match.call(expand.dots = F)$`...`, function(x) x) if ("nsim" %in% names(add.args)) nsim <- eval(add.args[["nsim"]]) else nsim <- 1000 fitfram <- simulate_predictions(model, nsim, clean_terms, ci) } else { pred <- suppressWarnings(stats::predict( model, newdata = fitfram, type = "link", re.form = ref, allow.new.levels = TRUE, use.re.var = type == "re", se.fit = se, ... 
)) if (se) { fitfram$predicted <- linv(pred$fit) fitfram$conf.low <- linv(pred$fit - stats::qnorm(ci) * pred$se.fit) fitfram$conf.high <- linv(pred$fit + stats::qnorm(ci) * pred$se.fit) # copy standard errors attr(fitfram, "std.error") <- pred$se.fit } else { fitfram$predicted <- linv(as.vector(pred)) fitfram$conf.low <- NA fitfram$conf.high <- NA } } fitfram } ggeffects/R/getter.R0000644000176200001440000001107013577106737014044 0ustar liggesusers#' @title Get titles and labels from data #' @name get_title #' #' @description Get variable and value labels from \code{ggeffects}-objects. Functions #' like \code{ggpredict()} or \code{ggeffect()} save #' information on variable names and value labels as additional attributes #' in the returned data frame. This is especially helpful for labelled #' data (see \CRANpkg{sjlabelled}), since these labels can be used to #' set axis labels and titles. #' #' @param x An object of class \code{ggeffects}, as returned by any ggeffects-function; #' for \code{get_complete_df()}, must be a list of \code{ggeffects}-objects. #' @param case Desired target case. Labels will automatically converted into the #' specified character case. See \code{\link[sjlabelled]{convert_case}} for #' more details on this argument. #' #' @return The titles or labels as character string, or \code{NULL}, if variables #' had no labels; \code{get_complete_df()} returns the input list \code{x} #' as single data frame, where the grouping variable indicates the #' marginal effects for each term. 
#' #' @examples #' library(sjmisc) #' data(efc) #' efc$c172code <- to_factor(efc$c172code) #' fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc) #' #' mydf <- ggpredict(fit, terms = c("c12hour", "c161sex", "c172code")) #' #' library(ggplot2) #' ggplot(mydf, aes(x = x, y = predicted, colour = group)) + #' stat_smooth(method = "lm") + #' facet_wrap(~facet, ncol = 2) + #' labs( #' x = get_x_title(mydf), #' y = get_y_title(mydf), #' colour = get_legend_title(mydf) #' ) #' #' # get marginal effects, a list of data frames (one data frame per term) #' eff <- ggeffect(fit) #' eff #' get_complete_df(eff) #' #' # get marginal effects for education only, and get x-axis-labels #' mydat <- eff[["c172code"]] #' ggplot(mydat, aes(x = x, y = predicted, group = group)) + #' stat_summary(fun.y = sum, geom = "line") + #' scale_x_discrete(labels = get_x_labels(mydat)) #' #' @export get_title <- function(x, case = NULL) { if (.is_empty(x)) return(NULL) if (!inherits(x, "ggeffects")) stop("`x` must be of class `ggeffects`.", call. = F) sjlabelled::convert_case(attr(x, which = "title", exact = T), case) } #' @rdname get_title #' @export get_x_title <- function(x, case = NULL) { if (.is_empty(x)) return(NULL) if (!inherits(x, "ggeffects")) stop("`x` must be of class `ggeffects`.", call. = F) sjlabelled::convert_case(attr(x, which = "x.title", exact = T), case) } #' @rdname get_title #' @export get_y_title <- function(x, case = NULL) { if (.is_empty(x)) return(NULL) if (!inherits(x, "ggeffects")) stop("`x` must be of class `ggeffects`.", call. = F) sjlabelled::convert_case(attr(x, which = "y.title", exact = T), case) } #' @rdname get_title #' @export get_legend_title <- function(x, case = NULL) { if (.is_empty(x)) return(NULL) if (!inherits(x, "ggeffects")) stop("`x` must be of class `ggeffects`.", call. 
= F) sjlabelled::convert_case(attr(x, which = "legend.title", exact = T), case) } #' @rdname get_title #' @export get_legend_labels <- function(x, case = NULL) { if (.is_empty(x)) return(NULL) if (!inherits(x, "ggeffects")) stop("`x` must be of class `ggeffects`.", call. = F) sjlabelled::convert_case(attr(x, which = "legend.labels", exact = T), case) } #' @rdname get_title #' @export get_x_labels <- function(x, case = NULL) { if (.is_empty(x)) return(NULL) if (!inherits(x, "ggeffects")) stop("`x` must be of class `ggeffects`.", call. = F) labs <- attr(x, which = "x.axis.labels", exact = T) if (!is.numeric(labs)) { sjlabelled::convert_case(attr(x, which = "x.axis.labels", exact = T), case) } else { labs } } #' @rdname get_title #' @importFrom sjlabelled as_numeric #' @export get_complete_df <- function(x, case = NULL) { suppressWarnings(do.call(rbind, lapply(x, function(df) { df$x <- sjlabelled::as_numeric(df$x) df }))) } get_sub_title <- function(x, case = NULL) { if (.is_empty(x)) return(NULL) if (!inherits(x, "ggeffects")) stop("`x` must be of class `ggeffects`.", call. = F) st <- attr(x, which = "n.trials", exact = T) panel <- attr(x, which = "panel.title", exact = T) if (!is.null(panel)) sjlabelled::convert_case(panel, case) else if (!is.null(st)) sprintf("(for %s trials)", st) else NULL } ggeffects/R/utils_get_representative_values.R0000644000176200001440000000735513577120623021251 0ustar liggesusers# return levels, as list # c("age", "edu [1,3]", "sex [2]") would return a list: # $edu [1] 1 3; $sex [1] 2 #' @importFrom stats setNames sd #' @importFrom sjlabelled as_numeric .get_representative_values <- function(x, model_frame = NULL) { # get variable with suffix terms_with_suffix <- which(as.vector(regexpr(pattern = "([^\\]]*)\\]", text = x, perl = TRUE)) != -1) # is empty? if (.is_empty(terms_with_suffix)) return(NULL) # get variable names. 
needed later to set as # names attributes at_terms <- .clean_terms(x)[terms_with_suffix] # get levels inside brackets at_levels <- unlist(regmatches(x, gregexpr(pattern = "\\[(.*)\\]", text = x, perl = TRUE))) # remove brackets at_levels <- gsub("(\\[*)(\\]*)", "", at_levels) # see if we have multiple values, split at comma at_levels <- lapply(strsplit(at_levels, ",", fixed = TRUE), trimws) # moderator pattern at_pattern <- c("minmax", "meansd", "zeromax", "quart2", "all", "quart") # now check for ranges at_levels <- mapply(function(x, y) { # Here we may have a range of values. we then create the # sequence with all values from this range if (any(grepl(":", x, fixed = TRUE))) { # values at sequence (from to) ------------------------------------------ from_to_by <- s <- unlist(lapply(strsplit(x, ":", fixed = TRUE), trimws)) if (grepl("by", s[2], fixed = TRUE)) { from_to_by[2] <- sub("(.*)(\\s*)by(\\s*)=(.*)", "\\1", x = s[2]) from_to_by[3] <- sub("(.*)(\\s*)by(\\s*)=(.*)", "\\4", x = s[2]) } else { from_to_by[3] <- "1" } from_to_by <- as.numeric(from_to_by) x <- seq(from = from_to_by[1], to = from_to_by[2], by = from_to_by[3]) } else if (length(x) == 1 && grepl("^n(\\s*)=", x)) { # values at pretty range ----------------------------------------------- steps <- as.numeric(trimws(substring(gsub(" ", "", x), first = 3))) x <- pretty_range(model_frame[[y]], n = steps) } else if (length(x) == 1 && grepl("^sample(\\s*)=", x)) { # values at random samples --------------------------------------------- size <- as.numeric(trimws(substring(gsub(" ", "", x), first = 8))) lev <- stats::na.omit(unique(model_frame[[y]])) pos <- sample.int(n = length(lev), size = size, replace = FALSE) x <- lev[pos] if (is.factor(x)) { if (.is_numeric_factor(x)) { x <- sjlabelled::as_numeric( droplevels(x), keep.labels = FALSE ) } else { x <- as.character(x) } } } else if (length(x) == 1 && grepl("[[:alpha:]]", x)) { # values at function --------------------------------------------- # else, we also 
may have a character expression. This may # either be the name of a valid function. In this case, we # transform the values for predictions using this function. # Else, it also might be the name of a value label, so no # valid function name. In this case, simply return the label. if (x == "pretty") { x <- pretty_range(model_frame[[y]]) } else if (x %in% at_pattern) { x <- values_at(model_frame[[y]], values = x) } else { funtrans <- try(match.fun(x), silent = TRUE) if (!inherits(funtrans, "try-error") && !is.null(model_frame)) { x <- funtrans(sort(unique(model_frame[[y]]))) } } } x }, at_levels, at_terms, SIMPLIFY = FALSE) # check if levels were numeric or not... suppressWarnings( if (!anyNA(unlist(lapply(at_levels, as.numeric)))) { at_levels <- lapply(at_levels, as.numeric) } ) stats::setNames(at_levels, at_terms) } ggeffects/R/themes.R0000644000176200001440000001057113560224775014037 0ustar liggesusers#' @rdname plot #' @export theme_ggeffects <- function(base_size = 11, base_family = "") { if (!requireNamespace("ggplot2", quietly = TRUE)) { stop("Package `ggplot2` needed for this function.", call.
= FALSE) } (ggplot2::theme_minimal(base_size = base_size, base_family = base_family) + ggplot2::theme( axis.line.x = ggplot2::element_line(colour = "grey80"), axis.line.y = ggplot2::element_line(colour = "grey80"), axis.text = ggplot2::element_text(colour = "grey50"), axis.title = ggplot2::element_text(colour = "grey30"), strip.background = ggplot2::element_rect(colour = "grey70", fill = "grey90"), strip.text = ggplot2::element_text(colour = "grey30"), legend.title = ggplot2::element_text(colour = "grey30"), legend.text = ggplot2::element_text(colour = "grey30") )) } ggeffects_colors <- list( `aqua` = c("#BAF5F3", "#46A9BE", "#8B7B88", "#BD7688", "#F2C29E"), `warm` = c("#072835", "#664458", "#C45B46", "#F1B749", "#F8EB85"), `dust` = c("#232126", "#7B5756", "#F7B98B", "#F8F7CF", "#AAAE9D"), `blambus` = c("#E02E1F", "#5D8191", "#BD772D", "#494949", "#F2DD26"), `simply` = c("#CD423F", "#0171D3", "#018F77", "#FCDA3B", "#F5C6AC"), `us` = c("#004D80", "#376C8E", "#37848E", "#9BC2B6", "#B5D2C0"), `reefs` = c("#43a9b6", "#218282", "#dbdcd1", "#44515c", "#517784"), `breakfast club` = c("#b6411a", "#4182dd", "#2d6328", "#eec3d8", "#ecf0c8"), `metro` = c("#d11141", "#00aedb", "#00b159", "#f37735", "#8c8c8c", "#ffc425", "#cccccc"), `viridis` = c("#440154", "#46337E", "#365C8D", "#277F8E", "#1FA187", "#4AC16D", "#9FDA3A", "#FDE725"), `ipsum` = c("#3f2d54", "#75b8d1", "#2d543d", "#d18975", "#8fd175", "#d175b8", "#758bd1", "#d1ab75", "#c9d175"), `quadro` = c("#ff0000", "#1f3c88", "#23a393", "#f79f24", "#625757"), `eight` = c("#003f5c", "#2f4b7c", "#665191", "#a05195", "#d45087", "#f95d6a", "#ff7c43", "#ffa600"), `circus` = c("#C1241E", "#0664C9", "#EBD90A", "#6F130D", "#111A79"), `system` = c("#0F2838", "#F96207", "#0DB0F3", "#04EC04", "#FCC44C"), `hero` = c("#D2292B", "#165E88", "#E0BD1C", "#D57028", "#A5CB39", "#8D8F70"), `flat` = c("#c0392b", "#2980b9", "#16a085", "#f39c12", "#8e44ad", "#7f8c8d", "#d35400"), `social` = c("#b92b27", "#0077B5", "#00b489", "#f57d00", "#410093", 
"#21759b", "#ff3300"), `set1` = c("#E41A1C", "#377EB8", "#4DAF4A", "#984EA3", "#FF7F00", "#FFFF33", "#A65628", "#F781BF", "#999999"), `greyscale` = c("#333333", "#5A5A5A", "#737373", "#878787", "#989898", "#A7A7A7", "#B4B4B4", "#C1C1C1", "#CCCCCC") ) #' @importFrom stats quantile ggeffects_pal <- function(palette = "metro", n = NULL) { pl <- ggeffects_colors[[palette]] if (!is.null(n) && n <= length(pl)) { if (.is_cont_scale(palette)) { pl <- pl[stats::quantile(1:length(pl), probs = seq(0, 1, length.out = n))] } else { pl <- pl[1:n] } } pl } # palettes with a continuous color scale .is_cont_scale <- function(p) { p %in% c("aqua", "dust", "eight", "greyscale", "us", "viridis", "warm") } #' @rdname plot #' @export show_pals <- function() { if (!requireNamespace("ggplot2", quietly = TRUE)) { stop("Package `ggplot2` needed for this function.", call. = FALSE) } longest.pal <- max(sapply(ggeffects_colors, length)) color_pal <- lapply(ggeffects_colors, function(.x) { if (length(.x) == longest.pal) .x else c(.x, rep("#ffffff", times = longest.pal - length(.x))) }) x <- as.data.frame(color_pal) x <- .gather(x[nrow(x):1, ]) x <- x[order(x$key), ] x$y <- rep_len(1:longest.pal, nrow(x)) x$cols <- as.factor(1:nrow(x)) x$key <- factor(x$key, levels = rev(unique(x$key))) x$group <- "Other Palettes" x$group[.is_cont_scale(x$key)] <- "Continuous Palettes" x$group[x$key %in% c("breakfast.club", "flat", "metro", "quadro", "set1", "simply", "social")] <- "Red-Blue-Green Palettes" ggplot2::ggplot(x, ggplot2::aes_string(x = "key", fill = "cols")) + ggplot2::geom_bar(width = .7) + ggplot2::scale_fill_manual(values = x$value) + ggplot2::scale_y_continuous(breaks = NULL, labels = NULL) + ggplot2::guides(fill = "none") + ggplot2::coord_flip() + ggplot2::theme_minimal() + ggplot2::labs(x = NULL, y = NULL) + ggplot2::facet_wrap(~group, ncol = 1, scales = "free") } ggeffects/R/get_predictions_gam2.R0000644000176200001440000000261013565167350016635 0ustar liggesusersget_predictions_Gam <-
function(model, fitfram, ci.lvl, linv, value_adjustment, terms, model_class, condition, ...) { se <- !is.null(ci.lvl) && !is.na(ci.lvl) # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 prdat <- stats::predict( model, newdata = fitfram, type = "link", ## TODO currently not supported se.fit = FALSE ) # copy predictions fitfram$predicted <- linv(as.vector(prdat)) # did user request standard errors? if yes, compute CI if (se) { se.pred <- .standard_error_predictions( model = model, prediction_data = fitfram, value_adjustment = value_adjustment, terms = terms, model_class = model_class, condition = condition ) if (!is.null(se.pred)) { se.fit <- se.pred$se.fit fitfram <- se.pred$prediction_data # calculate CI fitfram$conf.low <- linv(as.vector(prdat) - stats::qnorm(ci) * se.fit) fitfram$conf.high <- linv(as.vector(prdat) + stats::qnorm(ci) * se.fit) # copy standard errors attr(fitfram, "std.error") <- se.fit } else { # no CI fitfram$conf.low <- NA fitfram$conf.high <- NA } } else { # no CI fitfram$conf.low <- NA fitfram$conf.high <- NA } fitfram } ggeffects/R/get_predictions_zeroinfl.R0000644000176200001440000001102213565176350017634 0ustar liggesusers#' @importFrom stats qlogis predict qnorm get_predictions_zeroinfl <- function(model, data_grid, ci.lvl, linv, type, model_class, value_adjustment, terms, vcov.fun, vcov.type, vcov.args, condition, ...) { # get prediction type. 
pt <- if (model_class == "zeroinfl" && type == "fe") "count" else if (model_class == "zeroinfl" && type == "fe.zi") "response" else if (model_class == "zerotrunc" && type == "fe") "count" else if (model_class == "zerotrunc" && type == "fe.zi") "response" else if (model_class == "hurdle" && type == "fe") "count" else if (model_class == "hurdle" && type == "fe.zi") "response" else "response" # compute ci, two-ways if (!is.null(ci.lvl) && !is.na(ci.lvl)) ci <- (1 + ci.lvl) / 2 else ci <- .975 # copy object predicted_data <- data_grid add.args <- lapply(match.call(expand.dots = F)$`...`, function(x) x) if ("nsim" %in% names(add.args)) nsim <- eval(add.args[["nsim"]]) else nsim <- 1000 # get predictions prdat <- stats::predict( model, newdata = data_grid, type = pt, ... ) # need back-transformation predicted_data$predicted <- log(as.vector(prdat)) if (type == "fe.zi") { model_frame <- insight::get_data(model) clean_terms <- .clean_terms(terms) newdata <- .data_grid( model, model_frame, terms, value_adjustment = value_adjustment, factor_adjustment = FALSE, show_pretty_message = FALSE, condition = condition ) # Since the zero inflation and the conditional model are working in "opposite # directions", confidence intervals cannot be derived directly from the # "predict()"-function. Thus, confidence intervals for type = "fe.zi" are # based on quantiles of simulated draws from a multivariate normal distribution # (see also _Brooks et al. 2017, pp.391-392_ for details). prdat.sim <- .simulate_predictions(model, newdata, nsim, terms, value_adjustment, condition) if (is.null(prdat.sim) || inherits(prdat.sim, c("error", "simpleError"))) { insight::print_color("Error: Confidence intervals could not be computed.\n", "red") cat("Possibly a polynomial term is held constant (and does not appear in the `terms`-argument). Or try reducing the number of simulations, using argument `nsim` (e.g.
`nsim = 100`).\n") predicted_data$predicted <- as.vector(prdat) predicted_data$conf.low <- NA predicted_data$conf.high <- NA } else { # we need two data grids here: one for all combination of levels from the # model predictors ("newdata"), and one with the current combinations only # for the terms in question ("data_grid"). "sims" has always the same # number of rows as "newdata", but "data_grid" might be shorter. So we # merge "data_grid" and "newdata", add mean and quantiles from "sims" # as new variables, and then later only keep the original observations # from "data_grid" - by this, we avoid unequal row-lengths. sims <- exp(prdat.sim$cond) * (1 - stats::plogis(prdat.sim$zi)) predicted_data <- .join_simulations(data_grid, newdata, as.vector(prdat), sims, ci, clean_terms) if (.obj_has_name(predicted_data, "std.error")) { # copy standard errors attr(predicted_data, "std.error") <- predicted_data$std.error predicted_data <- .remove_column(predicted_data, "std.error") } } } else { # get standard errors from variance-covariance matrix se.pred <- .standard_error_predictions( model = model, prediction_data = data_grid, value_adjustment = value_adjustment, type = type, terms = terms, model_class = model_class, vcov.fun = vcov.fun, vcov.type = vcov.type, vcov.args = vcov.args, condition = condition ) if (!is.null(se.pred)) { se.fit <- se.pred$se.fit predicted_data <- se.pred$prediction_data # CI predicted_data$conf.low <- linv(predicted_data$predicted - stats::qnorm(ci) * se.fit) predicted_data$conf.high <- linv(predicted_data$predicted + stats::qnorm(ci) * se.fit) # copy standard errors attr(predicted_data, "std.error") <- se.fit } else { # CI predicted_data$conf.low <- NA predicted_data$conf.high <- NA } predicted_data$predicted <- linv(predicted_data$predicted) } predicted_data } ggeffects/NEWS.md0000644000176200001440000003012013614007511013277 0ustar liggesusers# ggeffects 0.14.1 ## General * Reduce package dependencies. 
* New package-vignette _(Cluster) Robust Standard Errors_. ## New supported models * `mixor` (package **mixor**), `cgam`, `cgamm` (package **cgam**) ## Bug fixes * Fix CRAN check issues due to latest *emmeans* update. # ggeffects 0.14.0 ## Breaking Changes * The argument `x.as.factor` is considered as less useful and was removed. ## New supported models * `fixest` (package **fixest**), `glmx` (package **glmx**). ## General * Reduce package dependencies. * `plot(rawdata = TRUE)` now also works for objects from `ggemmeans()`. * `ggpredict()` now computes confidence intervals for predictions from `geeglm` models. * For *brmsfit* models with `trials()` as response variable, `ggpredict()` used to choose the median value of trials were the response was hold constant. Now, you can use the `condition`-argument to hold the number of trials constant at different values. * Improve `print()`. ## Bug fixes * Fixed issue with `clmm`-models, when group factor in random effects was numeric. * Raw data is no longer omitted in plots when grouping variable is continuous and added raw data doesn't numerically match the grouping levels (e.g., mean +/- one standard deviation). * Fix CRAN check issues due to latest *geepack* update. # ggeffects 0.13.0 ## Breaking Changes * The use of `emm()` is discouraged, and so it was removed. ## New supported models * `bracl`, `brmultinom` (package **brglm2**) and models from packages **bamlss** and **R2BayesX**. ## General * Updated package dependencies. * `plot()` now uses dodge-position for raw data for categorical x-axis, to align raw data points with points and error bars geoms from predictions. * Updated and re-arranged internal color palette, especially to have a better behaviour when selecting colors from continuous palettes (see `show_pals()`). ## New functions * Added a `vcov()` function to calculate variance-covariance matrix for marginal effects. 
## Changes to Functions

* `ggemmeans()` now also accepts `type = "re"` and `type = "re.zi"`, to add random effects variances to prediction intervals for mixed models.
* The ellipses-argument `...` is now passed down to the `predict()`-method for *gamlss*-objects, so predictions can be computed for sigma, nu and tau as well.

## Bug fixes

* Fixed issue with wrong order of the plot x-axis for `ggeffect()`, when one term was a character vector.

# ggeffects 0.12.0

## Breaking Changes

* The use of `ggaverage()` is discouraged, and so it was removed.
* The name `rprs_values()` is now deprecated; the function is named `values_at()`, and its alias is `representative_values()`.
* The `x.as.factor`-argument defaults to `TRUE`.

## General

* `ggpredict()` now supports cumulative link and ordinal *vglm* models from package **VGAM**.
* More informative error message for *clmm*-models when `terms` included random effects.
* `add.data` is an alias for the `rawdata`-argument in `plot()`.
* `ggpredict()` and `ggemmeans()` now also support predictions for *gam* models from the `ziplss` family.

## Changes to Functions

* Improved `print()`-method for ordinal or cumulative link models.
* The `plot()`-method no longer changes the order of factor levels for groups and facets.
* `pretty_range()` gets a `length`-argument to define the length of intervals to be returned.

## Bug fixes

* Added "population level" to the output of the print-method for *lme* objects.
* Fixed issue with correct identification of gamm/gamm4 models.
* Fixed issue with weighted regression models from *brms*.
* Fixed broken tests due to changes in the forthcoming *effects* update.

# ggeffects 0.11.0

## General

* Revised docs and vignettes - the use of the term _average marginal effects_ was replaced by a less misleading wording, since the functions of **ggeffects** calculate marginal effects at the mean or at representative values, but not average marginal effects.
* Replace references to internal vignettes in docstrings with website-vignettes, so links on the website are no longer broken.
* `values_at()` is an alias for `rprs_values()`.

## New supported models

* `betabin`, `negbin` (package **aod**), `wbm` (package **panelr**)

## Changes to functions

* `ggpredict()` now supports prediction intervals for models from *MCMCglmm*.
* `ggpredict()` gets a `back.transform`-argument, to transform predicted values from log-transformed responses back to their original scale (the default behaviour), or to allow predictions to remain on the log-scale (new).
* `ggpredict()` and `ggemmeans()` can now calculate marginal effects for specific values from up to three terms (i.e. `terms` can be of length four now).
* The `ci.style`-argument from `plot()` now also applies to error bars for categorical variables on the x-axis.

## Bug fixes

* Fixed issue with *glmmTMB* models that included model weights.

# ggeffects 0.10.0

## General

* Better support, including confidence intervals, for some of the already supported model types.
* New package-vignette _Logistic Mixed Effects Model with Interaction Term_.

## New supported models

* `gamlss`, `geeglm` (package **geepack**), `lmrob` and `glmrob` (package **robustbase**), `ols` (package **rms**), `rlmer` (package **robustlmm**), `rq` and `rqss` (package **quantreg**), `tobit` (package **AER**), `survreg` (package **survival**)

## Changes to functions

* The steps for specifying a range of values (e.g. `terms = "predictor [1:10]"`) can now be changed with `by`, e.g. `terms = "predictor [1:10 by=.5]"` (see also vignette _Marginal Effects at Specific Values_).
* Robust standard errors for predictions (see argument `vcov.fun` in `ggpredict()`) now also work for the following model objects: `coxph`, `plm`, `polr` (and probably also `lme` and `gls`, not yet tested).
* `ggpredict()` gets an `interval`-argument, to compute prediction intervals instead of confidence intervals.
* `plot.ggeffects()` now allows different horizontal and vertical jittering for `rawdata` when `jitter` is a numeric vector of length two.

## Bug fixes

* Models with an `AsIs`-conversion of a division of two variables as dependent variable, e.g. `I(amount/frequency)`, should now work.
* `ggpredict()` failed for `MixMod`-objects when `ci.lvl = NA`.

# ggeffects 0.9.0

## General

* Minor revisions to docs and vignettes.
* Reduce package dependencies.
* Better support, including confidence intervals, for some of the already supported model types.
* New package-vignette _Customize Plot Appearance_.

## New supported models

* `ggemmeans()` now supports `type = "fe.zi"` for **glmmTMB**-models, i.e. predicted values are conditioned on the fixed effects and the zero-inflation component of glmmTMB-models.
* `ggpredict()` now supports **MCMCglmm**, **ivreg** and **MixMod** (package **GLMMadaptive**) models.
* `ggemmeans()` now supports **MCMCglmm** and **MixMod** (package **GLMMadaptive**) models.
* `ggpredict()` now computes confidence intervals for **gam** models (package **gam**).

## New functions

* `new_data()`, to create a data frame from all combinations of predictor values. This data frame can typically be used for the `newdata`-argument in `predict()`, in case it is necessary to quickly create an own data frame for this argument.

## Changes to functions

* `ggpredict()` no longer stops when predicted values with confidence intervals for **glmmTMB**- and other zero-inflated models can't be computed with `type = "fe.zi"`, and only returns the predicted values without confidence intervals.
* When `ggpredict()` fails to compute confidence intervals, a more informative error message is given.
* `plot()` gets a `connect.lines`-argument, to connect dots from plots with a discrete x-axis.
## Bug fixes

* `ggpredict()` did not work with **glmmTMB**- and other zero-inflated models, when `type = "fe.zi"` and the model- or zero-inflation formula had a polynomial term that was held constant (i.e. not part of the `terms`-argument).
* Confidence intervals for zero-inflated models and `type = "fe.zi"` could not be computed when the model contained polynomial terms and a _very_ long formula (issue with `deparse()`, cutting off very long formulas).
* The `plot()`-method put different spacing between groups when a numeric factor was used along the x-axis, where the factor levels were non-equal-spaced.
* Minor fixes regarding calculation of predictions from some already supported models.
* Fixed issues with multiple response models of class `lm` in `ggeffects()`.
* Fixed issues with encoding in help-files.

# ggeffects 0.8.0

## General

* Minor changes to meet forthcoming changes in purrr.
* For consistency reasons, both `type = "fe"` and `type = "re"` return population-level predictions for mixed effects models (**lme4**, **glmmTMB**). The difference is that `type = "re"` also takes the random effect variances for prediction intervals into account. Predicted values at specific levels of random effect terms are described in the package-vignettes _Marginal Effects for Random Effects Models_ and _Marginal Effects at Specific Values_.
* Revised docs and vignettes.
* Give a more informative warning for misspelled variable names in the `terms`-argument.
* Added custom (pre-defined) color-palettes that can be used with `plot()`. Use `show_pals()` to show all available palettes.
* Use a more appropriate calculation for confidence intervals of predictions for models with a zero-inflation component.

## New supported models

* `ggpredict()` and `ggeffect()` now support **brms**-models with additional response information (like `trials()`).
* `ggpredict()` now supports **Gam**, **glmmPQL**, **clmm**, and **zerotrunc**-models.
* All models supported by the **emmeans** package should also work with the new `ggemmeans()`-function. Since this function is quite new, there still might be some bugs, though.

## New functions

* `ggemmeans()`, to compute marginal effects by calling `emmeans::emmeans()`.
* `theme_ggeffects()`, which can be used with `ggplot2::theme_set()` to set the **ggeffects**-theme as default plotting theme. This makes it easier to add further theme-modifications like `sjPlot::legend_style()` or `sjPlot::font_size()`.

## Changes to functions

* Added a prediction-type based on simulations (`type = "sim"`) to `ggpredict()`, currently for models of class **glmmTMB** and **merMod**.
* `x.cat` is a new alias for the argument `x.as.factor`.
* The `plot()`-method gets a `ci.style`-argument, to define different styles for the confidence bands for numeric x-axis-terms.
* The `print()`-method gets an `x.lab`-argument to print value labels instead of numeric values if `x` is categorical.
* `emm()` now also supports all prediction-types, like `ggpredict()`.

## Bug fixes

* Fixed issue where confidence intervals could not be computed for variables with very small values that differ only after the second decimal place.
* Fixed issue with `ggeffect()`, which did not work if data had variables with more than 8 digits (fractional part longer than 8 digits).
* Fixed issue with multivariate response models fitted with **brms** or **rstanarm** when argument `ppd = TRUE`.
* Fixed issue with glmmTMB-models for `type = "fe.zi"`, which could mess up the correct order of predicted values for `x`.
* Fixed minor issue with glmmTMB-models for `type = "fe.zi"` or `type = "re.zi"`, when the first terms had the `[all]`-tag.
* Fixed minor issue in the `print()`-method for mixed effects models, when predictions were conditioned on all model terms and adjustment was only done for random effects (output line "adjusted for").
* Fixed issue for mixed models, where confidence intervals were not completely calculated if `terms` included a factor and `contrasts` were set to values other than `contr.treatment`.
* Fixed issue with messed-up order of confidence intervals for `glm`-objects and heteroskedasticity-consistent covariance matrix estimation.
* Fixed issue for **glmmTMB**-models, when variables in the dispersion or zero-inflation formula did not appear in the fixed effects formula.
* The `condition`-argument was not always considered for some model types when calculating confidence intervals for predicted values.
ggeffects/inst/doc/technical_differencepredictemmeans.R

## ----set-options, echo = FALSE-------------------------------------------
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  dev = "png",
  fig.width = 7,
  fig.height = 3.5,
  message = FALSE,
  warning = FALSE
)
options(width = 800)
if (!requireNamespace("magrittr", quietly = TRUE) ||
    !requireNamespace("sjmisc", quietly = TRUE)) {
  knitr::opts_chunk$set(eval = FALSE)
}

## ------------------------------------------------------------------------
library(magrittr)
library(ggeffects)
data(efc)
fit <- lm(barthtot ~ c12hour + neg_c_7, data = efc)

ggpredict(fit, terms = "c12hour")
ggemmeans(fit, terms = "c12hour")

## ------------------------------------------------------------------------
library(sjmisc)
data(efc)
efc$e42dep <- to_label(efc$e42dep)
fit <- lm(barthtot ~ c12hour + neg_c_7 + e42dep, data = efc)

ggpredict(fit, terms = "c12hour")
ggemmeans(fit, terms = "c12hour")

## ------------------------------------------------------------------------
ggpredict(fit, terms = "c12hour")
ggemmeans(fit, terms = "c12hour", condition = c(e42dep = "independent"))

## ------------------------------------------------------------------------
ggemmeans(fit, terms = c("c12hour", "e42dep")) %>% plot()

ggeffects/inst/doc/practical_robustestimation.html

Practical example: (Cluster) Robust Standard Errors

Daniel Lüdecke

2020-01-28

## Registered S3 method overwritten by 'clubSandwich':
##   method    from    
##   bread.mlm sandwich

This vignette demonstrates how to compute confidence intervals based on (cluster-)robust variance-covariance matrices for standard errors.

First, we load the required packages and create a sample data set with a binomial and a continuous variable as predictors, as well as a group factor. To avoid convergence warnings, the continuous variable is standardized.
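A minimal sketch of such a setup might look like this (the object and variable names `d`, `var_binom`, `var_cont` and `grp` are illustrative assumptions, not necessarily those used in the vignette):

```r
set.seed(123)

# one binary and one continuous predictor for a binomial outcome,
# plus a group factor that will serve as clustering variable
d <- data.frame(
  outcome = rbinom(100, 1, 0.5),
  var_binom = as.factor(rbinom(100, 1, 0.3)),
  var_cont = rnorm(100, 10, 7),
  grp = sample(letters[1:4], size = 100, replace = TRUE)
)

# standardize the continuous predictor to avoid convergence warnings
d$var_cont <- as.vector(scale(d$var_cont))

m <- glm(outcome ~ var_binom + var_cont, data = d, family = binomial())
```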

Predictions with cluster-robust standard errors

The last example shows how to define cluster-robust standard errors. These are based on clubSandwich::vcovCR(). Thus, vcov.fun = "vcovCR" is always required when estimating cluster-robust standard errors. clubSandwich::vcovCR() also offers different estimation types, which must be specified in vcov.type. Furthermore, clubSandwich::vcovCR() requires the cluster-argument, which must be specified in vcov.args:
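Put together, such a call might be sketched as follows (assuming a fitted model `m` and a data frame `d` with a cluster variable `grp`; the term `"var_cont"` and the estimation type `"CR0"` are illustrative choices):

```r
library(ggeffects)

# confidence intervals based on cluster-robust standard errors:
# vcov.fun selects clubSandwich::vcovCR(), vcov.type picks the
# CR estimation type, and vcov.args passes the clustering variable
ggpredict(
  m,
  terms = "var_cont",
  vcov.fun = "vcovCR",
  vcov.type = "CR0",
  vcov.args = list(cluster = d$grp)
)
```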

ggeffects/inst/doc/technical_stata.R

## ----set-options, echo = FALSE-------------------------------------------
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  dev = "png",
  fig.width = 7,
  fig.height = 5,
  message = FALSE,
  warning = FALSE
)
if (!requireNamespace("magrittr", quietly = TRUE)) {
  knitr::opts_chunk$set(eval = FALSE)
}

## ------------------------------------------------------------------------
library(magrittr)
set.seed(5)

data <- data.frame(
  outcome = rbinom(100, 1, 0.5),
  var1 = rbinom(100, 1, 0.1),
  var2 = rnorm(100, 10, 7)
)

m <- glm(
  outcome ~ var1 * var2,
  data = data,
  family = binomial(link = "logit")
)

## ----eval=FALSE----------------------------------------------------------
# use data.dta, clear
# quietly logit outcome c.var1##c.var2
# quietly margins, at(var2 = (-8(0.5)28) var1 = (0 1))
# marginsplot

## ----out.width="100%", echo=FALSE----------------------------------------
knitr::include_graphics("vignette-stata-1.png", dpi = 72)

## ------------------------------------------------------------------------
library(ggeffects)
ggpredict(m, c("var2", "var1")) %>% plot()

ggeffects/inst/doc/introduction_randomeffects.Rmd

---
title: "Introduction: Marginal Effects for Random Effects Models"
author: "Daniel Lüdecke"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Introduction: Marginal Effects for Random Effects Models}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r set-options, echo = FALSE}
knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 4, message = FALSE, warning = FALSE)
options(width = 800)
if (!requireNamespace("ggplot2", quietly = TRUE) ||
    !requireNamespace("lme4", quietly = TRUE) ||
    !requireNamespace("glmmTMB", quietly = TRUE) ||
    !requireNamespace("sjlabelled", quietly = TRUE)) {
  knitr::opts_chunk$set(eval = FALSE)
}
```

This vignette shows how to calculate marginal effects that take the random-effect variances for mixed models into account.

## Marginal effects for mixed effects models

Basically, the type of predictions, i.e.
whether to account for the uncertainty of random effects or not, can be set with the `type`-argument. The default, `type = "fe"`, means that predictions are on the population-level and do not account for the random effect variances. Intervals are _confidence intervals_ for the predicted values.

```{r}
library(ggeffects)
library(lme4)

data(sleepstudy)
m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)

pr <- ggpredict(m, "Days")
pr
plot(pr)
```

When `type = "re"`, the predicted values _are still on the population-level_. However, the random effect variances are taken into account, meaning that the intervals are actually _prediction intervals_ and become larger. More technically speaking, `type = "re"` accounts for the uncertainty of the fixed effects _conditional on the estimates_ of the random-effect variances and conditional modes (BLUPs). The random-effect variance is the _mean_ random-effect variance. The calculation is based on the proposal from _Johnson et al. 2014_, which is also implemented in functions like [`performance::r2()`](https://easystats.github.io/performance/reference/r2_nakagawa.html) or [`insight::get_variance()`](https://easystats.github.io/insight/reference/get_variance.html) to get r-squared values or random effect variances for mixed models with more complex random effects structures.

As can be seen, compared to the previous example with `type = "fe"`, the predicted values are identical (both are on the population-level). However, the standard errors, and thus the resulting confidence (or prediction) intervals, are much larger.

```{r}
pr <- ggpredict(m, "Days", type = "re")
pr
plot(pr)
```

The reason why both `type = "fe"` and `type = "re"` return predictions at the population-level is that `ggpredict()` returns predicted values of the response _at specific levels_ of given model predictors, which are defined in the data frame that is passed to the `newdata`-argument (of `predict()`).
The data frame requires data from _all_ model terms, including random effect terms. This again requires choosing certain levels or values for each random effect term, or setting those terms to zero or `NA` (for population-level predictions). Since there is no general rule for which level(s) of the random effect terms to choose in order to represent the random effects structure in the data, using the population-level seems the clearest and most consistent approach.

To get predicted values for a specific level of the random effect term, simply define this level in the `condition`-argument.

```{r}
ggpredict(m, "Days", type = "re", condition = c(Subject = 330))
```

Finally, it is possible to obtain predicted values by simulating from the model, where predictions are based on `simulate()`.

```{r}
ggpredict(m, "Days", type = "sim")
```

## Marginal effects for zero-inflated mixed models

For zero-inflated mixed effects models, typically fitted with the **glmmTMB** or **GLMMadaptive** packages, predicted values can be conditioned on

* the fixed effects of the conditional model only (`type = "fe"`)
* the fixed effects and zero-inflation component (`type = "fe.zi"`)
* the fixed effects of the conditional model only (population-level), taking the random-effect variances into account (`type = "re"`)
* the fixed effects and zero-inflation component (population-level), taking the random-effect variances into account (`type = "re.zi"`)
* all model parameters (`type = "sim"`)

```{r}
library(glmmTMB)
data(Salamanders)

m <- glmmTMB(
  count ~ spp + mined + (1 | site),
  ziformula = ~ spp + mined,
  family = truncated_poisson,
  data = Salamanders
)
```

Similar to mixed models without a zero-inflation component, `type = "fe"` and `type = "re"` for **glmmTMB**-models (with zero-inflation) both return predictions on the population-level, where the latter option accounts for the uncertainty of the random effects.
In short, `predict(..., type = "link")` is called (however, predicted values are back-transformed to the response scale).

```{r}
ggpredict(m, "spp")
ggpredict(m, "spp", type = "re")
```

For `type = "fe.zi"`, the predicted response value is the expected value `mu*(1-p)` _without conditioning_ on random effects. Since the zero-inflation and the conditional model work in "opposite directions", a higher expected value for the zero-inflation means a lower response, but a higher value for the conditional model means a higher response. While it is possible to calculate predicted values with `predict(..., type = "response")`, standard errors and confidence intervals cannot be derived directly from the `predict()`-function. Thus, confidence intervals for `type = "fe.zi"` are based on quantiles of simulated draws from a multivariate normal distribution (see also _Brooks et al. 2017, pp.391-392_ for details).

```{r}
ggpredict(m, "spp", type = "fe.zi")
```

For `type = "re.zi"`, the predicted response value is the expected value `mu*(1-p)`, accounting for the random-effect variances. Intervals are calculated in the same way as for `type = "fe.zi"`, except that the mean random effect variance is considered and thus _prediction intervals_ rather than confidence intervals are returned.

```{r}
ggpredict(m, "spp", type = "re.zi")
```

Finally, it is possible to obtain predicted values by simulating from the model, where predictions are based on `simulate()` (see _Brooks et al. 2017, pp.392-393_ for details). To achieve this, use `type = "sim"`.

```{r}
ggpredict(m, "spp", type = "sim")
```

## Marginal effects for each level of random effects

Marginal effects can also be calculated for each group level in mixed models. Simply add the name of the related random effects term to the `terms`-argument, and set `type = "re"`. In the following example, we fit a linear mixed model and first simply plot the marginal effects, _not_ conditioned on random-effect variances.
```{r} library(sjlabelled) data(efc) efc$e15relat <- as_label(efc$e15relat) m <- lmer(neg_c_7 ~ c12hour + c160age + c161sex + (1 | e15relat), data = efc) me <- ggpredict(m, terms = "c12hour") plot(me) ``` Changing the type to `type = "re"` still returns population-level predictions by default. Recall that the major difference between `type = "fe"` and `type = "re"` is the uncertainty in the variance parameters. This leads to larger confidence intervals (i.e. prediction intervals) for marginal effects with `type = "re"`. ```{r} me <- ggpredict(m, terms = "c12hour", type = "re") plot(me) ``` To compute marginal effects for each grouping level, add the related random term to the `terms`-argument. In this case, confidence intervals are not calculated, but marginal effects are conditioned on each group level of the random effects. ```{r} me <- ggpredict(m, terms = c("c12hour", "e15relat"), type = "re") plot(me) ``` Marginal effects, conditioned on random effects, can also be calculated for specific levels only. Add the related values into brackets after the variable name in the `terms`-argument. ```{r} me <- ggpredict(m, terms = c("c12hour", "e15relat [child,sibling]"), type = "re") plot(me) ``` The most complex plot in this scenario would be a term (`c12hour`) at certain values of two other terms (`c161sex`, `c160age`) for specific levels of random effects (`e15relat`), so we have four variables in the `terms`-argument. ```{r fig.height=6} me <- ggpredict( m, terms = c("c12hour", "c161sex", "c160age", "e15relat [child,sibling]"), type = "re" ) plot(me) ``` If the group factor has too many levels, you can also take a random sample of all possible levels and plot the marginal effects for this subsample of group levels. To do this, use `term = " [sample=n]"`. 
```{r} set.seed(123) m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy) me <- ggpredict(m, terms = c("Days", "Subject [sample=7]"), type = "re") plot(me) ``` You can also add the observed data points for each group using `add.data = TRUE`. ```{r} plot(me, add.data = TRUE) ``` # References Brooks ME, Kristensen K, Benthem KJ van, Magnusson A, Berg CW, Nielsen A, et al. glmmTMB Balances Speed and Flexibility Among Packages for Zero-inflated Generalized Linear Mixed Modeling. The R Journal. 2017;9: 378–400. Johnson PC, O'Hara RB. 2014. Extension of Nakagawa & Schielzeth's R2GLMM to random slopes models. Methods Ecol Evol, 5: 944-946. (doi: 10.1111/2041-210X.12225) ggeffects/inst/doc/practical_logisticmixedmodel.R0000644000176200001440000002702613614017655022002 0ustar liggesusers## ----set-options, echo = FALSE------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 5, message = FALSE, warning = FALSE) if (!requireNamespace("magrittr", quietly = TRUE) || !requireNamespace("lme4", quietly = TRUE) || !requireNamespace("splines", quietly = TRUE) || !requireNamespace("sjmisc", quietly = TRUE)) { knitr::opts_chunk$set(eval = FALSE) } ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- library(magrittr) library(ggeffects) library(sjmisc) library(lme4) library(splines) set.seed(123) dat <- data.frame( outcome = rbinom(n = 100, size = 1, prob = 0.35), var_binom = as.factor(rbinom(n = 100, size = 1, prob = 0.2)), var_cont = rnorm(n = 100, mean = 10, sd = 7), group = sample(letters[1:4], size = 100, replace = TRUE) ) dat$var_cont <- sjmisc::std(dat$var_cont) ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- m1 <- glmer( outcome ~ 
var_binom + var_cont + (1 | group), data = dat, family = binomial(link = "logit") ) ## ----message = TRUE----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ggpredict(m1, "var_binom") ggpredict(m1, "var_cont") ## ----message = FALSE---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- # save marginal effects in an object and plot me <- ggpredict(m1, "var_binom") plot(me) # plot using the pipe ggpredict(m1, "var_cont") %>% plot() ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- m2 <- glmer( outcome ~ var_binom * var_cont + (1 | group), data = dat, family = binomial(link = "logit") ) ## ----message = TRUE----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ggpredict(m2, c("var_cont", "var_binom")) %>% plot() ggpredict(m2, c("var_binom", "var_cont")) %>% plot() ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- m3 <- glmer( outcome ~ var_binom * poly(var_cont, degree = 2, raw = TRUE) + (1 | group), data = dat, family = binomial(link = "logit") ) ## ----message = TRUE----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ggpredict(m3, c("var_cont", "var_binom")) %>% plot() ## ----message = 
TRUE----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ggpredict(m3, c("var_cont [all]", "var_binom")) %>% plot() ## ----message = FALSE---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- set.seed(321) dat <- data.frame( outcome = rbinom(n = 100, size = 1, prob = 0.35), var_binom = rbinom(n = 100, size = 1, prob = 0.5), var_cont = rnorm(n = 100, mean = 10, sd = 7), var_cont2 = rnorm(n = 100, mean = 5, sd = 2), group = sample(letters[1:4], size = 100, replace = TRUE) ) m4 <- glmer( outcome ~ var_binom * poly(var_cont, degree = 2) * 
ns(var_cont2, df = 3) + (1 | group), data = dat, family = binomial(link = "logit") ) ## ----message = TRUE----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ggpredict(m4, c("var_cont [all]", "var_cont2", "var_binom")) %>% plot() ggeffects/inst/doc/introduction_plotmethod.html0000644000176200001440000107063513614017637021623 0ustar liggesusers Introduction: Plotting Marginal Effects

Introduction: Plotting Marginal Effects

Daniel Lüdecke

2020-01-28

plot()-method

This vignette demonstrates the plot()-method of the ggeffects-package. It is recommended to read the general introduction first, if you haven’t done so yet.

If you don’t want to write your own ggplot-code, ggeffects has a plot()-method with some convenient defaults, which allows quickly creating ggplot-objects. plot() has some arguments to tweak the plot-appearance. For instance, ci allows you to show or hide confidence bands (or error bars, for discrete variables), facets allows you to create facets even for just one grouping variable, or colors allows you to quickly choose from some color-palettes, including black & white colored plots. Use add.data to add the raw data points to the plot.
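As a sketch of these arguments, mirroring the model from the R script that accompanies this vignette (the argument combinations shown are illustrative):

```r
library(ggeffects)
library(sjmisc)

data(efc)
efc$c172code <- to_label(efc$c172code)
fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

dat <- ggpredict(fit, terms = c("c12hour", "c172code"))

# black & white palette, no confidence bands
plot(dat, colors = "bw", ci = FALSE)

# add the raw data points to the plot
plot(dat, add.data = TRUE)
```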

ggeffects supports labelled data and the plot()-method automatically sets titles, axis - and legend-labels depending on the value and variable labels of the data.

Create Panel Plots for more than three Terms

For three grouping variables (i.e. if terms is of length four), one plot per panel (for the values of the fourth variable in terms) is created, and a single, integrated plot is produced by default. Use one.plot = FALSE to return one plot per panel.
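For example, with the linear model from this vignette's accompanying R script (refit here so the snippet is self-contained):

```r
library(ggeffects)
library(sjmisc)

data(efc)
efc$c172code <- to_label(efc$c172code)
fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

# the fourth variable in `terms` defines the panels
dat <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex", "neg_c_7"))

# single, integrated plot (the default), or one plot per panel
plot(dat, one.plot = TRUE)
plot(dat, one.plot = FALSE)
```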

Change appearance of confidence bands

In some plots, the confidence bands are not represented by a shaded area (ribbons), but rather by error bars (with lines), or by dashed or dotted lines. Use ci.style = "errorbar", ci.style = "dash" or ci.style = "dot" to change the style of the confidence bands.

Dotted Error Bars

The style of error bars for plots with a categorical x-axis can also be changed. By default, these are “error bars”, but ci.style = "dot" or ci.style = "dash" work as well.
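This mirrors the categorical example in the vignette's R script:

```r
library(ggeffects)
library(sjmisc)

data(efc)
efc$c172code <- to_label(efc$c172code)
fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

# categorical predictor on the x-axis
dat <- ggpredict(fit, terms = "c172code")

# dotted intervals instead of the default error bars
plot(dat, ci.style = "dot")
```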

Log-transform y-axis for binomial models

For binomial models, the y-axis indicates the predicted probabilities of an event. In this case, error bars are not symmetrical.

Here you can use log.y to log-transform the y-axis. The plot()-method will automatically choose axis breaks and limits that fit well to the value range and log-scale.

Control y-axis appearance

Furthermore, arguments in ... are passed down to ggplot::scale_y_continuous() (resp. ggplot::scale_y_log10(), if log.y = TRUE), so you can control the appearance of the y-axis.
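Continuing the binomial example from this vignette's R script, a sketch (the break values passed through `...` are illustrative):

```r
library(ggeffects)
library(lme4)

m <- glm(
  cbind(incidence, size - incidence) ~ period,
  family = binomial,
  data = lme4::cbpp
)
dat <- ggpredict(m, "period")

# log-transform the y-axis; breaks and limits are chosen automatically
plot(dat, log.y = TRUE)

# arguments in `...` are passed down to the y-axis scale,
# so its appearance can be controlled directly
plot(dat, log.y = TRUE, breaks = c(0.05, 0.1, 0.15, 0.2))
```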

Survival models

ggpredict() also supports coxph-models from the survival-package and is able to either plot risk-scores (the default), probabilities of survival (type = "surv") or cumulative hazards (type = "cumhaz").

Since probabilities of survival and cumulative hazards are changing across time, the time-variable is automatically used as x-axis in such cases, so the terms-argument only needs up to two variables.
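A minimal sketch, assuming the `lung` data that ships with the survival package (the model formula is illustrative):

```r
library(ggeffects)
library(survival)

# Cox proportional hazards model
m <- coxph(Surv(time, status) ~ sex + age, data = lung)

# probabilities of survival over time; the time variable is
# automatically used as the x-axis
pr <- ggpredict(m, terms = "sex", type = "surv")
plot(pr)

# cumulative hazard instead
plot(ggpredict(m, terms = "sex", type = "cumhaz"))
```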

Custom color palettes

The ggeffects-package has a few pre-defined color-palettes that can be used with the colors-argument. Use show_pals() to see all available palettes.

Here are two examples showing how to use pre-defined colors:
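A sketch of both steps; the palette name below is only an illustration — check the output of show_pals() for the names that are actually available:

```r
library(ggeffects)
library(sjmisc)

data(efc)
efc$c172code <- to_label(efc$c172code)
fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

# list all available pre-defined palettes
show_pals()

# use one of them via the `colors`-argument
dat <- ggpredict(fit, terms = c("c12hour", "c172code"))
plot(dat, colors = "metro")  # palette name for illustration only
```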

ggeffects/inst/doc/technical_stata.html0000644000176200001440000031210113614017661017750 0ustar liggesusers Technical Details: Different Output between Stata and ggeffects

Technical Details: Different Output between Stata and ggeffects

Daniel Lüdecke

2020-01-28

Why is the output from Stata different from the output from ggeffect?

Stata’s equivalent to the marginal effects produced by ggeffects is the margins-command. However, the results are not always identical. For models from non-Gaussian families, the point estimates for the marginal effects are identical, but the confidence intervals differ.

Here is an explanation of why there is a difference. First, we fit a logistic regression model.
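The model is the one from the R script accompanying this vignette (the simulated data below mirrors that script; the random seed used there is not part of this excerpt):

```r
library(ggeffects)

# simulate data as in the accompanying R script
data <- data.frame(
  outcome = rbinom(100, 1, 0.5),
  var1 = rbinom(100, 1, 0.1),
  var2 = rnorm(100, 10, 7)
)

# logistic regression with an interaction term
m <- glm(
  outcome ~ var1 * var2,
  data = data,
  family = binomial(link = "logit")
)
```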

Example with graphical output

The Stata plot

This is the code in Stata to produce a marginal effects plot.

The resulting image looks like this.

The ggeffects plot

When we use ggeffects, the plot slightly differs.

As we can see, the confidence intervals in the Stata plot fall outside the plausible range of [0, 1], i.e. the Stata output suggests uncertainty that includes probabilities higher than 1 and lower than 0, while ggpredict() computes confidence intervals within the possible range.

Conclusion

It seems like Stata is getting the confidence intervals wrong. Predictions and standard errors returned in Stata are on the (transformed) response scale. Obviously, the confidence intervals are then computed by estimate +/- 1.96 * standard error, which may lead to confidence intervals that are out of reasonable bounds (e.g. above 1 or below 0 for predicted probabilities).

The transformed estimate (on the response scale) is always between 0 and 1, and the same is true for the transformed standard errors. However, adding or subtracting approx. 2 * the transformed SE to or from the transformed estimate no longer ensures that the confidence intervals are within the correct range.

The more precise way to do the calculation is to calculate estimates, standard errors and confidence intervals on the (untransformed) scale of the linear predictor and then back-transform.
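A sketch of this calculation in base R, using the same kind of model as above (the prediction grid mirrors the Stata `margins` call; `qnorm(0.975)` gives the usual 1.96):

```r
# refit the example model so the snippet is self-contained
data <- data.frame(
  outcome = rbinom(100, 1, 0.5),
  var1 = rbinom(100, 1, 0.1),
  var2 = rnorm(100, 10, 7)
)
m <- glm(outcome ~ var1 * var2, data = data, family = binomial(link = "logit"))

# predictions on the (untransformed) scale of the linear predictor
nd <- expand.grid(var1 = c(0, 1), var2 = seq(-8, 28, by = 0.5))
pr <- predict(m, newdata = nd, type = "link", se.fit = TRUE)

# compute the interval on the link scale, then back-transform;
# the bounds are now guaranteed to stay within [0, 1]
nd$predicted <- plogis(pr$fit)
nd$conf.low  <- plogis(pr$fit - qnorm(0.975) * pr$se.fit)
nd$conf.high <- plogis(pr$fit + qnorm(0.975) * pr$se.fit)
```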

ggeffects/inst/doc/introduction_plotmethod.R0000644000176200001440000004664513614017636021062 0ustar liggesusers## ----set-options, echo = FALSE------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 4, message = FALSE, warning = FALSE) options(width = 800) if (!requireNamespace("ggplot2", quietly = TRUE) || !requireNamespace("lme4", quietly = TRUE) || !requireNamespace("survival", quietly = TRUE) || !requireNamespace("sjmisc", quietly = TRUE)) { knitr::opts_chunk$set(eval = FALSE) } ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- library(ggeffects) library(sjmisc) data(efc) efc$c172code <- to_label(efc$c172code) fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc) ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- dat <- ggpredict(fit, terms = c("c12hour", "c172code")) plot(dat, facet = TRUE) ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- # don't use facets, b/w figure, w/o confidence bands plot(dat, colors = "bw", ci = FALSE) ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- dat <- ggpredict(fit, terms = c("c12hour", "c172code")) plot(dat, add.data = TRUE) ## 
-----------------------------------------------------------------------------
# for three variables, automatic facetting
dat <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex"))
plot(dat)

## ----------------------------------------------------------------------------
# categorical variables have errorbars
dat <- ggpredict(fit, terms = c("c172code", "c161sex"))
plot(dat)

## ----------------------------------------------------------------------------
# point-geoms for discrete x-axis can be connected with lines
plot(dat, connect.lines = TRUE)

## ----fig.height = 8----------------------------------------------------------
# for four variables, automatic facetting and integrated panel
dat <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex", "neg_c_7"))
# use 'one.plot = FALSE' for returning multiple single plots
plot(dat, one.plot = TRUE)

## ----------------------------------------------------------------------------
# dashed lines for CI
dat <- ggpredict(fit, terms = "c12hour")
plot(dat, ci.style = "dash")

## ----------------------------------------------------------------------------
# facet by group
dat <- ggpredict(fit, terms = c("c12hour", "c172code"))
plot(dat, facet = TRUE, ci.style = "errorbar", dot.size = 1.5)

## ----------------------------------------------------------------------------
dat <- ggpredict(fit, terms = "c172code")
plot(dat, ci.style = "dot")

## ----------------------------------------------------------------------------
library("lme4")
m <- glm(
  cbind(incidence, size - incidence) ~ period,
  family = binomial,
  data = lme4::cbpp
)
dat <- ggpredict(m, "period")
# normal plot, asymmetrical error bars
plot(dat)

## ----------------------------------------------------------------------------
# plot with log-transformed y-axis
plot(dat, log.y = TRUE)

## ----------------------------------------------------------------------------
# plot with log-transformed y-axis, modify breaks
plot(
  dat,
  log.y = TRUE,
  breaks = c(.05, .1, .15, .2, .25, .3),
  limits = c(.01, .3)
)

## ----------------------------------------------------------------------------
data("lung", package = "survival")
# remove category 3 (outlier, not nice in the plot)
lung <- subset(lung, subset = ph.ecog %in% 0:2)
lung$sex <- factor(lung$sex, labels = c("male", "female"))
lung$ph.ecog <- factor(lung$ph.ecog, labels = c("good", "ok", "limited"))
m <- survival::coxph(survival::Surv(time, status) ~ sex + age + ph.ecog, data = lung)
# predicted risk-scores
pr <- ggpredict(m, c("sex", "ph.ecog"))
plot(pr)

## ----------------------------------------------------------------------------
# probability of survival
pr <- ggpredict(m, c("sex", "ph.ecog"), type = "surv")
plot(pr)

## ----------------------------------------------------------------------------
# cumulative hazards
pr <- ggpredict(m, c("sex", "ph.ecog"), type = "cumhaz")
plot(pr)

## ----------------------------------------------------------------------------
show_pals()

## ----------------------------------------------------------------------------
dat <- ggpredict(fit, terms = c("c12hour", "c172code"))
plot(dat, facet = TRUE, colors = "circus")

## 
----------------------------------------------------------------------------
dat <- ggpredict(fit, terms = c("c172code", "c12hour [quart]"))
plot(dat, colors = "hero", dodge = 0.4) # increase space between error bars
ggeffects/inst/doc/introduction_plotmethod.Rmd0000644000176200001440000001554413614010064021361 0ustar liggesusers
---
title: "Introduction: Plotting Marginal Effects"
author: "Daniel Lüdecke"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Introduction: Plotting Marginal Effects}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r set-options, echo = FALSE}
knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 4, message = FALSE, warning = FALSE)
options(width = 800)
if (!requireNamespace("ggplot2", quietly = TRUE) ||
    !requireNamespace("lme4", quietly = TRUE) ||
    !requireNamespace("survival", quietly = TRUE) ||
    !requireNamespace("sjmisc", quietly = TRUE)) {
  knitr::opts_chunk$set(eval = FALSE)
}
```

# plot()-method

This vignette demonstrates the `plot()`-method of the **ggeffects**-package. It is recommended to read the [general introduction](ggeffects.html) first, if you haven't done so yet.
If you don't want to write your own ggplot-code, **ggeffects** has a `plot()`-method with some convenient defaults, which allows quickly creating ggplot-objects. `plot()` has some arguments to tweak the plot-appearance. For instance, `ci` allows you to show or hide confidence bands (or error bars, for discrete variables), `facets` allows you to create facets even for just one grouping variable, and `colors` allows you to quickly choose from some color-palettes, including black & white colored plots. Use `add.data` to add the raw data points to the plot.

**ggeffects** supports [labelled data](https://strengejacke.github.io/sjlabelled/) and the `plot()`-method automatically sets titles, axis- and legend-labels depending on the value and variable labels of the data.

```{r}
library(ggeffects)
library(sjmisc)
data(efc)
efc$c172code <- to_label(efc$c172code)
fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)
```

## Facet by Group

```{r}
dat <- ggpredict(fit, terms = c("c12hour", "c172code"))
plot(dat, facet = TRUE)
```

## No Facets, in Black & White

```{r}
# don't use facets, b/w figure, w/o confidence bands
plot(dat, colors = "bw", ci = FALSE)
```

## Add Data Points to Plot

```{r}
dat <- ggpredict(fit, terms = c("c12hour", "c172code"))
plot(dat, add.data = TRUE)
```

## Automatic Facetting

```{r}
# for three variables, automatic facetting
dat <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex"))
plot(dat)
```

## Automatic Selection of Error Bars or Confidence Bands

```{r}
# categorical variables have errorbars
dat <- ggpredict(fit, terms = c("c172code", "c161sex"))
plot(dat)
```

## Connect Discrete Data Points with Lines

```{r}
# point-geoms for discrete x-axis can be connected with lines
plot(dat, connect.lines = TRUE)
```

## Create Panel Plots for more than three Terms

For three grouping variables (i.e.
if `terms` is of length four), one plot per `panel` (the values of the fourth variable in `terms`) is created, and a single, integrated plot is produced by default. Use `one.plot = FALSE` to return one plot per panel.

```{r fig.height = 8}
# for four variables, automatic facetting and integrated panel
dat <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex", "neg_c_7"))
# use 'one.plot = FALSE' for returning multiple single plots
plot(dat, one.plot = TRUE)
```

# Change appearance of confidence bands

In some plots, the confidence bands are not represented by a shaded area (ribbons), but rather by error bars (with lines), dashed or dotted lines. Use `ci.style = "errorbar"`, `ci.style = "dash"` or `ci.style = "dot"` to change the style of confidence bands.

## Dashed Lines for Confidence Intervals

```{r}
# dashed lines for CI
dat <- ggpredict(fit, terms = "c12hour")
plot(dat, ci.style = "dash")
```

## Error Bars for Continuous Variables

```{r}
# facet by group
dat <- ggpredict(fit, terms = c("c12hour", "c172code"))
plot(dat, facet = TRUE, ci.style = "errorbar", dot.size = 1.5)
```

## Dotted Error Bars

The style of error bars for plots with a categorical x-axis can also be changed. By default, these are "error bars", but `ci.style = "dot"` or `ci.style = "dash"` work as well.

```{r}
dat <- ggpredict(fit, terms = "c172code")
plot(dat, ci.style = "dot")
```

# Log-transform y-axis for binomial models

For binomial models, the y-axis indicates the predicted probabilities of an event. In this case, error bars are not symmetrical.

```{r}
library("lme4")
m <- glm(
  cbind(incidence, size - incidence) ~ period,
  family = binomial,
  data = lme4::cbpp
)
dat <- ggpredict(m, "period")
# normal plot, asymmetrical error bars
plot(dat)
```

Here you can use `log.y` to log-transform the y-axis. The `plot()`-method will automatically choose axis breaks and limits that fit well to the value range and log-scale.
```{r}
# plot with log-transformed y-axis
plot(dat, log.y = TRUE)
```

# Control y-axis appearance

Furthermore, arguments in `...` are passed down to `ggplot2::scale_y_continuous()` (resp. `ggplot2::scale_y_log10()`, if `log.y = TRUE`), so you can control the appearance of the y-axis.

```{r}
# plot with log-transformed y-axis, modify breaks
plot(
  dat,
  log.y = TRUE,
  breaks = c(.05, .1, .15, .2, .25, .3),
  limits = c(.01, .3)
)
```

# Survival models

`ggpredict()` also supports `coxph`-models from the **survival**-package and is able to plot either risk-scores (the default), probabilities of survival (`type = "surv"`) or cumulative hazards (`type = "cumhaz"`). Since probabilities of survival and cumulative hazards change across time, the time-variable is automatically used as the x-axis in such cases, so the `terms`-argument only needs up to two variables.

```{r}
data("lung", package = "survival")
# remove category 3 (outlier, not nice in the plot)
lung <- subset(lung, subset = ph.ecog %in% 0:2)
lung$sex <- factor(lung$sex, labels = c("male", "female"))
lung$ph.ecog <- factor(lung$ph.ecog, labels = c("good", "ok", "limited"))
m <- survival::coxph(survival::Surv(time, status) ~ sex + age + ph.ecog, data = lung)
# predicted risk-scores
pr <- ggpredict(m, c("sex", "ph.ecog"))
plot(pr)
```

```{r}
# probability of survival
pr <- ggpredict(m, c("sex", "ph.ecog"), type = "surv")
plot(pr)
```

```{r}
# cumulative hazards
pr <- ggpredict(m, c("sex", "ph.ecog"), type = "cumhaz")
plot(pr)
```

# Custom color palettes

The **ggeffects**-package has a few pre-defined color-palettes that can be used with the `colors`-argument. Use `show_pals()` to see all available palettes.
```{r}
show_pals()
```

Here are two examples showing how to use pre-defined colors:

```{r}
dat <- ggpredict(fit, terms = c("c12hour", "c172code"))
plot(dat, facet = TRUE, colors = "circus")
```

```{r}
dat <- ggpredict(fit, terms = c("c172code", "c12hour [quart]"))
plot(dat, colors = "hero", dodge = 0.4) # increase space between error bars
```
ggeffects/inst/doc/introduction_effectsatvalues.Rmd0000644000176200001440000002574613614007734022375 0ustar liggesusers
---
title: "Introduction: Marginal Effects at Specific Values"
author: "Daniel Lüdecke"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Introduction: Marginal Effects at Specific Values}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r set-options, echo = FALSE}
knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 4, message = FALSE, warning = FALSE)
options(width = 800)
if (!requireNamespace("ggplot2", quietly = TRUE) ||
    !requireNamespace("lme4", quietly = TRUE) ||
    !requireNamespace("sjlabelled", quietly = TRUE) ||
    !requireNamespace("sjmisc", quietly = TRUE)) {
  knitr::opts_chunk$set(eval = FALSE)
}
```

# Marginal effects at specific values or levels

This vignette shows how to calculate marginal effects at specific values or levels for the terms of interest. It is recommended to read the [general introduction](ggeffects.html) first, if you haven't done so yet.

The `terms`-argument not only defines the model terms of interest, but each model term can be limited to certain values. This allows computing and plotting marginal effects for (grouping) terms at specific values only, or defining values for the main effect of interest. There are several options to define these values, which should always be placed in square brackets directly after the term name and can vary for each model term.

1. Concrete values are separated by a comma: `terms = "c172code [1,3]"`.
For factors, you could also use factor levels, e.g. `terms = "Species [setosa,versicolor]"`.
2. Ranges are specified with a colon: `terms = c("c12hour [30:80]", "c172code [1,3]")`. This would plot all values from 30 to 80 for the variable _c12hour_. By default, the step size is 1, i.e. `[1:4]` would create the range `1, 2, 3, 4`. You can choose different step sizes with `by`, e.g. `[1:4 by=.5]`.
3. Convenient shortcuts to calculate common values like mean +/- 1 SD (`terms = "c12hour [meansd]"`), quartiles (`terms = "c12hour [quart]"`) or minimum and maximum values (`terms = "c12hour [minmax]"`). See `values_at()` for the different options.
4. A function name. The function is then applied to all unique values of the indicated variable, e.g. `terms = "hp [exp]"`. You can also define your own functions and pass their names to the `terms`-values, e.g. `terms = "hp [own_function]"`.
5. If the _first_ variable specified in `terms` is a _numeric_ vector, for which no specific values are given, a "pretty range" is calculated (see `pretty_range()`), to avoid memory allocation problems for vectors with many unique values. To select all values, use the `[all]`-tag, e.g. `terms = "mpg [all]"`. If a _numeric_ vector is specified as _second_ or _third_ variable in `terms` (i.e. if this vector represents a grouping structure), representative values (see `values_at()`) are chosen, which is typically mean +/- SD.
6. To create a pretty range that should be smaller or larger than the default range (i.e. if no specific values would be given), use the `n`-tag, e.g. `terms = "age [n=5]"` or `terms = "age [n = 12]"`. Larger values for `n` return a larger range of predicted values.
7. Especially useful for plotting group levels of random effects with many levels is the `sample`-option, e.g. `terms = "Subject [sample=9]"`, which will sample nine values from all possible values of the variable `Subject`.
## Specific values and value range

```{r}
library(ggeffects)
library(ggplot2)
data(efc)
fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

mydf <- ggpredict(fit, terms = c("c12hour [30:80]", "c172code [1,3]"))
mydf
ggplot(mydf, aes(x, predicted, colour = group)) + geom_line()
```

Defining value ranges is especially useful when variables are, for instance, log-transformed. `ggpredict()` then typically only uses the range of the log-transformed variable, which is in most cases not what we want. In such situations, specify the range in the `terms`-argument.

```{r}
data(mtcars)
mpg_model <- lm(mpg ~ log(hp), data = mtcars)

# x-values and predictions based on the log(hp)-values
ggpredict(mpg_model, "hp")

# x-values and predictions based on hp-values from 50 to 150
ggpredict(mpg_model, "hp [50:150]")
```

By default, the step size for a range is 1, like `50, 51, 52, ...`. If you need a different step size, use `by=` inside the brackets, e.g. `"hp [50:60 by=.5]"`. This would create a range from 50 to 60, in steps of .5.

```{r}
# range for x-values with .5-steps
ggpredict(mpg_model, "hp [50:60 by=.5]")
```

## Choosing representative values

Especially in situations where we have two continuous variables in interaction terms, or where the "grouping" variable is continuous, it is helpful to select representative values of the grouping variable; otherwise, predictions would be made for too many groups, which is no longer helpful when interpreting marginal effects.

You can use

* `"minmax"`: minimum and maximum values (lower and upper bounds) of the variable are used.
* `"meansd"`: uses the mean value as well as one standard deviation below and above the mean value.
* `"zeromax"`: is similar to the `"minmax"` option, however, 0 is always used as the minimum value. This may be useful for predictors that don't have an empirical zero-value.
* `"quart"` calculates and uses the quartiles (lower, median and upper), _including_ minimum and maximum value.
* `"quart2"` calculates and uses the quartiles (lower, median and upper), _excluding_ minimum and maximum value.
* `"all"` takes all values of the vector.

```{r}
data(efc)
# short variable label, for plot
attr(efc$c12hour, "label") <- "hours of care"
fit <- lm(barthtot ~ c12hour * c161sex + neg_c_7, data = efc)

mydf <- ggpredict(fit, terms = c("c161sex", "c12hour [meansd]"))
plot(mydf)

mydf <- ggpredict(fit, terms = c("c161sex", "c12hour [quart]"))
plot(mydf)
```

## Transforming values with functions

The brackets in the `terms`-argument also accept the name of a valid function, to (back-)transform predicted values. In this example, an alternative would be to specify that values should be exponentiated, which is indicated by `[exp]` in the `terms`-argument:

```{r}
# x-values and predictions based on exponentiated hp-values
ggpredict(mpg_model, "hp [exp]")
```

It is possible to define any function, including custom functions:

```{r}
# x-values and predictions based on doubled hp-values
hp_double <- function(x) 2 * x
ggpredict(mpg_model, "hp [hp_double]")
```

## Pretty value ranges

This section is intended to show some examples of how the plotted output differs, depending on which value range is used. Some transformations, like polynomial or spline terms, but also quadratic or cubic terms, result in many predicted values. In such situations, predictions for some models lead to memory allocation problems. That is why `ggpredict()` "prettifies" certain value ranges by default, at least for some model types (like mixed models).

To see the difference in the "curvilinear" trend, we use a quadratic term on a standardized variable.
```{r}
library(sjmisc)
library(sjlabelled)
library(lme4)
data(efc)
efc$c12hour <- std(efc$c12hour)
efc$e15relat <- as_label(efc$e15relat)

m <- lmer(
  barthtot ~ c12hour + I(c12hour^2) + neg_c_7 + c160age + c172code + (1 | e15relat),
  data = efc
)

me <- ggpredict(m, terms = "c12hour")
plot(me)
```

### Turn off "prettifying"

As said above, `ggpredict()` "prettifies" the vector, resulting in a smaller set of unique values. This is less memory consuming and may be needed especially for more complex models. You can turn off automatic "prettifying" by adding the `"all"`-shortcut to the `terms`-argument.

```{r}
me <- ggpredict(m, terms = "c12hour [all]")
plot(me)
```

This results in a smooth plot, as all values from the term of interest are taken into account.

### Using different ranges for prettifying

To modify the "prettifying", add the `"n"`-shortcut to the `terms`-argument. This allows you to select a feasible range of values that is smaller (and hence less memory consuming) than `"terms = ... [all]"`, but still produces smoother plots than the default prettifying.

```{r}
me <- ggpredict(m, terms = "c12hour [n=2]")
plot(me)
```

```{r}
me <- ggpredict(m, terms = "c12hour [n=10]")
plot(me)
```

## Marginal effects conditioned on specific values of the covariates

The `typical`-argument determines the function that will be applied to the covariates to hold these terms at constant values. By default, this is the mean value, but other options (like median or mode) are possible as well. Use the `condition`-argument to define other values at which covariates should be held constant. `condition` requires a named vector, with the name indicating the covariate.
```{r}
data(mtcars)
mpg_model <- lm(mpg ~ log(hp) + disp, data = mtcars)

# "disp" is held constant at its mean
ggpredict(mpg_model, "hp [exp]")

# "disp" is held constant at value 200
ggpredict(mpg_model, "hp [exp]", condition = c(disp = 200))
```

## Marginal effects for each level of random effects

Marginal effects can also be calculated for each group level in mixed models. Simply add the name of the related random effects term to the `terms`-argument, and set `type = "re"`. In the following example, we fit a linear mixed model and first simply plot the marginal effects, _not_ conditioned on random effects.

```{r}
library(sjlabelled)
library(lme4)
data(efc)
efc$e15relat <- as_label(efc$e15relat)
m <- lmer(neg_c_7 ~ c12hour + c160age + c161sex + (1 | e15relat), data = efc)
me <- ggpredict(m, terms = "c12hour")
plot(me)
```

Changing the type to `type = "re"` still returns population-level predictions by default. The major difference between `type = "fe"` and `type = "re"` is the uncertainty in the variance parameters. This leads to larger confidence intervals for marginal effects with `type = "re"`.

```{r}
me <- ggpredict(m, terms = "c12hour", type = "re")
plot(me)
```

To compute marginal effects for each grouping level, add the related random term to the `terms`-argument. In this case, confidence intervals are not calculated, but marginal effects are conditioned on each group level of the random effects.

```{r}
me <- ggpredict(m, terms = c("c12hour", "e15relat"), type = "re")
plot(me)
```

Marginal effects, conditioned on random effects, can also be calculated for specific levels only. Add the related values into brackets after the variable name in the `terms`-argument.

```{r}
me <- ggpredict(m, terms = c("c12hour", "e15relat [child,sibling]"), type = "re")
plot(me)
```

If the group factor has too many levels, you can also take a random sample of all possible levels and plot the marginal effects for this subsample of group levels.
To do this, use `terms = " [sample=n]"`.

```{r}
data("sleepstudy")
m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)
me <- ggpredict(m, terms = c("Days", "Subject [sample=8]"), type = "re")
plot(me)
```
ggeffects/inst/doc/ggeffects.html0000644000176200001440000053042013614017616016565 0ustar liggesusers
ggeffects: Marginal Effects of Regression Models

ggeffects: Marginal Effects of Regression Models

Daniel Lüdecke

2020-01-28

Aim of the ggeffects-package

Results of regression models are typically presented as tables that are easy to understand. For more complex models that include interaction or quadratic / spline terms, tables with numbers are less helpful and more difficult to interpret. In such cases, the visualization of marginal effects is far easier to understand and allows one to intuitively grasp how predictors and outcome are associated, even for complex models.

ggeffects computes marginal effects (or: estimated marginal means) at the mean (MEM) or at representative values (MER) from statistical models, i.e. predictions generated by a model when one holds the non-focal variables constant and varies the focal variable(s). The result is returned as a data frame with a consistent structure, especially for further use with ggplot. Definitions of “marginal effects” can be found here.

Since the focus lies on plotting the data (the marginal effects), at least one model term needs to be specified for which the effects are computed. It is also possible to compute marginal effects for model terms, grouped by the levels of another model’s predictor. The package also allows plotting marginal effects for two-, three- or four-way-interactions, or for specific values of a model term only. Examples are shown below.

Short technical note

ggpredict(), ggemmeans() and ggeffect() always return predicted values for the response of a model (or response distribution for Bayesian models).

Typically, ggpredict() returns confidence intervals based on the standard errors as returned by the predict()-function, assuming a normal distribution (+/- 1.96 * SE). If predict() for a certain class does not return standard errors (for example, merMod-objects), these are calculated manually, by the following steps: matrix-multiply X by the parameter vector B to get the predictions, then extract the variance-covariance matrix V of the parameters and compute XVX' to get the variance-covariance matrix of the predictions. The square-root of the diagonal of this matrix gives the standard errors of the predictions, which are then multiplied by 1.96 for the confidence intervals.
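Sketched in R, these steps might look as follows. This is a simplified illustration, not the package's actual implementation; `model` stands for a fitted merMod-object and `newdata` for the data grid the predictions are based on:

```r
# model matrix X for the data grid the predictions are based on
X <- model.matrix(delete.response(terms(model)), newdata)

# predictions: X %*% B, with B = vector of fixed-effects coefficients
predicted <- as.vector(X %*% lme4::fixef(model))

# variance-covariance matrix of the predictions: X V X'
V <- as.matrix(vcov(model))
pred_var <- X %*% V %*% t(X)

# standard errors and normal-approximation confidence intervals
se <- sqrt(diag(pred_var))
conf.low <- predicted - 1.96 * se
conf.high <- predicted + 1.96 * se
```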

For mixed models, if type = "re" or type = "re.zi", the uncertainty in the random effects is accounted for when calculating the standard errors. Hence, in such cases, the intervals may be considered as prediction intervals.

Consistent and tidy structure

The returned data frames always have the same, consistent structure and column names, so it’s easy to create ggplot-plots without the need to re-write the arguments to be mapped in each ggplot-call. x and predicted are the values for the x- and y-axis. conf.low and conf.high could be used as ymin and ymax aesthetics for ribbons to add confidence bands to the plot. group can be used as grouping-aesthetics, or for faceting.
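As a minimal sketch of how these columns map onto ggplot-aesthetics (`mydf` stands for any data frame returned by ggpredict()):

```r
library(ggplot2)

# x/predicted on the axes, conf.low/conf.high as ribbon, group as colour
ggplot(mydf, aes(x = x, y = predicted)) +
  geom_ribbon(aes(ymin = conf.low, ymax = conf.high, group = group), alpha = .1) +
  geom_line(aes(colour = group))
```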

The examples shown here mostly use ggplot2-code for the plots, however, there is also a plot()-method, which is described in the vignette Plotting Marginal Effects.

Marginal effects at the mean

ggpredict() computes predicted values for all possible levels and values from a model’s predictors. In the simplest case, a fitted model is passed as first argument, and the term in question as second argument. Use the raw name of the variable for the terms-argument only - you don’t need to write things like poly(term, 3) or I(term^2) for the terms-argument.
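The corresponding code, as shown in the package vignette (reproduced later in this archive), looks like this:

```r
library(ggeffects)
data(efc)
fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

ggpredict(fit, terms = "c12hour")
```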

As you can see, ggpredict() (and ggeffect() or ggemmeans()) has a nice print()-method, which takes care of printing not too many rows (but always an equally spaced range of values, including minimum and maximum value of the term in question) and giving some extra information. This is especially useful when predicted values are shown depending on the levels of other terms (see below).

The output shows the predicted values for the response at each value from the term c12hour. The data is already in shape for ggplot:
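A minimal plot, following the vignette code (this assumes the model `fit` from the previous example):

```r
library(ggplot2)
theme_set(theme_bw())

mydf <- ggpredict(fit, terms = "c12hour")
ggplot(mydf, aes(x, predicted)) + geom_line()
```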

Marginal effects at the mean by other predictors’ levels

The terms-argument accepts up to four model terms, where the second to fourth terms indicate grouping levels. This allows predictions for the term in question at different levels for other model terms:
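For example, using the model `fit` from above (as in the vignette):

```r
# second term defines the grouping levels
ggpredict(fit, terms = c("c12hour", "c172code"))
```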

Creating a ggplot is pretty straightforward: the colour-aesthetic is mapped with the group-column:
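From the vignette (assumes the model `fit` from above):

```r
mydf <- ggpredict(fit, terms = c("c12hour", "c172code"))
ggplot(mydf, aes(x, predicted, colour = group)) + geom_line()
```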

A second grouping structure can be defined, which will create another column named facet, which - as the name implies - might be used to create a faceted plot:

mydf <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex"))
mydf
#> 
#> # Predicted values of Total score BARTHEL INDEX
#> # x = average number of hours of care per week
#> 
#> # c172code = low level of education
#> #  c161sex = [1] Male
#> 
#>   x | Predicted |   SE |         95% CI
#> ---------------------------------------
#>   0 |     73.95 | 2.35 | [69.35, 78.55]
#>  45 |     62.56 | 2.21 | [58.23, 66.88]
#>  85 |     52.42 | 2.31 | [47.90, 56.95]
#> 170 |     30.89 | 3.08 | [24.85, 36.94]
#> 
#> # c172code = intermediate level of education
#> #  c161sex = [1] Male
#> 
#>   x | Predicted |   SE |         95% CI
#> ---------------------------------------
#>   0 |     74.67 | 1.85 | [71.06, 78.29]
#>  45 |     63.27 | 1.73 | [59.88, 66.66]
#>  85 |     53.14 | 1.91 | [49.40, 56.89]
#> 170 |     31.61 | 2.87 | [25.98, 37.24]
#> 
#> # c172code = high level of education
#> #  c161sex = [1] Male
#> 
#>   x | Predicted |   SE |         95% CI
#> ---------------------------------------
#>   0 |     75.39 | 2.22 | [71.04, 79.74]
#>  45 |     63.99 | 2.18 | [59.73, 68.26]
#>  85 |     53.86 | 2.36 | [49.23, 58.49]
#> 170 |     32.33 | 3.26 | [25.95, 38.71]
#> 
#> # c172code = low level of education
#> #  c161sex = [2] Female
#> 
#>   x | Predicted |   SE |         95% CI
#> ---------------------------------------
#>   0 |     75.00 | 1.83 | [71.41, 78.59]
#>  45 |     63.60 | 1.60 | [60.46, 66.74]
#>  85 |     53.46 | 1.70 | [50.13, 56.80]
#> 170 |     31.93 | 2.61 | [26.83, 37.04]
#> 
#> # c172code = intermediate level of education
#> #  c161sex = [2] Female
#> 
#>   x | Predicted |   SE |         95% CI
#> ---------------------------------------
#>   0 |     75.71 | 1.23 | [73.31, 78.12]
#>  45 |     64.32 | 0.97 | [62.42, 66.21]
#>  85 |     54.18 | 1.21 | [51.81, 56.55]
#> 170 |     32.65 | 2.40 | [27.94, 37.36]
#> 
#> # c172code = high level of education
#> #  c161sex = [2] Female
#> 
#>   x | Predicted |   SE |         95% CI
#> ---------------------------------------
#>   0 |     76.43 | 1.81 | [72.89, 79.98]
#>  45 |     65.03 | 1.71 | [61.68, 68.39]
#>  85 |     54.90 | 1.91 | [51.16, 58.65]
#> 170 |     33.37 | 2.89 | [27.70, 39.05]
#> 
#> Adjusted for:
#> * neg_c_7 = 11.84
ggplot(mydf, aes(x, predicted, colour = group)) + 
  geom_line() + 
  facet_wrap(~facet)

Finally, a third differentiation can be defined, creating another column named panel. In such cases, you may create multiple plots (for each value in panel). ggeffects takes care of this when you use plot() and automatically creates an integrated plot with all panels in one figure.
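From the vignette, where the fourth term creates the panel column (assumes `fit` from above):

```r
mydf <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex", "neg_c_7"))
plot(mydf)
```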

Marginal effects for each model term

If the term argument is either missing or NULL, marginal effects for each model term are calculated. The result is returned as a list, which can be plotted manually (or using the plot() function).

mydf <- ggpredict(fit)
mydf
#> $c12hour
#> 
#> # Predicted values of Total score BARTHEL INDEX
#> # x = average number of hours of care per week
#> 
#>   x | Predicted |   SE |         95% CI
#> ---------------------------------------
#>   0 |     75.44 | 1.12 | [73.26, 77.63]
#>  20 |     70.38 | 0.93 | [68.56, 72.19]
#>  45 |     64.05 | 0.84 | [62.39, 65.70]
#>  65 |     58.98 | 0.93 | [57.16, 60.80]
#>  85 |     53.91 | 1.12 | [51.71, 56.11]
#> 105 |     48.85 | 1.38 | [46.15, 51.55]
#> 125 |     43.78 | 1.67 | [40.52, 47.05]
#> 170 |     32.38 | 2.37 | [27.73, 37.03]
#> 
#> Adjusted for:
#> *  neg_c_7 = 11.84
#> *  c161sex =  1.76
#> * c172code =  1.97
#> 
#> 
#> $neg_c_7
#> 
#> # Predicted values of Total score BARTHEL INDEX
#> # x = Negative impact with 7 items
#> 
#>  x | Predicted |   SE |         95% CI
#> --------------------------------------
#>  6 |     78.17 | 1.56 | [75.11, 81.22]
#>  8 |     73.57 | 1.21 | [71.21, 75.94]
#> 12 |     64.38 | 0.84 | [62.73, 66.03]
#> 14 |     59.79 | 0.97 | [57.88, 61.69]
#> 16 |     55.19 | 1.26 | [52.73, 57.66]
#> 20 |     46.00 | 2.02 | [42.04, 49.97]
#> 22 |     41.41 | 2.44 | [36.63, 46.19]
#> 28 |     27.63 | 3.73 | [20.31, 34.95]
#> 
#> Adjusted for:
#> *  c12hour = 42.20
#> *  c161sex =  1.76
#> * c172code =  1.97
#> 
#> 
#> $c161sex
#> 
#> # Predicted values of Total score BARTHEL INDEX
#> # x = carer's gender
#> 
#> x | Predicted |   SE |         95% CI
#> -------------------------------------
#> 1 |     63.96 | 1.73 | [60.57, 67.35]
#> 2 |     65.00 | 0.97 | [63.11, 66.90]
#> 
#> Adjusted for:
#> *  c12hour = 42.20
#> *  neg_c_7 = 11.84
#> * c172code =  1.97
#> 
#> 
#> $c172code
#> 
#> # Predicted values of Total score BARTHEL INDEX
#> # x = carer's level of education
#> 
#> x | Predicted |   SE |         95% CI
#> -------------------------------------
#> 1 |     64.06 | 1.55 | [61.01, 67.10]
#> 2 |     64.78 | 0.84 | [63.12, 66.43]
#> 3 |     65.49 | 1.62 | [62.32, 68.67]
#> 
#> Adjusted for:
#> * c12hour = 42.20
#> * neg_c_7 = 11.84
#> * c161sex =  1.76
#> 
#> 
#> attr(,"class")
#> [1] "ggalleffects" "list"        
#> attr(,"model.name")
#> [1] "fit"

Two-Way, Three-Way- and Four-Way-Interactions

To plot the marginal effects of interaction terms, simply specify these terms in the terms-argument.
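The vignette code for a two-way interaction (reproduced from the ggeffects.Rmd file later in this archive):

```r
library(sjmisc)
data(efc)

# make categorical
efc$c161sex <- to_factor(efc$c161sex)

# fit model with interaction
fit <- lm(neg_c_7 ~ c12hour + barthtot * c161sex, data = efc)

# select only levels 30, 50 and 70 from continuous variable Barthel-Index
mydf <- ggpredict(fit, terms = c("barthtot [30,50,70]", "c161sex"))
ggplot(mydf, aes(x, predicted, colour = group)) + geom_line()
```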

Since the terms-argument accepts up to four model terms, you can also compute marginal effects for a 3-way-interaction or 4-way-interaction. To plot the marginal effects of three interaction terms, just like before, specify all three terms in the terms-argument.
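The vignette's three-way example (assumes the efc data with a categorical c161sex, as set up above):

```r
# fit model with 3-way-interaction
fit <- lm(neg_c_7 ~ c12hour * barthtot * c161sex, data = efc)

# select only levels 30, 50 and 70 from continuous variable Barthel-Index
mydf <- ggpredict(fit, terms = c("c12hour", "barthtot [30,50,70]", "c161sex"))

ggplot(mydf, aes(x, predicted, colour = group)) +
  geom_line() +
  facet_wrap(~facet)
```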

4-way-interactions are rather confusing to print and plot. When plotting, multiple plots (for each level of the fourth interaction term) are plotted for the remaining three interaction terms. This can easily be done using the plot()-method.
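The vignette's four-way example (again assumes the efc data as set up above):

```r
# fit model with 4-way-interaction
fit <- lm(neg_c_7 ~ c12hour * barthtot * c161sex * c172code, data = efc)

# marginal effects for all 4 interaction terms
pr <- ggpredict(fit, c("c12hour", "barthtot", "c161sex", "c172code"))

# use plot() method, easier than own ggplot-code from scratch
plot(pr)
```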

Polynomial terms and splines

ggpredict() also works for models with polynomial terms or splines. Following code reproduces the plot from ?splines::bs:
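The reproduction from the vignette:

```r
library(splines)
data(women)

fm1 <- lm(weight ~ bs(height, df = 5), data = women)
dat <- ggpredict(fm1, "height")

ggplot(dat, aes(x, predicted)) +
  geom_line() +
  geom_point()
```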

Survival models

ggpredict() also supports coxph-models from the survival-package and is able to either plot risk-scores (the default), probabilities of survival (type = "surv") or cumulative hazards (type = "cumhaz").

Since probabilities of survival and cumulative hazards are changing across time, the time-variable is automatically used as x-axis in such cases, so the terms-argument only needs up to two variables for type = "surv" or type = "cumhaz".
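The vignette's survival example:

```r
data("lung", package = "survival")
# remove category 3 (outlier)
lung <- subset(lung, subset = ph.ecog %in% 0:2)
lung$sex <- factor(lung$sex, labels = c("male", "female"))
lung$ph.ecog <- factor(lung$ph.ecog, labels = c("good", "ok", "limited"))

m <- survival::coxph(survival::Surv(time, status) ~ sex + age + ph.ecog, data = lung)

# predicted risk-scores
ggpredict(m, c("sex", "ph.ecog"))

# probability of survival
ggpredict(m, c("sex", "ph.ecog"), type = "surv")
```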

Labelling the data

ggeffects makes use of the sjlabelled-package and supports labelled data. If the data from the fitted models is labelled, the value and variable label attributes are usually copied to the model frame stored in the model object. ggeffects provides various getter-functions to access these labels, which are returned as character vectors and can be used in ggplot's labs()- or scale_*()-functions.

  • get_title() - a generic title for the plot, based on the model family, like “predicted values” or “predicted probabilities”
  • get_x_title() - the variable label of the first model term in terms.
  • get_y_title() - the variable label of the response.
  • get_legend_title() - the variable label of the second model term in terms.
  • get_x_labels() - value labels of the first model term in terms.
  • get_legend_labels() - value labels of the second model term in terms.

The data frame returned by ggpredict(), ggemmeans() or ggeffect() must be used as argument to one of the above function calls.
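From the vignette (assumes `mydf` was created with three terms, as in the facet example above):

```r
get_x_title(mydf)
get_y_title(mydf)

ggplot(mydf, aes(x, predicted, colour = group)) +
  geom_line() +
  facet_wrap(~facet) +
  labs(
    x = get_x_title(mydf),
    y = get_y_title(mydf),
    colour = get_legend_title(mydf)
  )
```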

ggeffects/inst/doc/technical_differencepredictemmeans.html

Technical Details: Difference between ggpredict() and ggemmeans()

Daniel Lüdecke

2020-01-28

ggpredict() and ggemmeans() compute predicted values for all possible levels or values from a model’s predictor. Basically, ggpredict() wraps the predict()-method for the related model, while ggemmeans() wraps the emmeans()-method from the emmeans-package. Both ggpredict() and ggemmeans() do some data-preparation to bring the data in shape for the newdata-argument (predict()) and the at-argument (emmeans()), respectively. It is recommended to read the general introduction first, if you haven’t done so yet.

For models without categorical predictors, the results from ggpredict() and ggemmeans() are identical (except for some slight differences in the associated confidence intervals, which are, however, negligible).

As can be seen, the continuous predictor neg_c_7 is held constant at its mean value, 11.83. For categorical predictors, ggpredict() and ggemmeans() behave differently. While ggpredict() uses the reference level of each categorical predictor to hold it constant, ggemmeans() - like ggeffect() - averages over the proportions of the categories of factors.

In this case, one would obtain the same results for ggpredict() and ggemmeans() again, if condition is used to define specific levels at which variables, in our case the factor e42dep, should be held constant.

Creating plots is as simple as described in the vignette Plotting Marginal Effects.

ggeffects/inst/doc/introduction_randomeffects.html

Introduction: Marginal Effects for Random Effects Models

Daniel Lüdecke

2020-01-28

This vignette shows how to calculate marginal effects that take the random-effect variances for mixed models into account.

Marginal effects for mixed effects models

Basically, the type of predictions, i.e. whether to account for the uncertainty of random effects or not, can be set with the type-argument. The default, type = "fe", means that predictions are on the population-level and do not account for the random effect variances. Intervals are confidence intervals for the predicted values.

When type = "re", the predicted values are still on the population-level. However, the random effect variances are taken into account, meaning that the intervals are actually prediction intervals and become larger. More technically speaking, type = "re" accounts for the uncertainty of the fixed effects conditional on the estimates of the random-effect variances and conditional modes (BLUPs).

The random-effect variance is the mean random-effect variance. Calculation is based on the proposal from Johnson et al. 2014, which is also implemented in functions like performance::r2() or insight::get_variance() to get r-squared values or random effect variances for mixed models with more complex random effects structures.

As can be seen, compared to the previous example with type = "fe", the predicted values are identical (both are on the population-level). However, the standard errors, and thus the resulting confidence (or prediction) intervals, are much larger.

The reason why both type = "fe" and type = "re" return predictions at the population-level is that ggpredict() returns predicted values of the response at specific levels of given model predictors, which are defined in the data frame that is passed to the newdata-argument (of predict()). The data frame requires data from all model terms, including random effect terms. This again requires choosing certain levels or values for each random effect term as well, or setting those terms to zero or NA (for population-level). Since there is no general rule as to which level(s) of random effect terms to choose in order to represent the random effects structure in the data, using the population-level seems the clearest and most consistent approach.

To get predicted values for a specific level of the random effect term, simply define this level in the condition-argument.
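A hypothetical sketch of this: the condition-argument is real, but the model `m` and the level "sibling" of the random effect term `e15relat` are assumptions for illustration, not code from this vignette:

```r
# Hypothetical: hold the random effect term at one specific level via `condition`
ggpredict(m, terms = "c12hour", type = "re", condition = c(e15relat = "sibling"))
```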

Finally, it is possible to obtain predicted values by simulating from the model, where predictions are based on simulate().

Marginal effects for zero-inflated mixed models

For zero-inflated mixed effects models, typically fitted with the glmmTMB or GLMMadaptive packages, predicted values can be conditioned on

  • the fixed effects of the conditional model only (type = "fe")
  • the fixed effects and zero-inflation component (type = "fe.zi")
  • the fixed effects of the conditional model only (population-level), taking the random-effect variances into account (type = "re")
  • the fixed effects and zero-inflation component (population-level), taking the random-effect variances into account (type = "re.zi")
  • all model parameters (type = "sim")

Similar to mixed models without zero-inflation component, type = "fe" and type = "re" for glmmTMB-models (with zero-inflation) both return predictions on the population-level, where the latter option accounts for the uncertainty of the random effects. In short, predict(..., type = "link") is called (however, predicted values are back-transformed to the response scale).

For type = "fe.zi", the predicted response value is the expected value mu*(1-p) without conditioning on random effects. Since the zero inflation and the conditional model are working in “opposite directions”, a higher expected value for the zero inflation means a lower response, but a higher value for the conditional model means a higher response. While it is possible to calculate predicted values with predict(..., type = "response"), standard errors and confidence intervals cannot be derived directly from the predict()-function. Thus, confidence intervals for type = "fe.zi" are based on quantiles of simulated draws from a multivariate normal distribution (see also Brooks et al. 2017, pp.391-392 for details).

For type = "re.zi", the predicted response value is the expected value mu*(1-p), accounting for the random-effect variances. Intervals are calculated in the same way as for type = "fe.zi", except that the mean random effect variance is considered and thus prediction intervals rather than confidence intervals are returned.

Finally, it is possible to obtain predicted values by simulating from the model, where predictions are based on simulate() (see Brooks et al. 2017, pp.392-393 for details). To achieve this, use type = "sim".

Marginal effects for each level of random effects

Marginal effects can also be calculated for each group level in mixed models. Simply add the name of the related random effects term to the terms-argument, and set type = "re".

In the following example, we fit a linear mixed model and first simply plot the marginal effects, not conditioned on random-effect variances.

Changing the type to type = "re" still returns population-level predictions by default. Recall that the major difference between type = "fe" and type = "re" is the uncertainty in the variance parameters. This leads to larger confidence intervals (i.e. prediction intervals) for marginal effects with type = "re".

To compute marginal effects for each grouping level, add the related random term to the terms-argument. In this case, confidence intervals are not calculated, but marginal effects are conditioned on each group level of the random effects.

Marginal effects, conditioned on random effects, can also be calculated for specific levels only. Add the related values into brackets after the variable name in the terms-argument.

The most complex plot in this scenario would be a term (c12hour) at certain values of two other terms (c161sex, c160age) for specific levels of random effects (e15relat), so we have four variables in the terms-argument.

If the group factor has too many levels, you can also take a random sample of all possible levels and plot the marginal effects for this subsample of group levels. To do this, use term = "<groupfactor> [sample=n]".
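A hypothetical sketch of the [sample=n] syntax described above (the model `m` and the group factor `e15relat` are assumptions for illustration):

```r
# Hypothetical: marginal effects for a random sample of 8 levels of the group factor
ggpredict(m, terms = c("c12hour", "e15relat [sample=8]"), type = "re")
```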

You can also add the observed data points for each group using add.data = TRUE.

References

Brooks ME, Kristensen K, Benthem KJ van, Magnusson A, Berg CW, Nielsen A, et al. glmmTMB Balances Speed and Flexibility Among Packages for Zero-inflated Generalized Linear Mixed Modeling. The R Journal. 2017;9: 378–400.

Johnson PC, O’Hara RB. 2014. Extension of Nakagawa & Schielzeth’s R2GLMM to random slopes models. Methods Ecol Evol, 5: 944-946. (doi: 10.1111/2041-210X.12225)

ggeffects/inst/doc/introduction_plotcustomize.Rmd

---
title: "Introduction: Customize Plot Appearance"
author: "Daniel Lüdecke"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Introduction: Customize Plot Appearance}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r set-options, echo = FALSE}
knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 4, message = FALSE, warning = FALSE)
options(width = 800)
if (!requireNamespace("ggplot2", quietly = TRUE)) {
  knitr::opts_chunk$set(eval = FALSE)
}
```

This vignette demonstrates how to customize plots created with the `plot()`-method of the **ggeffects**-package. `plot()` returns an object of class **ggplot**, so it is easy to apply further modifications to the resulting plot. You may want to load the ggplot2-package to do this: `library(ggplot2)`.

Let's start with a default-plot:

```{r}
library(ggeffects)
library(ggplot2)
data(mtcars)
m <- lm(mpg ~ gear + as.factor(cyl) + wt, data = mtcars)

# continuous x-axis
dat <- ggpredict(m, terms = c("gear", "wt"))

# discrete x-axis
dat_categorical <- ggpredict(m, terms = c("cyl", "wt"))

# default plot
plot(dat)
```

## Changing Plot and Axis Titles

The simplest thing is to change the titles of the plot, x- and y-axis. This can be done with `ggplot2::labs()`:

```{r}
plot(dat) +
  labs(
    x = "Number of forward gears",
    y = "Miles/(US) gallon",
    title = "Predicted mean miles per gallon"
  )
```

## Changing the Legend Title

The legend-title can also be changed using the `labs()`-function. The legend in ggplot-objects refers to the aesthetic used for the grouping variable, which is by default the `colour`, i.e. the plot is constructed in the following way:

```{r eval=FALSE}
ggplot(data, aes(x = x, y = predicted, colour = group))
```

### Plots with Default Colors

Hence, using `colour` in `labs()` changes the legend-title:

```{r}
plot(dat) + labs(colour = "Weight (1000 lbs)")
```

### Black-and-White Plots

For black-and-white plots, the group-aesthetic is mapped to different _linetypes_, not to different colours. Thus, the legend-title for black-and-white plots can be changed using `linetype` in `labs()`:

```{r}
plot(dat, colors = "bw") + labs(linetype = "Weight (1000 lbs)")
```

### Black-and-White Plots with Categorical Predictor

If the variable on the x-axis is discrete for a black-and-white plot, the group-aesthetic is mapped to different _shapes_, so the following code must be used to change the legend title:

```{r}
plot(dat_categorical, colors = "bw") + labs(shape = "Weight (1000 lbs)")
```

## Changing the x-Axis Appearance

The x-axis for plots returned from `plot()` is always _continuous_, even for discrete x-axis-variables. The reason for this is that many users are used to plots that connect the data points with lines, which is only possible for continuous x-axes. You can do this using the `connect.lines`-argument:

```{r}
plot(dat_categorical, connect.lines = TRUE)
```

### Categorical Predictors

Since the x-axis is continuous (i.e. `ggplot2::scale_x_continuous()`), you can use `scale_x_continuous()` to modify the x-axis, and change breaks, limits or labels.

```{r}
plot(dat_categorical) +
  scale_x_continuous(labels = c("four", "six", "eight"), breaks = c(1, 2, 3))
```

### Continuous Predictors

Or for continuous variables:

```{r}
plot(dat) + scale_x_continuous(breaks = 3:5, limits = c(2, 6))
```

## Changing the y-Axis Appearance

Arguments in `...` are passed down to `ggplot2::scale_y_continuous()` (resp. `ggplot2::scale_y_log10()`, if `log.y = TRUE`), so you can control the appearance of the y-axis by putting the arguments directly into the call to `plot()`:

```{r}
plot(dat_categorical, breaks = seq(12, 30, 2), limits = c(12, 30))
```

## Changing the Legend Labels

The legend labels can also be changed using a `scale_*()`-function from **ggplot**. Depending on the color-setting (see section **Changing the Legend Title**), the following functions can be used to change the legend labels:

* `scale_colour_manual()` resp. `scale_colour_brewer()`
* `scale_linetype_manual()`
* `scale_shape_manual()`

Since you overwrite an existing "color" scale, you typically need to provide the `values` or `palette`-argument, to manually set the colors, linetypes or shapes.

### Plots with Default Colors

For plots using default colors:

```{r}
plot(dat) + scale_colour_brewer(palette = "Set1", labels = c("-1 SD", "Mean", "+1 SD"))
```

### Black-and-White Plots

For black-and-white plots:

```{r}
plot(dat, colors = "bw") + scale_linetype_manual(values = 15:17, labels = c("-1 SD", "Mean", "+1 SD"))
```

### Black-and-White Plots with Categorical Predictor

For black-and-white plots with categorical x-axis:

```{r}
plot(dat_categorical, colors = "bw") + scale_shape_manual(values = 1:3, labels = c("-1 SD", "Mean", "+1 SD"))
```

ggeffects/inst/doc/ggeffects.Rmd

---
title: "ggeffects: Marginal Effects of Regression Models"
author: "Daniel Lüdecke"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{ggeffects: Marginal Effects of Regression Models}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r set-options, echo = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  dev = "png",
  fig.width = 7,
  fig.height = 3.5,
  message = FALSE,
  warning = FALSE)
options(width = 800)
if (!requireNamespace("ggplot2", quietly = TRUE) ||
    !requireNamespace("splines", quietly = TRUE) ||
!requireNamespace("sjmisc", quietly = TRUE)) { knitr::opts_chunk$set(eval = FALSE) } ``` # Aim of the ggeffects-package Results of regression models are typically presented as tables that are easy to understand. For more complex models that include interaction or quadratic / spline terms, tables with numbers are less helpful and more difficult to interpret. In such cases, the visualization of _marginal effects_ is far easier to understand and allows to intuitively get the idea of how predictors and outcome are associated, even for complex models. **ggeffects** computes marginal effects (or: _estimated marginal means_) at the mean (MEM) or at representative values (MER) from statistical models, i.e. predictions generated by a model when one holds the non-focal variables constant and varies the focal variable(s). The result is returned as data frame with consistent structure, especially for further use with [ggplot](https://cran.r-project.org/package=ggplot2). Definitions of "marginal effects" [can be found here](https://stats.stackexchange.com/tags/marginal-effect/info). Since the focus lies on plotting the data (the marginal effects), at least one model term needs to be specified for which the effects are computed. It is also possible to compute marginal effects for model terms, grouped by the levels of another model's predictor. The package also allows plotting marginal effects for two-, three- or four-way-interactions, or for specific values of a model term only. Examples are shown below. ## Short technical note `ggpredict()`, `ggemmeans()` and `ggeffect()` always return predicted values for the _response_ of a model (or _response distribution_ for Bayesian models). Typically, `ggpredict()` returns confidence intervals based on the standard errors as returned by the `predict()`-function, assuming normal distribution (`+/- 1.96 * SE`). 
If `predict()` for a certain class does _not_ return standard errors (for example, *merMod*-objects), these are calculated manually, by following steps: matrix-multiply `X` by the parameter vector `B` to get the predictions, then extract the variance-covariance matrix `V` of the parameters and compute `XVX'` to get the variance-covariance matrix of the predictions. The square-root of the diagonal of this matrix represent the standard errors of the predictions, which are then multiplied by 1.96 for the confidence intervals. For mixed models, if `type = "re"` or `type = "re.zi"`, the uncertainty in the random effects is accounted for when calculating the standard errors. Hence, in such cases, the intervals may be considered as _prediction intervals_. ## Consistent and tidy structure The returned data frames always have the same, consistent structure and column names, so it's easy to create ggplot-plots without the need to re-write the arguments to be mapped in each ggplot-call. `x` and `predicted` are the values for the x- and y-axis. `conf.low` and `conf.high` could be used as `ymin` and `ymax` aesthetics for ribbons to add confidence bands to the plot. `group` can be used as grouping-aesthetics, or for faceting. The examples shown here mostly use **ggplot2**-code for the plots, however, there is also a `plot()`-method, which is described in the vignette [Plotting Marginal Effects](introduction_plotmethod.html). # Marginal effects at the mean `ggpredict()` computes predicted values for all possible levels and values from a model's predictors. In the simplest case, a fitted model is passed as first argument, and the term in question as second argument. Use the raw name of the variable for the `terms`-argument only - you don't need to write things like `poly(term, 3)` or `I(term^2)` for the `terms`-argument. 
```{r} library(ggeffects) data(efc) fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc) ggpredict(fit, terms = "c12hour") ``` As you can see, `ggpredict()` (and `ggeffect()` or `ggemmeans()`) has a nice `print()`-method, which takes care of printing not too many rows (but always an equally spaced range of values, including minimum and maximum value of the term in question) and giving some extra information. This is especially useful when predicted values are shown depending on the levels of other terms (see below). The output shows the predicted values for the response at each value from the term _c12hour_. The data is already in shape for ggplot: ```{r} library(ggplot2) theme_set(theme_bw()) mydf <- ggpredict(fit, terms = "c12hour") ggplot(mydf, aes(x, predicted)) + geom_line() ``` # Marginal effects at the mean by other predictors' levels The `terms`-argument accepts up to four model terms, where the second to fourth terms indicate grouping levels. This allows predictions for the term in question at different levels for other model terms: ```{r} ggpredict(fit, terms = c("c12hour", "c172code")) ``` Creating a ggplot is pretty straightforward: the `colour`-aesthetics is mapped with the `group`-column: ```{r} mydf <- ggpredict(fit, terms = c("c12hour", "c172code")) ggplot(mydf, aes(x, predicted, colour = group)) + geom_line() ``` A second grouping structure can be defined, which will create another column named `facet`, which - as the name implies - might be used to create a facted plot: ```{r} mydf <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex")) mydf ggplot(mydf, aes(x, predicted, colour = group)) + geom_line() + facet_wrap(~facet) ``` Finally, a third differentation can be defined, creating another column named `panel`. In such cases, you may create multiple plots (for each value in `panel`). **ggeffects** takes care of this when you use `plot()` and automatically creates an integrated plot with all panels in one figure. 
```{r fig.height = 8} mydf <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex", "neg_c_7")) plot(mydf) ``` # Marginal effects for each model term If the `term` argument is either missing or `NULL`, marginal effects for each model term are calculated. The result is returned as a list, which can be plotted manually (or using the `plot()` function). ```{r} mydf <- ggpredict(fit) mydf ``` # Two-Way, Three-Way- and Four-Way-Interactions To plot the marginal effects of interaction terms, simply specify these terms in the `terms`-argument. ```{r} library(sjmisc) data(efc) # make categorical efc$c161sex <- to_factor(efc$c161sex) # fit model with interaction fit <- lm(neg_c_7 ~ c12hour + barthtot * c161sex, data = efc) # select only levels 30, 50 and 70 from continuous variable Barthel-Index mydf <- ggpredict(fit, terms = c("barthtot [30,50,70]", "c161sex")) ggplot(mydf, aes(x, predicted, colour = group)) + geom_line() ``` Since the `terms`-argument accepts up to four model terms, you can also compute marginal effects for a 3-way-interaction or 4-way-interaction. To plot the marginal effects of three interaction terms, just like before, specify all three terms in the `terms`-argument. ```{r} # fit model with 3-way-interaction fit <- lm(neg_c_7 ~ c12hour * barthtot * c161sex, data = efc) # select only levels 30, 50 and 70 from continuous variable Barthel-Index mydf <- ggpredict(fit, terms = c("c12hour", "barthtot [30,50,70]", "c161sex")) ggplot(mydf, aes(x, predicted, colour = group)) + geom_line() + facet_wrap(~facet) ``` 4-way-interactions are rather confusing to print and plot. When plotting, multiple plots (for each level of the fourth interaction term) are plotted for the remaining three interaction terms. This can easily be done using the [`plot()`-method](https://strengejacke.github.io/ggeffects/articles/introduction_plotmethod.html). 
```{r fig.height = 8} # fit model with 4-way-interaction fit <- lm(neg_c_7 ~ c12hour * barthtot * c161sex * c172code, data = efc) # marginal effects for all 4 interaction terms pr <- ggpredict(fit, c("c12hour", "barthtot", "c161sex", "c172code")) # use plot() method, easier than own ggplot-code from scratch plot(pr) ``` # Polynomial terms and splines `ggpredict()` also works for models with polynomial terms or splines. Following code reproduces the plot from `?splines::bs`: ```{r} library(splines) data(women) fm1 <- lm(weight ~ bs(height, df = 5), data = women) dat <- ggpredict(fm1, "height") ggplot(dat, aes(x, predicted)) + geom_line() + geom_point() ``` # Survival models `ggpredict()` also supports `coxph`-models from the **survival**-package and is able to either plot risk-scores (the default), probabilities of survival (`type = "surv"`) or cumulative hazards (`type = "cumhaz"`). Since probabilities of survival and cumulative hazards are changing across time, the time-variable is automatically used as x-axis in such cases, so the `terms`-argument only needs up to **two** variables for `type = "surv"` or `type = "cumhaz"`. ```{r} data("lung", package = "survival") # remove category 3 (outlier) lung <- subset(lung, subset = ph.ecog %in% 0:2) lung$sex <- factor(lung$sex, labels = c("male", "female")) lung$ph.ecog <- factor(lung$ph.ecog, labels = c("good", "ok", "limited")) m <- survival::coxph(survival::Surv(time, status) ~ sex + age + ph.ecog, data = lung) # predicted risk-scores ggpredict(m, c("sex", "ph.ecog")) ``` ```{r} # probability of survival ggpredict(m, c("sex", "ph.ecog"), type = "surv") ``` # Labelling the data **ggeffects** makes use of the [sjlabelled-package](https://cran.r-project.org/package=sjlabelled) and supports [labelled data](https://cran.r-project.org/package=sjlabelled/vignettes/intro_sjlabelled.html). 
If the data from the fitted models is labelled, the value and variable label attributes are usually copied to the model frame stored in the model object. **ggeffects** provides various _getter_-functions to access these labels, which are returned as character vectors and can be used in ggplot's `labs()`- or `scale_*()`-functions.

* `get_title()` - a generic title for the plot, based on the model family, like "predicted values" or "predicted probabilities"
* `get_x_title()` - the variable label of the first model term in `terms`.
* `get_y_title()` - the variable label of the response.
* `get_legend_title()` - the variable label of the second model term in `terms`.
* `get_x_labels()` - value labels of the first model term in `terms`.
* `get_legend_labels()` - value labels of the second model term in `terms`.

The data frame returned by `ggpredict()`, `ggemmeans()` or `ggeffect()` must be passed as argument to one of the above function calls.

```{r}
get_x_title(mydf)
get_y_title(mydf)

ggplot(mydf, aes(x, predicted, colour = group)) +
  geom_line() +
  facet_wrap(~facet) +
  labs(
    x = get_x_title(mydf),
    y = get_y_title(mydf),
    colour = get_legend_title(mydf)
  )
```

ggeffects/inst/doc/practical_robustestimation.R

## ----set-options, echo =
FALSE----
knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 5, message = FALSE, warning = FALSE)
if (!requireNamespace("sandwich", quietly = TRUE) || !requireNamespace("clubSandwich", quietly = TRUE)) {
  knitr::opts_chunk$set(eval = FALSE)
}

## ----
library(ggeffects)
set.seed(123)

# example taken from "?clubSandwich::vcovCR"
m <- 8
cluster <- factor(rep(LETTERS[1:m], 3 + rpois(m, 5)))
n <- length(cluster)
X <- matrix(rnorm(3 * n), n, 3)
nu <- rnorm(m)[cluster]
e <- rnorm(n)
y <- X %*% c(.4, .3, -.3) + nu + e
dat <- data.frame(y, X, cluster, row = 1:n)

# fit linear model
model <- lm(y ~ X1 + X2 + X3, data = dat)

## ----message = TRUE----
ggpredict(model, "X1")

## ----message = FALSE----
me <- ggpredict(model, "X1")
plot(me)

## ----message = TRUE----
ggpredict(model, "X1", vcov.fun = "vcovHC", vcov.type = "HC0")

## ----message = FALSE----
me <- ggpredict(model, "X1", vcov.fun = "vcovHC", vcov.type = "HC0")
plot(me)

## ----message = TRUE----
ggpredict(
  model, "X1",
  vcov.fun = "vcovCR", vcov.type = "CR0",
  vcov.args = list(cluster = dat$cluster)
)

## ----message = FALSE----
me <- ggpredict(
  model, "X1",
  vcov.fun = "vcovCR", vcov.type = "CR0",
  vcov.args = list(cluster = dat$cluster)
)
plot(me)

ggeffects/inst/doc/introduction_randomeffects.R

## ----set-options, echo =
FALSE----
knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 4, message = FALSE, warning = FALSE)
options(width = 800)
if (!requireNamespace("ggplot2", quietly = TRUE) || !requireNamespace("lme4", quietly = TRUE) || !requireNamespace("glmmTMB", quietly = TRUE) || !requireNamespace("sjlabelled", quietly = TRUE)) {
  knitr::opts_chunk$set(eval = FALSE)
}

## ----
library(ggeffects)
library(lme4)
data(sleepstudy)
m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)

pr <- ggpredict(m, "Days")
pr
plot(pr)

## ----
pr <- ggpredict(m, "Days", type = "re")
pr
plot(pr)

## ----
ggpredict(m, "Days", type = "re", condition = c(Subject = 330))

## ----
ggpredict(m, "Days", type = "sim")

## ----
library(glmmTMB)
data(Salamanders)
m <- glmmTMB(
  count ~ spp + mined + (1 | site),
  ziformula = ~ spp + mined,
  family = truncated_poisson,
  data = Salamanders
)

## ----
ggpredict(m, "spp")
ggpredict(m, "spp", type = "re")

## ----
ggpredict(m, "spp", type = "fe.zi")

## ----
ggpredict(m, "spp", type = "re.zi")

## ----
ggpredict(m, "spp", type = "sim")

## ----
library(sjlabelled)
data(efc)
efc$e15relat <- as_label(efc$e15relat)
m <- lmer(neg_c_7 ~ c12hour + c160age + c161sex + (1 | e15relat), data = efc)
me <- ggpredict(m, terms = "c12hour")
plot(me)

## ----
me <- ggpredict(m, terms = "c12hour", type = "re")
plot(me)

## ----
me <- ggpredict(m, terms = c("c12hour", "e15relat"), type = "re")
plot(me)

## ----
me <- ggpredict(m, terms = c("c12hour", "e15relat [child,sibling]"), type = "re")
plot(me)

## ----fig.height=6----
me <- ggpredict(
  m,
  terms = c("c12hour", "c161sex", "c160age", "e15relat [child,sibling]"),
  type = "re"
)
plot(me)

## ----
set.seed(123)
m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)
me <- ggpredict(m, terms = c("Days", "Subject [sample=7]"), type = "re")
plot(me)

## ----
plot(me, add.data = TRUE)

ggeffects/inst/doc/introduction_effectsatvalues.html

Introduction: Marginal Effects at Specific Values

Daniel Lüdecke

2020-01-28

Marginal effects at specific values or levels

This vignette shows how to calculate marginal effects at specific values or levels for the terms of interest. It is recommended to read the general introduction first, if you haven’t done so yet.

The terms-argument not only defines the model terms of interest, but each model term can be limited to certain values. This allows computing and plotting marginal effects for (grouping) terms at specific values only, or defining values for the main effect of interest.

There are several options to define these values, which always should be placed in square brackets directly after the term name and can vary for each model term.

  1. Concrete values are separated by a comma: terms = "c172code [1,3]". For factors, you could also use factor levels, e.g. terms = "Species [setosa,versicolor]".

  2. Ranges are specified with a colon: terms = c("c12hour [30:80]", "c172code [1,3]"). This would plot all values from 30 to 80 for the variable c12hour. By default, the step size is 1, i.e. [1:4] would create the range 1, 2, 3, 4. You can choose different step sizes with by, e.g. [1:4 by=.5].

  3. Convenient shortcuts to calculate common values like mean +/- 1 SD (terms = "c12hour [meansd]"), quartiles (terms = "c12hour [quart]") or minimum and maximum values (terms = "c12hour [minmax]"). See values_at() for the different options.

  4. A function name. The function is then applied to all unique values of the indicated variable, e.g. terms = "hp [exp]". You can also define own functions, and pass the name of it to the terms-values, e.g. terms = "hp [own_function]".

  5. If the first variable specified in terms is a numeric vector for which no specific values are given, a “pretty range” is calculated (see pretty_range()) to avoid memory allocation problems for vectors with many unique values. To select all values, use the [all]-tag, e.g. terms = "mpg [all]". If a numeric vector is specified as second or third variable in terms (i.e. if this vector represents a grouping structure), representative values (see values_at()) are chosen, which are typically mean +/- SD.

  6. To create a pretty range that should be smaller or larger than the default range (i.e. if no specific values would be given), use the n-tag, e.g. terms = "age [n=5]" or terms = "age [n = 12]". Larger values for n return a larger range of predicted values.

  7. Especially useful for plotting group levels of random effects with many levels is the sample-option, e.g. terms = "Subject [sample=9]", which will sample nine values from all possible values of the variable Subject.
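To illustrate several of these options at once, a few sketched calls might look as follows. This assumes a fitted model fit with the predictors c12hour and c172code, as used elsewhere in the package vignettes; the exact values are placeholders.

```r
# values and ranges inside square brackets in the terms-argument
# (model and variable names are assumed, not part of this vignette's code)
ggpredict(fit, terms = "c172code [1,3]")                        # concrete values
ggpredict(fit, terms = c("c12hour [30:80 by=10]", "c172code"))  # range with step size
ggpredict(fit, terms = "c12hour [meansd]")                      # mean +/- 1 SD
ggpredict(fit, terms = "c12hour [n=5]")                         # small pretty range
```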

Specific values and value range

Defining value ranges is especially useful when variables are, for instance, log-transformed. ggpredict() then typically uses only the range of the log-transformed variable, which is in most cases not what we want. In such situations, specify the range in the terms-argument.

By default, the step size for a range is 1, like 50, 51, 52, .... If you need a different step size, use by=<stepsize> inside the brackets, e.g. "hp [50:60 by=.5]". This would create a range from 50 to 60, with .5er steps.
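As a minimal sketch, assuming a model with a log-transformed predictor (the data frame d and the variable names outcome and income are made up for illustration):

```r
# with log(income) in the model, the default range used by ggpredict()
# for "income" is often not the range we want to see predictions for
m <- lm(outcome ~ log(income), data = d)

# so specify the range (and step size) explicitly instead
ggpredict(m, terms = "income [1000:5000 by=500]")
```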

Choosing representative values

Especially in situations where we have two continuous variables in an interaction term, or where the “grouping” variable is continuous, it is helpful to select representative values of the grouping variable. Otherwise, predictions would be made for too many groups, which makes the marginal effects harder to interpret.

You can use

  • "minmax": minimum and maximum values (lower and upper bounds) of the variable are used.
  • "meansd": uses the mean value as well as one standard deviation below and above mean value.
  • "zeromax": is similar to the "minmax" option, however, 0 is always used as minimum value. This may be useful for predictors that don’t have an empirical zero-value.
  • "quart" calculates and uses the quartiles (lower, median and upper), including minimum and maximum value.
  • "quart2" calculates and uses the quartiles (lower, median and upper), excluding minimum and maximum value.
  • "all" takes all values of the vector.

Pretty value ranges

This section shows some examples of how the plotted output differs, depending on which value range is used. Some transformations, like polynomial or spline terms, but also quadratic or cubic terms, result in many predicted values. In such situations, predictions for some models lead to memory allocation problems. That is why ggpredict() “prettifies” certain value ranges by default, at least for some model types (like mixed models).

To see the difference in the “curvilinear” trend, we use a quadratic term on a standardized variable.
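As a sketch with simulated data (all object names here are made up, since the original code chunk is not shown in this text rendering), such a model could look like this:

```r
# quadratic term on a standardized variable
set.seed(123)
d <- data.frame(x = as.numeric(scale(rnorm(200))))
d$y <- 2 * d$x - d$x^2 + rnorm(200)

m <- lm(y ~ x + I(x^2), data = d)

# by default, a "prettified" (reduced) value range is used for x
ggpredict(m, "x")
```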

Turn off “prettifying”

As said above, ggpredict() “prettifies” the vector, resulting in a smaller set of unique values. This is less memory consuming and may be needed especially for more complex models.

You can turn off automatic “prettifying” by adding the "all"-shortcut to the terms-argument.
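For instance, assuming a fitted model m with a quadratic term for a predictor x:

```r
# no prettifying: every observed value of x enters the predictions
ggpredict(m, "x [all]")
```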

This results in a smooth plot, as all values from the term of interest are taken into account.

Using different ranges for prettifying

To modify the “prettifying”, add the "n"-shortcut to the terms-argument. This allows you to select a feasible range of values that is smaller (and hence less memory consuming) than "terms = ... [all]", but still produces smoother plots than the default prettifying.
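Assuming a fitted model m with predictor x, the n-shortcut can be sketched as:

```r
ggpredict(m, "x [n=5]")   # coarser pretty range, roughly five values
ggpredict(m, "x [n=50]")  # finer range, smoother plots, more memory
```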

Marginal effects conditioned on specific values of the covariates

The typical-argument determines the function that will be applied to the covariates to hold these terms at constant values. By default, this is the mean value, but other options (like median or mode) are possible as well.

Use the condition-argument to define other values at which covariates should be held constant. condition requires a named vector, with the name indicating the covariate.
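A sketched call (model and covariate names are placeholders, not from this vignette's hidden code):

```r
# hold covariates at chosen values instead of the default (mean)
ggpredict(fit, terms = "c12hour", condition = c(c172code = 1, c161sex = 2))
```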

Marginal effects for each level of random effects

Marginal effects can also be calculated for each group level in mixed models. Simply add the name of the related random effects term to the terms-argument, and set type = "re".

In the following example, we fit a linear mixed model and first simply plot the marginal effects, not conditioned on random effects.

Changing the type to type = "re" still returns population-level predictions by default. The major difference between type = "fe" and type = "re" is that the latter takes the uncertainty in the variance parameters into account, which leads to larger confidence intervals for marginal effects with type = "re".

To compute marginal effects for each grouping level, add the related random term to the terms-argument. In this case, confidence intervals are not calculated, but marginal effects are conditioned on each group level of the random effects.
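For example, for a mixed model m with the random effects term (1 | e15relat), as fitted in the accompanying vignette script, group-level marginal effects can be requested like this:

```r
me <- ggpredict(m, terms = c("c12hour", "e15relat"), type = "re")
plot(me)
```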

Marginal effects, conditioned on random effects, can also be calculated for specific levels only. Add the related values into brackets after the variable name in the terms-argument.

If the group factor has too many levels, you can also take a random sample of all possible levels and plot the marginal effects for this subsample of group levels. To do this, use term = "<groupfactor> [sample=n]".
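Using the sleepstudy data from lme4, as in the accompanying vignette script, this looks like:

```r
library(ggeffects)
library(lme4)
data(sleepstudy)

set.seed(123)
m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)

# sample seven subjects and compute group-level marginal effects
me <- ggpredict(m, terms = c("Days", "Subject [sample=7]"), type = "re")
plot(me)
```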

ggeffects/inst/doc/practical_logisticmixedmodel.Rmd

---
title: "Practical example: Logistic Mixed Effects Model with Interaction Term"
author: "Daniel Lüdecke"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Practical example: Logistic Mixed Effects Model with Interaction Term}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r set-options, echo = FALSE}
knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 5, message = FALSE, warning = FALSE)
if (!requireNamespace("magrittr", quietly = TRUE) ||
    !requireNamespace("lme4", quietly = TRUE) ||
    !requireNamespace("splines", quietly = TRUE) ||
    !requireNamespace("sjmisc", quietly = TRUE)) {
  knitr::opts_chunk$set(eval = FALSE)
}
```

This vignette demonstrates how to use *ggeffects* to compute and plot marginal effects of a logistic regression model. To cover some frequently asked questions by users, we'll fit a mixed model, including an interaction term and a quadratic or spline term. A general introduction to the package can be found in the vignette [marginal effects of regression models](ggeffects.html).

First, we load the required packages and create a sample data set with a binomial and a continuous variable as predictors, as well as a group factor. To avoid convergence warnings, the continuous variable is standardized.

```{r}
library(magrittr)
library(ggeffects)
library(sjmisc)
library(lme4)
library(splines)

set.seed(123)

dat <- data.frame(
  outcome = rbinom(n = 100, size = 1, prob = 0.35),
  var_binom = as.factor(rbinom(n = 100, size = 1, prob = 0.2)),
  var_cont = rnorm(n = 100, mean = 10, sd = 7),
  group = sample(letters[1:4], size = 100, replace = TRUE)
)

dat$var_cont <- sjmisc::std(dat$var_cont)
```

## Simple Logistic Mixed Effects Model

We start by fitting a simple mixed effects model.
```{r} m1 <- glmer( outcome ~ var_binom + var_cont + (1 | group), data = dat, family = binomial(link = "logit") ) ``` For a discrete variable, marginal effects for all levels are calculated by default. For continuous variables, a pretty range of values is generated. See more details about value ranges in the vignette [marginal effects at specific values](introduction_effectsatvalues.html). For logistic regression models, since *ggeffects* returns marginal effects on the response scale, the predicted values are predicted _probabilities_. Furthermore, for mixed models, the predicted values are typically at the _population_ level, not group-specific. ```{r message = TRUE} ggpredict(m1, "var_binom") ggpredict(m1, "var_cont") ``` To plot marginal effects, simply plot the returned results or use the pipe. ```{r message = FALSE} # save marginal effects in an object and plot me <- ggpredict(m1, "var_binom") plot(me) # plot using the pipe ggpredict(m1, "var_cont") %>% plot() ``` ## Logistic Mixed Effects Model with Interaction Term Next, we fit a model with an interaction between the binomial and continuous variable. ```{r} m2 <- glmer( outcome ~ var_binom * var_cont + (1 | group), data = dat, family = binomial(link = "logit") ) ``` To compute or plot marginal effects of interaction terms, simply specify these terms, i.e. the names of the variables, as character vector in the `terms`-argument. Since we have an interaction between `var_binom` and `var_cont`, the argument would be `terms = c("var_binom", "var_cont")`. However, the _first_ variable in the `terms`-argument is used as predictor along the x-axis. Marginal effects are then plotted for specific values or at specific levels from the _second_ variable. If the second variable is a factor, marginal effects for each level are plotted. If the second variable is continuous, representative values are chosen (typically, mean +/- one SD, see [marginal effects at specific values](introduction_effectsatvalues.html)). 
```{r message = TRUE} ggpredict(m2, c("var_cont", "var_binom")) %>% plot() ggpredict(m2, c("var_binom", "var_cont")) %>% plot() ``` ## Logistic Mixed Effects Model with Quadratic Interaction Term Now we fit a model with an interaction term, where the continuous variable is modelled as a quadratic term. ```{r} m3 <- glmer( outcome ~ var_binom * poly(var_cont, degree = 2, raw = TRUE) + (1 | group), data = dat, family = binomial(link = "logit") ) ``` Again, *ggeffects* automatically plots all high-order terms when these are specified in the `terms`-argument. Hence, the function call is identical to the previous examples with interaction terms, which had no polynomial term included. ```{r message = TRUE} ggpredict(m3, c("var_cont", "var_binom")) %>% plot() ``` As you can see, *ggeffects* also returned a message indicating that the plot may not look very smooth due to the involvement of polynomial or spline terms: > Model contains splines or polynomial terms. Consider using `terms="var_cont [all]"` to get smooth plots. See also package-vignette 'Marginal Effects at Specific Values'. This is because for mixed models, computing marginal effects with spline or polynomial terms may lead to memory allocation problems. If you are sure that this will not happen, add the `[all]`-tag to the `terms`-argument, as described in the message: ```{r message = TRUE} ggpredict(m3, c("var_cont [all]", "var_binom")) %>% plot() ``` The plot above now shows much smoother curves. ## Logistic Mixed Effects Model with Three-Way Interaction The last model does not produce very nice plots, but for the sake of demonstration, we fit a model with a three-way interaction, including polynomial and spline terms.
```{r message = FALSE} set.seed(321) dat <- data.frame( outcome = rbinom(n = 100, size = 1, prob = 0.35), var_binom = rbinom(n = 100, size = 1, prob = 0.5), var_cont = rnorm(n = 100, mean = 10, sd = 7), var_cont2 = rnorm(n = 100, mean = 5, sd = 2), group = sample(letters[1:4], size = 100, replace = TRUE) ) m4 <- glmer( outcome ~ var_binom * poly(var_cont, degree = 2) * ns(var_cont2, df = 3) + (1 | group), data = dat, family = binomial(link = "logit") ) ``` Since we have marginal effects for *var_cont* at the levels of *var_cont2* and *var_binom*, we not only have groups, but also facets to plot all three "dimensions". Three-way interactions are plotted simply by specifying all terms in question in the `terms`-argument. ```{r message = TRUE} ggpredict(m4, c("var_cont [all]", "var_cont2", "var_binom")) %>% plot() ``` ggeffects/inst/doc/practical_logisticmixedmodel.html0000644000176200001440000032564313614017655022543 0ustar liggesusers Practical example: Logistic Mixed Effects Model with Interaction Term


Daniel Lüdecke

2020-01-28

ggeffects/inst/doc/ggeffects.R0000644000176200001440000003543313614017616016026 0ustar liggesusers## ----set-options, echo = FALSE------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ knitr::opts_chunk$set( collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 3.5, message = FALSE, warning = FALSE) options(width = 800) if (!requireNamespace("ggplot2", quietly = TRUE) || !requireNamespace("splines", quietly = TRUE) || !requireNamespace("sjmisc", quietly = TRUE)) { knitr::opts_chunk$set(eval = FALSE) } ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- library(ggeffects) data(efc) fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc) ggpredict(fit, terms = "c12hour") ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- library(ggplot2) theme_set(theme_bw()) mydf <- ggpredict(fit, terms = "c12hour") ggplot(mydf, aes(x, predicted)) + geom_line() ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ggpredict(fit, terms = c("c12hour", "c172code")) ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- mydf <- ggpredict(fit, terms = c("c12hour", "c172code")) ggplot(mydf, aes(x, predicted, colour = group)) + geom_line() ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- mydf <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex")) mydf ggplot(mydf, aes(x, predicted, colour = group)) + geom_line() + facet_wrap(~facet) ## ----fig.height = 8----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- mydf <- ggpredict(fit, terms = c("c12hour", "c172code", "c161sex", "neg_c_7")) plot(mydf) ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- mydf <- ggpredict(fit) mydf ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- library(sjmisc) data(efc) # make categorical efc$c161sex <- to_factor(efc$c161sex) # fit model with interaction fit <- lm(neg_c_7 ~ c12hour + barthtot * c161sex, data = efc) # select only levels 30, 50 and 70 from continuous variable Barthel-Index mydf <- ggpredict(fit, terms = c("barthtot [30,50,70]", "c161sex")) ggplot(mydf, aes(x, predicted, colour = group)) + 
geom_line() ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- # fit model with 3-way-interaction fit <- lm(neg_c_7 ~ c12hour * barthtot * c161sex, data = efc) # select only levels 30, 50 and 70 from continuous variable Barthel-Index mydf <- ggpredict(fit, terms = c("c12hour", "barthtot [30,50,70]", "c161sex")) ggplot(mydf, aes(x, predicted, colour = group)) + geom_line() + facet_wrap(~facet) ## ----fig.height = 8----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- # fit model with 4-way-interaction fit <- lm(neg_c_7 
~ c12hour * barthtot * c161sex * c172code, data = efc) # marginal effects for all 4 interaction terms pr <- ggpredict(fit, c("c12hour", "barthtot", "c161sex", "c172code")) # use plot() method, easier than own ggplot-code from scratch plot(pr) ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- library(splines) data(women) fm1 <- lm(weight ~ bs(height, df = 5), data = women) dat <- ggpredict(fm1, "height") ggplot(dat, aes(x, predicted)) + geom_line() + geom_point() ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- data("lung", package = "survival") # remove category 3 (outlier) lung <- subset(lung, subset = ph.ecog %in% 0:2) lung$sex <- factor(lung$sex, labels = c("male", "female")) lung$ph.ecog <- factor(lung$ph.ecog, labels = c("good", "ok", "limited")) m <- survival::coxph(survival::Surv(time, status) ~ sex + age + ph.ecog, data = lung) # predicted risk-scores ggpredict(m, c("sex", "ph.ecog")) ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- # 
probability of survival ggpredict(m, c("sex", "ph.ecog"), type = "surv") ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- get_x_title(mydf) get_y_title(mydf) ggplot(mydf, aes(x, predicted, colour = group)) + geom_line() + facet_wrap(~facet) + labs( x = get_x_title(mydf), y = get_y_title(mydf), colour = get_legend_title(mydf) ) ggeffects/inst/doc/technical_stata.Rmd0000644000176200001440000000603513614010353017523 0ustar liggesusers--- title: "Technical Details: Different Output between Stata and ggeffects" author: "Daniel Lüdecke" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Technical Details: Different Output between Stata and ggeffects} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r set-options, echo = FALSE} knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 5, message = FALSE, warning = FALSE) if (!requireNamespace("magrittr", quietly = TRUE)) { knitr::opts_chunk$set(eval = FALSE) } ``` # Why is the output from Stata different from the output from ggeffect? Stata's equivalent to the marginal effects produced by _ggeffects_ is the `margins`-command. However, the results are not always identical. 
For models from non-gaussian families, point estimates for the marginal effects are identical, but the confidence intervals differ. Here is an explanation of why there is a difference. First, we fit a logistic regression model. ```{r} library(magrittr) set.seed(5) data <- data.frame( outcome = rbinom(100, 1, 0.5), var1 = rbinom(100, 1, 0.1), var2 = rnorm(100, 10, 7) ) m <- glm( outcome ~ var1 * var2, data = data, family = binomial(link = "logit") ) ``` ## Example with graphical output ### The Stata plot This is the code in Stata to produce a marginal effects plot. ```{r eval=FALSE} use data.dta, clear quietly logit outcome c.var1##c.var2 quietly margins, at(var2 = (-8(0.5)28) var1 = (0 1)) marginsplot ``` The resulting image looks like this. ```{r out.width="100%", echo=FALSE} knitr::include_graphics("vignette-stata-1.png", dpi = 72) ``` ### The ggeffects plot When we use _ggeffects_, the plot slightly differs. ```{r} library(ggeffects) ggpredict(m, c("var2", "var1")) %>% plot() ``` As we can see, the confidence intervals in the Stata plot fall outside the plausible range of `[0, 1]`, i.e. the uncertainty intervals from the Stata output cover probabilities above 1 and below 0, while `ggpredict()` computes confidence intervals _within_ the possible range. ## Conclusion It seems like Stata is getting the confidence intervals wrong. Predictions and standard errors returned in Stata are on the (transformed) response scale. Obviously, the confidence intervals are then computed as `estimate +/- 1.96 * standard error`, which may lead to confidence intervals that are out of reasonable bounds (e.g. above 1 or below 0 for predicted probabilities). The _transformed estimate_ (on the response scale) is always between 0 and 1, and the same is true for the _transformed standard errors_. However, adding or subtracting approx. 2 * _transformed_ SE to the _transformed_ estimate no longer ensures that the confidence intervals are within the correct range.
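In code, the two approaches can be contrasted for the model `m` from above (a sketch; the `newdata` values are arbitrary):

```r
nd <- data.frame(var1 = 1, var2 = 28)

# naive approach: estimate +/- 1.96 * SE on the response scale,
# which may produce bounds outside [0, 1]
pr_resp <- predict(m, newdata = nd, type = "response", se.fit = TRUE)
pr_resp$fit + c(-1.96, 1.96) * pr_resp$se.fit

# precise approach: build the interval on the linear predictor,
# then back-transform with the inverse link function (here: plogis)
pr_link <- predict(m, newdata = nd, type = "link", se.fit = TRUE)
plogis(pr_link$fit + c(-1.96, 1.96) * pr_link$se.fit)
```

The back-transformed interval necessarily stays within `(0, 1)`, because `plogis()` maps any value of the linear predictor into that range.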
The more precise way to do the calculation is to compute estimates, standard errors and confidence intervals on the (untransformed) scale of the linear predictor and then back-transform them to the response scale. ggeffects/inst/doc/technical_differencepredictemmeans.Rmd0000644000176200001440000000532413614010324023420 0ustar liggesusers--- title: "Technical Details: Difference between ggpredict() and ggemmeans()" author: "Daniel Lüdecke" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Technical Details: Difference between ggpredict() and ggemmeans()} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r set-options, echo = FALSE} knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 3.5, message = FALSE, warning = FALSE) options(width = 800) if (!requireNamespace("magrittr", quietly = TRUE) || !requireNamespace("sjmisc", quietly = TRUE)) { knitr::opts_chunk$set(eval = FALSE) } ``` `ggpredict()` and `ggemmeans()` compute predicted values for all possible levels or values of a model's predictors. Basically, `ggpredict()` wraps the `predict()`-method for the related model, while `ggemmeans()` wraps the `emmeans()`-method from the **emmeans**-package. Both `ggpredict()` and `ggemmeans()` do some data-preparation to bring the data in shape for the `newdata`-argument of `predict()` and the `at`-argument of `emmeans()`, respectively. It is recommended to read the [general introduction](ggeffects.html) first, if you haven't done so yet. For models without categorical predictors, the results from `ggpredict()` and `ggemmeans()` are identical (except some _slight_ differences in the associated confidence intervals, which are, however, negligible).
```{r} library(magrittr) library(ggeffects) data(efc) fit <- lm(barthtot ~ c12hour + neg_c_7, data = efc) ggpredict(fit, terms = "c12hour") ggemmeans(fit, terms = "c12hour") ``` As can be seen, the continuous predictor `neg_c_7` is held constant at its mean value, 11.83. For categorical predictors, `ggpredict()` and `ggemmeans()` behave differently. While `ggpredict()` uses the reference level of each categorical predictor to hold it constant, `ggemmeans()` - like `ggeffect()` - averages over the proportions of the categories of factors. ```{r} library(sjmisc) data(efc) efc$e42dep <- to_label(efc$e42dep) fit <- lm(barthtot ~ c12hour + neg_c_7 + e42dep, data = efc) ggpredict(fit, terms = "c12hour") ggemmeans(fit, terms = "c12hour") ``` In this case, one would obtain the same results for `ggpredict()` and `ggemmeans()` again, if `condition` is used to define specific levels at which variables, in our case the factor `e42dep`, should be held constant. ```{r} ggpredict(fit, terms = "c12hour") ggemmeans(fit, terms = "c12hour", condition = c(e42dep = "independent")) ``` Creating plots is as simple as described in the vignette [Plotting Marginal Effects](introduction_plotmethod.html). 
```{r} ggemmeans(fit, terms = c("c12hour", "e42dep")) %>% plot() ``` ggeffects/inst/doc/introduction_effectsatvalues.R0000644000176200001440000004024413614017623022050 0ustar liggesusers## ----set-options, echo = FALSE------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 4, message = FALSE, warning = FALSE) options(width = 800) if (!requireNamespace("ggplot2", quietly = TRUE) || !requireNamespace("lme4", quietly = TRUE) || !requireNamespace("sjlabelled", quietly = TRUE) || !requireNamespace("sjmisc", quietly = TRUE)) { knitr::opts_chunk$set(eval = FALSE) } ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- library(ggeffects) library(ggplot2) data(efc) fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc) mydf <- ggpredict(fit, terms = c("c12hour [30:80]", "c172code [1,3]")) mydf ggplot(mydf, aes(x, predicted, colour = group)) + geom_line() ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- data(mtcars) mpg_model <- lm(mpg ~ log(hp), data = mtcars) # x-values and predictions based on the log(hp)-values ggpredict(mpg_model, "hp") # 
x-values and predictions based on hp-values from 50 to 150 ggpredict(mpg_model, "hp [50:150]") ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- # range for x-values with .5-steps ggpredict(mpg_model, "hp [50:60 by=.5]") ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- data(efc) # short variable label, for plot attr(efc$c12hour, "label") <- "hours of care" fit <- lm(barthtot ~ c12hour * c161sex + neg_c_7, data = efc) mydf <- ggpredict(fit, terms = c("c161sex", "c12hour [meansd]")) plot(mydf) 
mydf <- ggpredict(fit, terms = c("c161sex", "c12hour [quart]")) plot(mydf) ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- # x-values and predictions based on exponentiated hp-values ggpredict(mpg_model, "hp [exp]") ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- # x-values and predictions based on doubled hp-values hp_double <- function(x) 2 * x ggpredict(mpg_model, "hp [hp_double]") ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- library(sjmisc) library(sjlabelled) library(lme4) data(efc) efc$c12hour <- std(efc$c12hour) efc$e15relat <- as_label(efc$e15relat) m <- lmer( barthtot ~ c12hour + I(c12hour^2) + neg_c_7 + c160age + c172code + (1 | e15relat), data = efc ) me <- ggpredict(m, terms = "c12hour") plot(me) ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- me <- ggpredict(m, terms = "c12hour [all]") plot(me) ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- me <- ggpredict(m, terms = "c12hour [n=2]") plot(me) ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- me <- ggpredict(m, terms = "c12hour [n=10]") plot(me) ## 
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- data(mtcars) mpg_model <- lm(mpg ~ log(hp) + disp, data = mtcars) # "disp" is held constant at its mean ggpredict(mpg_model, "hp [exp]") # "disp" is held constant at the value 200 ggpredict(mpg_model, "hp [exp]", condition = c(disp = 200)) ## ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- library(sjlabelled) library(lme4) data(efc) efc$e15relat <- as_label(efc$e15relat) m <- lmer(neg_c_7 ~ c12hour + c160age + c161sex + (1 | e15relat), data = efc) me
<- ggpredict(m, terms = "c12hour") plot(me) ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- me <- ggpredict(m, terms = "c12hour", type = "re") plot(me) ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- me <- ggpredict(m, terms = c("c12hour", "e15relat"), type = "re") plot(me) ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- me <- ggpredict(m, terms = c("c12hour", "e15relat [child,sibling]"), type = "re") plot(me) ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- data("sleepstudy") m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy) me <- ggpredict(m, terms = c("Days", "Subject [sample=8]"), type = "re") plot(me) ggeffects/inst/doc/practical_robustestimation.Rmd0000644000176200001440000000573113614010221022026 0ustar liggesusers--- title: "Practical 
example: (Cluster) Robust Standard Errors" author: "Daniel Lüdecke" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Practical example: (Cluster) Robust Standard Errors} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r set-options, echo = FALSE} knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 5, message = FALSE, warning = FALSE) if (!requireNamespace("sandwich", quietly = TRUE) || !requireNamespace("clubSandwich", quietly = TRUE)) { knitr::opts_chunk$set(eval = FALSE) } ``` This vignette demonstrates how to compute confidence intervals based on (cluster) robust variance-covariance matrices for standard errors. First, we load the required packages and create a sample data set with continuous predictors and a group (cluster) factor. ```{r} library(ggeffects) set.seed(123) # example taken from "?clubSandwich::vcovCR" m <- 8 cluster <- factor(rep(LETTERS[1:m], 3 + rpois(m, 5))) n <- length(cluster) X <- matrix(rnorm(3 * n), n, 3) nu <- rnorm(m)[cluster] e <- rnorm(n) y <- X %*% c(.4, .3, -.3) + nu + e dat <- data.frame(y, X, cluster, row = 1:n) # fit linear model model <- lm(y ~ X1 + X2 + X3, data = dat) ``` ## Predictions with normal standard errors In this example, we use the normal standard errors, as returned by `predict()`, to compute confidence intervals. ```{r message = TRUE} ggpredict(model, "X1") ``` ```{r message = FALSE} me <- ggpredict(model, "X1") plot(me) ``` ## Predictions with HC-estimated standard errors Now, we use `sandwich::vcovHC()` to estimate heteroskedasticity-consistent standard errors. To do so, the function name, `vcovHC()`, must be supplied to the `vcov.fun`-argument. `sandwich::vcovHC()`, in turn, offers different estimation types, which must be specified in `vcov.type`.
```{r message = TRUE} ggpredict(model, "X1", vcov.fun = "vcovHC", vcov.type = "HC0") ``` ```{r message = FALSE} me <- ggpredict(model, "X1", vcov.fun = "vcovHC", vcov.type = "HC0") plot(me) ``` ## Predictions with cluster-robust standard errors The last example shows how to define cluster-robust standard errors. These are based on `clubSandwich::vcovCR()`. Thus, `vcov.fun = "vcovCR"` is always required when estimating cluster robust standard errors. `clubSandwich::vcovCR()` has also different estimation types, which must be specified in `vcov.type`. Furthermore, `clubSandwich::vcovCR()` *requires* the `cluster`-argument, which must be specified in `vcov.args`: ```{r message = TRUE} ggpredict( model, "X1", vcov.fun = "vcovCR", vcov.type = "CR0", vcov.args = list(cluster = dat$cluster) ) ``` ```{r message = FALSE} me <- ggpredict( model, "X1", vcov.fun = "vcovCR", vcov.type = "CR0", vcov.args = list(cluster = dat$cluster) ) plot(me) ``` ggeffects/inst/doc/introduction_plotcustomize.R0000644000176200001440000003075713614017627021621 0ustar liggesusers## ----set-options, echo = FALSE------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 4, message = FALSE, warning = FALSE) 
options(width = 800) if (!requireNamespace("ggplot2", quietly = TRUE)) { knitr::opts_chunk$set(eval = FALSE) } ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- library(ggeffects) library(ggplot2) data(mtcars) m <- lm(mpg ~ gear + as.factor(cyl) + wt, data = mtcars) # continuous x-axis dat <- ggpredict(m, terms = c("gear", "wt")) # discrete x-axis dat_categorical <- ggpredict(m, terms = c("cyl", "wt")) # default plot plot(dat) ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- plot(dat) + 
labs( x = "Number of forward gears", y = "Miles/(US) gallon", title = "Predicted mean miles per gallon" ) ## ----eval=FALSE--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- # ggplot(data, aes(x = x, y = predicted, colour = group)) ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- plot(dat) + labs(colour = "Weight (1000 lbs)") ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- plot(dat, colors = "bw") + labs(linetype = "Weight (1000 lbs)") ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- plot(dat_categorical, colors = "bw") + labs(shape = "Weight (1000 lbs)") ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- plot(dat_categorical, connect.lines = TRUE) ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- plot(dat_categorical) + scale_x_continuous(labels = c("four", "six", "eight"), breaks = c(1, 2, 3)) ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- plot(dat) + scale_x_continuous(breaks = 3:5, limits = c(2, 6)) ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- plot(dat_categorical, breaks = seq(12, 30, 2), limits = c(12, 30)) ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- plot(dat) + scale_colour_brewer(palette = "Set1", labels = c("-1 SD", "Mean", "+1 SD")) ## ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- plot(dat, colors = "bw") + scale_linetype_manual(values = 15:17, labels = c("-1 SD", "Mean", "+1 SD")) ## 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- plot(dat_categorical, colors = "bw") + scale_shape_manual(values = 1:3, labels = c("-1 SD", "Mean", "+1 SD")) ggeffects/inst/doc/introduction_plotcustomize.html0000644000176200001440000033434613614017627022365 0ustar liggesusers Introduction: Customize Plot Appearance

Introduction: Customize Plot Appearance

Daniel Lüdecke

2020-01-28

This vignette demonstrates how to customize plots created with the plot()-method of the ggeffects-package.

plot() returns an object of class ggplot, so it is easy to apply further modifications to the resulting plot. You may want to load the ggplot2-package to do this: library(ggplot2).

Let’s start with a default-plot:
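The setup and default plot for this vignette appear in the accompanying purled script, introduction_plotcustomize.R (included in this package); the `dat` and `dat_categorical` objects created here are re-used throughout:

```r
library(ggeffects)
library(ggplot2)

data(mtcars)
m <- lm(mpg ~ gear + as.factor(cyl) + wt, data = mtcars)

# continuous x-axis
dat <- ggpredict(m, terms = c("gear", "wt"))
# discrete x-axis
dat_categorical <- ggpredict(m, terms = c("cyl", "wt"))

# default plot
plot(dat)
```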

Changing Plot and Axis Titles

The simplest change is to set the titles of the plot and the x- and y-axes. This can be done with ggplot2::labs():
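The corresponding chunk from the purled script (introduction_plotcustomize.R), re-using the `dat` object created in its setup:

```r
plot(dat) +
  labs(
    x = "Number of forward gears",
    y = "Miles/(US) gallon",
    title = "Predicted mean miles per gallon"
  )
```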

Changing the Legend Title

The legend-title can also be changed using the labs()-function. The legend in ggplot-objects refers to the aesthetic used for the grouping variable, which is by default the colour, i.e. the plot is constructed in the following way:
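The purled script sketches this mapping in a non-evaluated chunk:

```r
# not run - plot() builds the ggplot roughly like this
# ggplot(data, aes(x = x, y = predicted, colour = group))
```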

Plots with Default Colors

Hence, using colour in labs() changes the legend-title:
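From the purled script:

```r
plot(dat) + labs(colour = "Weight (1000 lbs)")
```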

Black-and-White Plots

For black-and-white plots, the group-aesthetic is mapped to different linetypes, not to different colours. Thus, the legend-title for black-and-white plots can be changed using linetype in labs():
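The corresponding chunk from the purled script:

```r
plot(dat, colors = "bw") + labs(linetype = "Weight (1000 lbs)")
```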

Black-and-White Plots with Categorical Predictor

If the variable on the x-axis is discrete for a black-and-white plot, the group-aesthetic is mapped to different shapes, so the following code must be used to change the legend title:
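As in the purled script, using the `dat_categorical` object from its setup chunk:

```r
plot(dat_categorical, colors = "bw") + labs(shape = "Weight (1000 lbs)")
```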

Changing the x-Axis Appearance

The x-axis for plots returned from plot() is always continuous, even for discrete x-axis-variables. The reason for this is that many users are used to plots that connect the data points with lines, which is only possible for continuous x-axes. You can do this using the connect.lines-argument:
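From the purled script:

```r
plot(dat_categorical, connect.lines = TRUE)
```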

Categorical Predictors

Since the x-axis is continuous (i.e. ggplot2::scale_x_continuous()), you can use scale_x_continuous() to modify the x-axis, and change breaks, limits or labels.
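From the purled script; the cylinder levels (4, 6 and 8) sit at positions 1 to 3 on the continuous axis, so `breaks` and `labels` refer to those positions:

```r
plot(dat_categorical) +
  scale_x_continuous(labels = c("four", "six", "eight"), breaks = c(1, 2, 3))
```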

Continuous Predictors

Or for continuous variables:
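The corresponding chunk from the purled script:

```r
plot(dat) + scale_x_continuous(breaks = 3:5, limits = c(2, 6))
```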

Changing the y-Axis Appearance

Arguments in ... are passed down to ggplot2::scale_y_continuous() (resp. ggplot2::scale_y_log10(), if log.y = TRUE), so you can control the appearance of the y-axis by putting the arguments directly into the call to plot():
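From the purled script; here `breaks` and `limits` are passed through ... to the y-axis scale:

```r
plot(dat_categorical, breaks = seq(12, 30, 2), limits = c(12, 30))
```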

Changing the Legend Labels

The legend labels can also be changed using a scale_*()-function from ggplot2. Depending on the colour setting (see section Changing the Legend Title), the following functions can be used to change the legend labels:

  • scale_colour_manual() resp. scale_colour_brewer()
  • scale_linetype_manual()
  • scale_shape_manual()

Since you overwrite an existing “color” scale, you typically need to provide the values- or palette-argument to manually set the colors, linetypes or shapes.

Plots with Default Colors

For plots using default colors:
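The corresponding chunk from the purled script:

```r
plot(dat) +
  scale_colour_brewer(palette = "Set1", labels = c("-1 SD", "Mean", "+1 SD"))
```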

Black-and-White Plots with Categorical Predictor

For black-and-white plots with categorical x-axis:
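From the purled script:

```r
plot(dat_categorical, colors = "bw") +
  scale_shape_manual(values = 1:3, labels = c("-1 SD", "Mean", "+1 SD"))
```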

ggeffects/inst/CITATION0000644000176200001440000000046413451124203014320 0ustar liggesusersbibentry( bibtype = "article", title = "ggeffects: Tidy Data Frames of Marginal Effects from Regression Models.", volume = "3", doi = "10.21105/joss.00772", number = "26", journal = "Journal of Open Source Software", author = person("Daniel", "Lüdecke"), year = "2018", pages = "772" )