psychometric/NAMESPACE

exportPattern("^[^\\.]")
import(multilevel, dplyr)
importFrom("purrr", "reduce")
importFrom("utils", "edit")
importFrom("nlme", "VarCorr", "lme")
importFrom("graphics", "abline")
importFrom("stats", "aov", "cor", "na.exclude", "na.omit", "pchisq",
  "pnorm", "pt", "qf", "qnorm", "sd", "var", "weighted.mean")

psychometric/README.md

# READ ME

The following changes have been made since version 0.1.0 of Applied Psychometric Theory

Changes in 0.1.1

1. An error in alpha.CI was fixed
2. Defaults in alpha.CI have been changed (level = .90, onesided = FALSE)
3. A new function 'MetaTable' has been added to summarize various MetaAnalysis functions

Changes in 0.1.2

1. An error in the df of the calculation of the upper CI for ICC1 and ICC2 was fixed

Changes in 2.0

1. My affiliation has changed from @umsl.edu to @statefarm.com
2. Item.Exam() was updated to include the item-total correlation without the item included

Changes in 2.1

1. In the help and R files, na.rm=T was replaced with na.rm=TRUE
2. An extra '(' was removed from the help file description of ICC1.lme()
3. An error in the help file for cRR() that caused a read error in newer versions was corrected

Changes in 2.2

1. Revised help and Rd files
2. Re-compiled to work under R 2.11.1

Changes in 2.3

1. Changed affiliation to @gmail.com
2. Used both purrr and dplyr functions to replace attach/detach of data
3. Updated function "GmeanRel" from nlme to "gmeanrel" - a dependency for functions ICC1, ICC2

Changes in 2.4

1. Label items standardized for compatibility with R 4.3.0

psychometric/data/HSJt35.rda

psychometric/man/CIrb.Rd

\name{CIrb}
\alias{CIrb}
\alias{CIrbar}
\title{ Confidence Interval about Sample Weighted Mean Correlation}
\description{
Produces a CI for the desired level of the sample weighted mean correlation using the appropriate standard error.
}
\usage{
CIrb(x, LEVEL = 0.95, homogenous = TRUE)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}}
  \item{LEVEL}{ Significance Level for constructing the CI, default is .95}
  \item{homogenous}{ Whether to use the homogeneous or heterogeneous SE }
}
\details{
The CI is constructed based on the uncorrected mean correlation. It is corrected for sampling error only. To get the CI for the mean correlation corrected for artifacts, use \code{\link{CredIntRho}}, but this is a credibility interval rather than a confidence interval. See Hunter & Schmidt (2004) for more details on the interpretation of the differences.

If the CI is computed about a heterogeneous mean correlation, one is implying that moderators are present, but that one can't determine what those moderators might be. Otherwise, strive to parse the studies into homogeneous subsets and create CIs about those means within the subsets.
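
For illustration, a minimal sketch of the general construction (assuming \code{se} holds the appropriate standard error, e.g., as obtained via \code{\link{SErbar}}):
\preformatted{
data(ABHt32)
rb <- rbar(ABHt32)                # sample weighted mean correlation
z  <- qnorm(1 - (1 - 0.95)/2)     # critical z, 1.96 for a 95 percent CI
c(LCL = rb - z*se, UCL = rb + z*se)
}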
}
\value{
A list containing:
  \item{LCL }{ Lower Confidence Limit of the CI}
  \item{UCL }{ Upper Confidence Limit of the CI}
}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\seealso{ \code{\link{SErbar}}, \code{\link{rbar}} }
\examples{
# From Arthur et al
data(ABHt32)
rbar(ABHt32)
CIrb(ABHt32)

# From Hunter et al
data(HSJt35)
rbar(HSJt35)
CIrb(HSJt35)
}
\keyword{ univar }
\keyword{ models }
\keyword{ htest }

psychometric/man/CI.tscore.Rd

\name{CI.tscore}
\alias{CI.tscore}
\alias{CI.obs}
\title{ Confidence Intervals for Test Scores }
\description{
Computes the CI for a desired level for observed scores and estimated true scores}
\usage{
CI.tscore(obs, mx, s, rxx, level = 0.95)
CI.obs(obs, s, rxx, level = 0.95)
}
\arguments{
  \item{obs}{ Observed test score on test x}
  \item{mx}{ mean of test x }
  \item{s}{ standard deviation of test x }
  \item{rxx}{ reliability of test x}
  \item{level}{ Significance Level for constructing the CI, default is .95}
}
\details{
\code{CI.tscore} makes use of \code{\link{Est.true}} to correct the observed score for regression to the mean and \code{\link{SE.Est}} for the correct standard error. \code{CI.tscore} also requires entry of the mean of the test scores for correcting for regression to the mean. \cr
\code{CI.obs} is much simpler in construction as it only makes use of the observed score without any corrections. \code{CI.obs} uses \code{\link{SE.Meas}}, the SEM that appears in most test manuals and text books.
}
\value{
Both functions return a table with 4 elements
  \item{SE. }{ Standard Error of the Estimate or SE of Measurement}
  \item{LCL }{ Lower confidence limit of the CI}
  \item{T.Score }{ (or OBS) Estimated True Score or Observed score}
  \item{UCL }{ Upper confidence limit of the CI}
}
\references{
Dudek, F. J. (1979). The continuing misinterpretation of the standard error of measurement. \emph{Psychological Bulletin, 86}, 335-337.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\note{
It is not in error to report any one of these. The misinterpretation is in taking the observed score and making inferences about the true score without (1) using the correct standard error and (2) correcting for regression toward the mean of the observed scores.
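
For illustration, a minimal by-hand sketch of both corrections, using the Dudek example below (a 95 percent critical value is assumed here):
\preformatted{
that <- 500*(1 - .9) + .9*700      # estimated true score = 680 (Est.true)
se   <- 100*sqrt(.9*(1 - .9))      # SE of estimation = 30 (SE.Est)
that + c(-1, 1)*qnorm(.975)*se     # roughly 621.2 to 738.8
}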
}
\section{Warning }{
Be cautious in the construction and interpretation of CIs. \cr
To obtain the coverage percentage for 1 SEM: 1-((1-pnorm(1))*2) \cr
To obtain the coverage percentage for 2 SEM: 1-((1-pnorm(2))*2) \cr
A 95 percent CI corresponds to 1.96 * SE \cr
1 * SE corresponds to .6827 \cr
2 * SE corresponds to 0.9772499 one-sided \cr
so, for two-sided, 2 * SE corresponds to 0.9544997 \cr
}
\seealso{ \code{\link{SE.Meas}} }
\examples{
# Examples from Dudek (1979)
# Suppose a test has mean = 500, SD = 100, rxx = .9
# If an individual scores 700 on the test
CI.tscore(700, 500, 100, .9, level=.68)
CI.obs(700, 100, .9, level=.68)
}
\keyword{ models }
\keyword{ htest }

psychometric/man/CAFAA.Rd

\name{CAFAA}
\alias{CAFAA}
\title{ Compound Attenuation Factor for Meta-Analytic Artifact Corrections }
\description{
The compound attenuation factor is computed as the product of the mean for each artifact distribution (square root of artifact) when correcting for attenuation in a correlation coefficient.
}
\usage{
CAFAA(x)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxx, Ryy, and u: see \code{\link{EnterMeta}}}
}
\details{
The compound attenuation factor is computed as the product of mean(a)*mean(b)*mean(c) where \cr
a = sqrt(Rxx) and is computed with the function \code{\link{aRxx}} \cr
b = sqrt(Ryy) and is computed with the function \code{\link{bRyy}} \cr
c = \eqn{sqrt((1-u^2)*rbar^2+u^2)} and is computed with the function \code{\link{cRR}}
}
\value{
A numeric value representing the compound attenuation factor
}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com}}
\note{ This value is used in the correction for artifacts of a correlation coefficient }
\seealso{ \code{\link{rhoCA}}, \code{\link{aRxx}}, \code{\link{bRyy}}, \code{\link{cRR}} }
\examples{
# From Arthur et al
data(ABHt32)
CAFAA(ABHt32)
rhoCA(ABHt32)

# From Hunter et al
data(HSJt35)
CAFAA(HSJt35)
rhoCA(HSJt35)
}
\keyword{ univar }
\keyword{ models }

psychometric/man/varResT.Rd

\name{varResT}
\alias{varResT}
\title{ True residual variance in correlations }
\description{
Residual variance in correlations after variance due to both sampling error and artifacts has been removed.
}
\usage{
varResT(x, aprox = FALSE)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy, n and artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}}
  \item{aprox}{ Logical test to determine if the approximate or exact var e is used }
}
\details{
\code{varResT} <- \code{varr} - \code{vare} - \code{varAV}

varResT is used in the computation of the variance in rho, \code{varRCA}
}
\value{
A numeric value representing the True residual variance
}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982).
\emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\seealso{ \code{\link{varr}}, \code{\link{vare}}, \code{\link{varAV}}, \code{\link{varRCA}} }
\examples{
# From Arthur et al
data(ABHt32)
varResT(ABHt32)

# From Hunter et al
data(HSJt35)
varResT(HSJt35)
}
\keyword{ univar }
\keyword{ models }

psychometric/man/r.nil.Rd

\name{r.nil}
\alias{r.nil}
\alias{r.null}
\title{ Nil hypothesis for a correlation }
\description{
Performs a two-tailed t-test of the H0 that r = 0
}
\usage{
r.nil(r, n)
}
\arguments{
  \item{r}{ Correlation coefficient}
  \item{n}{ Sample Size}
}
\value{
Returns a table with 4 elements
  \item{H0:rNot0 }{ correlation to be tested}
  \item{t }{ t value for the H0}
  \item{df }{ degrees of freedom}
  \item{p }{ p value}
}
\references{
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).} Mahwah, NJ: Lawrence Erlbaum.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\seealso{ \code{\link{rdif.nul}}, \code{\link{CIrdif}} }
\examples{
# From ch. 2 in Cohen et al (2003)
r.nil(.657, 15)
}
\keyword{ htest }
\keyword{ models }

psychometric/man/CIr.Rd

\name{CIr}
\alias{CIr}
\title{ Confidence Interval for a Correlation Coefficient }
\description{
Will construct the CI for a desired level given a correlation and sample size
}
\usage{
CIr(r, n, level = 0.95)
}
\arguments{
  \item{r}{ Correlation Coefficient}
  \item{n}{ Sample Size }
  \item{level}{ Significance Level for constructing the CI, default is .95}
}
\value{
  \item{LCL }{ Lower Confidence Limit of the CI}
  \item{UCL }{ Upper Confidence Limit of the CI}
}
\references{
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).} Mahwah, NJ: Lawrence Erlbaum.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\note{ Does not compute r; you must enter it into the function}
\seealso{ \code{\link{r2z}}, \code{\link{CIz}}, \code{\link{SEz}}, \code{\link{z2r}} }
\examples{
# From ch. 2 in Cohen et al (2003)
CIr(.657, 15)
}
\keyword{ htest }
\keyword{ models }

psychometric/man/Est.true.Rd

\name{Est.true}
\alias{Est.true}
\title{ Estimation of a True Score }
\description{
Given the mean and reliability of a test, this function estimates the true score based on an observed score. The estimation accounts for regression to the mean
}
\usage{
Est.true(obs, mx, rxx)
}
\arguments{
  \item{obs}{ an observed score on test x}
  \item{mx}{ mean of test x }
  \item{rxx}{ reliability of test x}
}
\details{
The estimated true score (that) is computed as \cr
that <- mx*(1-rxx)+rxx*obs \cr
When the obs score is much higher than the mean, that < obs \cr
When the obs score is much lower than the mean, that > obs
}
\value{
Estimated True score
}
\references{
Dudek, F. J. (1979). The continuing misinterpretation of the standard error of measurement. \emph{Psychological Bulletin, 86}, 335-337.
}
\author{ Thomas D.
Fletcher \email{t.d.fletcher05@gmail.com} }
\seealso{ \code{\link{CI.tscore}}, \code{\link{SE.Est}} }
\examples{
# Examples from Dudek (1979)
# Suppose a test has mean = 500, SD = 100, rxx = .9
# If an individual scores 700 on the test
Est.true(700, 500, .9)
# If an individual scores 400 on the test
Est.true(400, 500, .9)
}
\keyword{ models }
\keyword{ distribution }

psychometric/man/ClassUtil.Rd

\name{ClassUtil}
\alias{ClassUtil}
\title{ Classical Utility of a Test }
\description{
Calculate the classical utility of a test given a correlation, base-rate and selection ratio.}
\usage{
ClassUtil(rxy = 0, BR = 0.5, SR = 0.5)
}
\arguments{
  \item{rxy}{ Correlation of Test X with Outcome Y }
  \item{BR}{ Base Rate or prevalence without use of a test}
  \item{SR}{ Selection Ratio: Number selected out of those tested }
}
\details{
The degree of utility of using a test as a selection instrument over randomly selecting individuals can be reflected in the decision outcomes expected by using the selection instrument. Suppose you have a predictor (selection instrument) and a criterion (job performance). By regressing the criterion on the predictor, and selecting individuals based on some cut-off value, we have 4 possible outcomes. A = True Positives, B = True Negatives, C = False Negatives, and D = False Positives. The classical utility of using the test over current procedures (random selection) is:

[A / (A+D)] - [(A + C) / (A + B + C + D)]

Various manipulations of these relationships can be used to assist in decision making.
}
\value{
Returns a table with the following elements reflecting decision outcomes:
  \item{True Positives}{ Probability of correctly selecting a successful candidate }
  \item{False Negatives}{ Probability of incorrectly not selecting a successful candidate }
  \item{False Positives}{ Probability of incorrectly selecting an unsuccessful candidate }
  \item{True Negatives}{ Probability of correctly not selecting an unsuccessful candidate }
  \item{Sensitivity}{ True Positives / (True Positives + False Negatives)}
  \item{Specificity}{ True Negatives / (True Negatives + False Positives)}
  \item{\% of Decisions Correct}{ Percentage of correct decisions}
  \item{Proportion Selected Successful}{ Proportion of those selected expected to be successful}
  \item{\% Improvement over BR}{ Percentage of improvement using the test over random selection}
}
\references{
Murphy, K. R. & Davidshofer, C. O. (2005). \emph{Psychological testing: Principles and applications (5th ed.).} Saddle River, NJ: Prentice Hall.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\seealso{ \code{\link{Utility}} }
\examples{
# 50 percent of those randomly selected are expected to be successful
# A company need only select 1/10 applicants
# The correlation between test scores and performance is .35
ClassUtil(.35, .5, .1)
}
\keyword{ univar }

psychometric/man/rbar.Rd

\name{rbar}
\alias{rbar}
\title{ Sample size weighted mean correlation}
\description{
Computes the weighted mean correlation from a data object of the general format found in \code{\link{EnterMeta}}}
\usage{
rbar(x)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}}
}
\details{
For a set of correlations for each study (i), rbar is computed as:

sum(Ni*ri)/sum(Ni)

where Ni is the sample size of study i and ri is the correlation in study i.
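
In code, a minimal sketch of this computation (assuming a data object with columns \code{Rxy} and \code{n}, as in \code{\link{EnterMeta}}):
\preformatted{
data(ABHt32)
# sample-size weighted mean correlation, by hand
with(ABHt32, sum(n * Rxy) / sum(n))   # same value as rbar(ABHt32)
}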
}
\value{
Sample Weighted Average Correlation: uncorrected for artifacts other than sampling error
}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\note{ This is the mean correlation across studies corrected for sampling error. It is also known as bare-bones meta-analysis.}
\seealso{ \code{\link{varr}}, \code{\link{rhoCA}} }
\examples{
# From Arthur et al
data(ABHt32)
rbar(ABHt32)

# From Hunter et al
data(HSJt35)
rbar(HSJt35)
}
\keyword{ univar }
\keyword{ models }

psychometric/man/SBrel.Rd

\name{SpearmanBrown}
\alias{SBrel}
\alias{SBlength}
\alias{SpearmanBrown}
\title{ Spearman-Brown Prophecy Formulae}
\description{
These two functions are various manipulations of the Spearman-Brown Prophecy Formula. They are useful in determining reliability if test length is changed, or the length of a new test if reliability were to change.}
\usage{
SBrel(Nlength, rxx)
SBlength(rxxp, rxx)
}
\arguments{
  \item{Nlength}{ New length of a test in relation to original}
  \item{rxx}{ reliability of test x }
  \item{rxxp}{ reliability of desired (parallel) test x }
}
\details{
Nlength represents a ratio of new to original. If the new test has 10 items, and the original test has 5 items, Nlength is 2. Likewise, if the new test has 5 items, and the original test has 10 items, Nlength is .5.

In general, researchers should aim for reliabilities > .9. \code{SBrel} is used to address the question: if I increased/decreased my test length, what would the new reliability be? This is used when computing split-half reliabilities and when concerned about reducing test length. \cr
\code{SBlength} is used to address the question: how long must my test be (in relation to the original test) in order to achieve a desired reliability? \cr
The formulae for each are: \cr
rxxp <- Nlength*rxx/(1+(Nlength-1)*rxx) \cr
N <- rxxp*(1-rxx)/(rxx*(1-rxxp))
}
\value{
  \item{rxxp }{the prophesized reliability }
  \item{N }{Ratio of new test length to original test length }
}
\references{
Allen, M. J. & Yen, W. M. (1979). \emph{Introduction to measurement theory.} Monterey, CA: Brooks/Cole.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\seealso{ \code{\link{alpha}} }
\examples{
# Given a test with rxx = .7, 10 items
# Desire a test with rxx = .9; how many items are needed?
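# By hand, the formula above gives: .9*(1-.7)/(.7*(1-.9)) = 3.857
# (the call below is the packaged equivalent)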
new.length <- SBlength(.9, .7)
new.length * 10   # 39 items are needed
# What is the reliability of a test 1/2 as long?
SBrel(.5, .7)
}
\keyword{ univar }
\keyword{ models }

psychometric/man/CVF.Rd

\name{CVF}
\alias{CVF}
\title{ Compound Variance Factor for Meta-Analytic Artifact Corrections }
\description{
The compound variance factor is computed by summing the individual squared coefficients of variation for each artifact when correcting for attenuation in a correlation coefficient
}
\usage{
CVF(x)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns representing artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}}
}
\details{
The CVF is equal to scv(a) + scv(b) + scv(c), where scv is the squared coefficient of variation. The letters a, b, c represent the artifacts reliability in the predictor, reliability in the criterion, and restriction of range, respectively. The scv is computed as the variance in the artifact divided by the square of the average for the artifact.
}
\value{
a numeric value representing the compound variance factor
}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\seealso{ \code{\link{aRxx}}, \code{\link{bRyy}}, \code{\link{cRR}}, \code{\link{varAV}}, \code{\link{CAFAA}}}
\examples{
# From Arthur et al
data(ABHt32)
CVF(ABHt32)

# From Hunter et al
data(HSJt35)
CVF(HSJt35)
}
\keyword{ univar }
\keyword{ models }
\keyword{ htest }

psychometric/man/artifacts.Rd

\name{artifacts}
\alias{aRxx}
\alias{bRyy}
\alias{cRR}
\title{ Artifact Distributions Used in Meta-Analysis}
\description{
Three artifact distributions are computed with each of these three functions, which are then used to correct the observed sample-weighted mean correlation for attenuation. The artifacts are reliability in the predictor, reliability in the criterion, and range restriction.
}
\usage{
aRxx(x)
bRyy(x)
cRR(x)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxx, Ryy, and u: see \code{\link{EnterMeta}}}
}
\details{
\itemize{
  \item \emph{aRxx } Distribution of measurement error in the predictor: a = sqrt(Rxx)
  \item \emph{bRyy } Distribution of measurement error in the criterion: b = sqrt(Ryy)
  \item \emph{cRR } Degree of range restriction indicated by ratio u \cr (restricted SD/unrestricted SD): \eqn{c = sqrt((1-u^2)*rbar^2+u^2)}.
}
These are used in the computation of the compound attenuation factor \code{\link{CAFAA}} = mean(a)*mean(b)*mean(c).
}
\value{
A list containing:
  \item{ma }{ Mean of a (or b or c)}
  \item{va }{ Variance of a (or b or c)}
}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\note{ One usually will not use these functions alone, but rather use functions that make use of these correction factors. }
\seealso{ \code{\link{rhoCA}}, \code{\link{varAV}}, \code{\link{varResT}}, \code{\link{pvaaa}} }
\examples{
# From Arthur et al
data(ABHt32)
aRxx(ABHt32)
bRyy(ABHt32)
cRR(ABHt32)
rhoCA(ABHt32)

# From Hunter et al
data(HSJt35)
aRxx(HSJt35)
bRyy(HSJt35)
cRR(HSJt35)
rhoCA(HSJt35)
}
\keyword{ univar }
\keyword{ models }

psychometric/man/alpha.Rd

\name{alpha}
\alias{alpha}
\title{ Cronbach's Coefficient Alpha}
\description{
Coefficient alpha is a measure of internal consistency. It is a standard measure of reliability for tests.
}
\usage{
alpha(x)
}
\arguments{
  \item{x}{ Data.frame or matrix object with rows corresponding to individuals and columns to items }
}
\details{
You can specify any portion of a matrix or data.frame. For instance, if using a data.frame with numerous variables corresponding to items, one can specify subsets of those items. See examples below. \cr
alpha <- \eqn{k/(k-1)*(1-SumSxi/Sx)} \cr
where k is the number of items, Sx is the variance of the total test, and SumSxi is the sum of the variances for each item.
}
\value{
coefficient alpha}
\references{
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. \emph{Psychometrika, 16,} 297-334.
}
\author{Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\seealso{ \code{\link{alpha.CI}} }
\examples{
data(attitude)
alpha(attitude)
alpha(attitude[,1:5])
}
\keyword{ models }
\keyword{ univar }

psychometric/man/CI.Rsq.Rd

\name{CI.Rsq}
\alias{CI.Rsq}
\title{ Confidence Interval for R-squared }
\description{
Computes the confidence interval for a desired level for the squared-multiple correlation}
\usage{
CI.Rsq(rsq, n, k, level = 0.95)
}
\arguments{
  \item{rsq}{ Squared Multiple Correlation }
  \item{n}{ Sample Size }
  \item{k}{ Number of Predictors in Model }
  \item{level}{ Significance Level for constructing the CI, default is .95 }
}
\details{
The CI is constructed based on the approximate SE of Rsq \cr
\eqn{sersq <- sqrt((4*rsq*(1-rsq)^2*(n-k-1)^2)/((n^2-1)*(n+3)))}
}
\value{
Returns a table with 4 elements
  \item{Rsq }{ Squared Multiple Correlation}
  \item{SErsq }{ Standard error of Rsq}
  \item{LCL }{ Lower Confidence Limit of the CI}
  \item{UCL }{ Upper Confidence Limit of the CI}}
\references{
Olkin, I. & Finn, J. D. (1995). Correlation Redux. \emph{Psychological Bulletin, 118}, 155-164.

Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).} Mahwah, NJ: Lawrence Erlbaum.
}
\author{ Thomas D.
Fletcher \email{t.d.fletcher05@gmail.com} }
\note{ This is an adequate approximation for n > 60 }
\seealso{ \code{\link{CI.Rsqlm}} }
\examples{
# see section 3.6.2 Cohen et al (2003)
# 95 percent CI
CI.Rsq(.5032, 62, 4, level = .95)
# 80 percent CI
CI.Rsq(.5032, 62, 4, level = .80)
}
\keyword{ htest }
\keyword{ models }

psychometric/man/CredIntRho.Rd

\name{CredIntRho}
\alias{CredIntRho}
\title{ Credibility Interval for Meta-Analytic Rho}
\description{
Computes the credibility interval about the population correlation coefficient at the desired level.}
\usage{
CredIntRho(x, aprox = FALSE, level = 0.95)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy, n and artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}}
  \item{aprox}{ Logical test to determine if the approximate or exact var e is used}
  \item{level}{ Significance Level for constructing the CI, default is .95 }
}
\details{
The credibility interval is used for the detection of potential moderators. Intervals that are large or that include zero potentially reflect the presence of moderators. Credibility intervals are constructed about rho, whereas confidence intervals are generally constructed about rbar. See Hunter & Schmidt (2004) for a description of the different uses.

The credibility interval is computed as: rho +/- z[crit] * SD(rho), where rho is the corrected correlation, z[crit] is the critical z value (1.96 for 95\%), and SD(rho) is the sqrt(variance in rho).
}
\value{
  \item{LCL }{ Lower Confidence Limit of the CI}
  \item{UCL }{ Upper Confidence Limit of the CI}
}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com}}
\seealso{ \code{\link{rbar}}, \code{\link{rhoCA}}, \code{\link{CIrb}}, \code{\link{varRes}} }
\examples{
# From Arthur et al
data(ABHt32)
CredIntRho(ABHt32, aprox=TRUE)

# From Hunter et al
data(HSJt35)
CredIntRho(HSJt35)
}
\keyword{ univar }
\keyword{ models }
\keyword{ htest }

psychometric/man/cRRr.Rd

\name{cRRr}
\alias{cRRr}
\title{ Correction for Range Restriction }
\description{
Corrects a correlation for range restriction given population and sample standard deviations}
\usage{
cRRr(rr, sdy, sdyu)
}
\arguments{
  \item{rr}{ Observed or restricted correlation }
  \item{sdy}{ Standard deviation of a restricted sample }
  \item{sdyu}{ Standard deviation of an unrestricted sample }
}
\details{
When one of the variables used to compute a correlation has restricted variance, the correlation will be attenuated. This commonly occurs, for instance, when using incumbents (those already selected by previous procedures) to base decisions about the validity of new selection procedures.
Given u (the ratio of the unrestricted SD of one variable to the restricted SD of that variable), the following formula is used to correct for attenuation in a correlation coefficient: \cr
\eqn{rxy <- (rr*(sdyu/sdy))/sqrt(1+rr^2*((sdyu^2/sdy^2)-1))}}
\value{
  \item{unrestricted }{corrected correlation}
}
\references{
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).} Mahwah, NJ: Lawrence Erlbaum.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\note{ Do not confuse this function with the meta-analysis function cRR in this same package! }
\seealso{ \code{\link{cRR}} }
\examples{
# See section 2.10.3 of Cohen et al (2003)
cRRr(.25, 12, 5)

# Create two correlated variables
x <- rnorm(1000)
y <- 0.71*x + rnorm(1000)
cor(x, y)
# order by y and select 100 cases (a range-restricted subsample)
tmp <- cbind(x, y)[order(y, x), ][1:100, ]
rxyr <- cor(tmp[,"x"], tmp[,"y"])
# restricted rxy
rxyr
# correct for restriction of range
cRRr(rxyr, sd(tmp[,"y"]), sd(y))
}
\keyword{ htest }
\keyword{ models }

psychometric/man/CIrdif.Rd

\name{CIrdif}
\alias{CIrdif}
\title{ Confidence Interval for the difference in Correlation Coefficients }
\description{
Will construct the CI for a difference in two correlations for a desired level}
\usage{
CIrdif(r1, r2, n1, n2, level = 0.95)
}
\arguments{
  \item{r1}{ Correlation 1 }
  \item{r2}{ Correlation 2 }
  \item{n1}{ Sample size for \code{r1} }
  \item{n2}{ Sample size for \code{r2} }
  \item{level}{ Significance Level for constructing the CI, default is .95}
}
\details{
Constructs a confidence interval based on the standard error of the difference of two correlations \eqn{(r1 - r2)}: \eqn{sed <- sqrt((1-r1^2)/n1 + (1-r2^2)/n2)}}
\value{
Returns a table with 4 elements
  \item{DifR }{ Observed Difference in correlations}
  \item{SED }{ Standard error of the difference}
  \item{LCL }{ Lower Confidence Limit of the CI}
  \item{UCL }{ Upper Confidence Limit of the CI}
}
\references{
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).} Mahwah, NJ: Lawrence Erlbaum.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\seealso{ \code{\link{rdif.nul}} }
\examples{
# From ch. 2 in Cohen et al (2003)
CIrdif(.657, .430, 62, 143)
}
\keyword{ htest }
\keyword{ models }

psychometric/man/rhoCA.Rd

\name{rhoCA}
\alias{rhoCA}
\title{ Meta-Analytically Derived Correlation Coefficient Corrected for Artifacts}
\description{
This represents the population correlation coefficient free from attenuation due to artifacts (sampling error, range restriction, reliability in the predictor and criterion).}
\usage{
rhoCA(x)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy, n and artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}}
}
\details{
This is the sample weighted correlation coefficient \code{\link{rbar}} divided by the compound attenuation factor, \code{\link{CAFAA}}.
}
\value{
A numeric value representing the corrected correlation coefficient.
}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004).
\emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\seealso{ \code{\link{CAFAA}}, \code{\link{rbar}} }
\examples{
# From Arthur et al
data(ABHt32)
rhoCA(ABHt32)

# From Hunter et al
data(HSJt35)
rhoCA(HSJt35)
}
\keyword{ univar }
\keyword{ models }

psychometric/man/r2z.Rd

\name{r2z}
\alias{r2z}
\alias{FISHER r to z}
\title{ Fisher r to z' }
\description{
Converts a Pearson correlation coefficient to Fisher's z'}
\usage{
r2z(x)
}
\arguments{
  \item{x}{ Pearson correlation coefficient}
}
\details{
z' = .5 * log((1+r)/(1-r))
}
\value{
Fisher z'
}
\references{
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).} Mahwah, NJ: Lawrence Erlbaum.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\seealso{ \code{\link{z2r}}, \code{\link{CIr}} }
\examples{
# From ch. 2 in Cohen et al (2003)
r2z(.657)
}
\keyword{ htest }
\keyword{ models }

psychometric/man/Qrbar.Rd

\name{Qrbar}
\alias{Qrbar}
\alias{aprox.Qrbar}
\title{ Meta-Analytic Q statistic for r-bar }
\description{
Provides a chi-square test for significant variation in the sample weighted correlation, rbar}
\usage{
Qrbar(x)
aprox.Qrbar(x)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}}
}
\details{
Q is distributed as chi-square with df equal to the number of studies - 1. Multiple equations exist, presumably because of a need to do the calculations \sQuote{by hand} in the past. A significant Q statistic implies the presence of one or more moderating variables operating on the observed correlations.
}
\value{
A table containing the following items: \cr
  \item{CHISQ }{ Chi-square value}
  \item{df }{ degrees of freedom}
  \item{p-val }{ probability value}
}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\note{
\code{Qrbar} is computed as: \eqn{sum((((n-1)*(r-rb)^2)/(1-rb^2)^2),na.rm=TRUE)} \cr
\code{aprox.Qrbar} is computed as: \eqn{(N/(1-rb^2)^2)*vr} \cr
where n is the sample size of study i, N is the total sample size across studies, rb is \code{\link{rbar}}, r is the correlation of study i, and vr is \code{\link{varr}}.
}
\section{Warning }{The test is presented by Hunter et al. 1982, but is NOT recommended nor mentioned by Hunter & Schmidt (2004). The test is sensitive to the number of studies included in the meta-analysis. Large meta-analyses may find significant Q statistics when variation in the population is not present, and small meta-analyses may find lack of significant Q statistics when moderators are present.
Hunter & Schmidt (2004) recommend the credibility interval, \code{\link{CredIntRho}}, or the 75\% rule, \code{\link{pvse}}, as determinants of the presence of moderators.}
\seealso{ \code{\link{varr}}, \code{\link{vare}}, \code{\link{rbar}}, \code{\link{CredIntRho}}, \code{\link{pvse}}}
\examples{
# From Arthur et al
data(ABHt32)
aprox.Qrbar(ABHt32)

# From Hunter et al
data(HSJt35)
Qrbar(HSJt35)
aprox.Qrbar(HSJt35)
}
\keyword{ univar }
\keyword{ models }
\keyword{ htest }

psychometric/man/varRes.Rd

\name{varRes}
\alias{varRes}
\title{ Residual Variance in Meta-Analytic Correlation }
\description{
Computes the residual variance in the sample-weighted correlation coefficient by removing variance due to sampling error.}
\usage{
varRes(x)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}}
}
\details{
Computed as \code{varr} - \code{vare}. Useful in the construction of the SE for heterogeneous populations, \code{\link{SERHET}}}
\value{
A numeric value representing the residual variance
}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\seealso{ \code{\link{varr}}, \code{\link{vare}}, \code{\link{SERHET}} }
\examples{
# From Arthur et al
data(ABHt32)
varRes(ABHt32)

# From Hunter et al
data(HSJt35)
varRes(HSJt35)
}
\keyword{ univar }
\keyword{ models }

psychometric/man/FileDrawer.Rd

\name{FileDrawer}
\alias{FileDrawer}
\title{ File Drawer N }
\description{
Computes the number of 'lost' studies needed to render the observed meta-analytic correlation non-significant.
}
\usage{
FileDrawer(x, rc = 0.1)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}}
  \item{rc}{ cut-off correlation for which to make a comparison}
}
\details{
Use to detect availability bias in published correlations. It is computed as

n <- k * (rb/rc - 1)

where n is the file drawer n, k is the number of studies in the current meta-analysis, rb is rbar, and rc is the cut-off correlation for which you wish to make a comparison. For a test of the null hypothesis, use rc = 0. In many instances, practitioners are interested in reducing correlations to less than 1 percent of the variance accounted for (i.e., rc = .1).
}
\value{
  \item{"# of 'lost' studies needed" }{ File drawer N needed to change decision}
}
\references{
Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications.

Rosenthal, R. (1979). The "file-drawer problem" and tolerance for null results. \emph{Psychological Bulletin, 86,} 638-641.
}
\author{ Thomas D.
Fletcher \email{t.d.fletcher05@gmail.com} }
\seealso{ \code{\link{FunnelPlot}} }
\examples{
# From Arthur et al
data(ABHt32)
FileDrawer(ABHt32)

# From Hunter et al
data(HSJt35)
FileDrawer(HSJt35)
}
\keyword{ univar }
\keyword{ models }

psychometric/man/varAV.Rd

\name{varAV}
\alias{varAV}
\title{ Variance Due to Attenuating Artifacts}
\description{
Since the presence of artifacts may inflate the observed variance in correlations, one needs to compute the variance attributed to the artifacts.
}
\usage{
varAV(x)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy, n and artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}}
}
\details{
varAV is computed as \eqn{\code{rhoCA}^2 * \code{CAFAA}^2 * \code{CVF}}

varAV is used to compute the residual variance in correlations, \code{\link{varResT}}
}
\value{
A numeric value representing the variance due to attenuating artifacts}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\seealso{ \code{\link{CAFAA}}, \code{\link{rhoCA}}, \code{\link{CVF}} }
\examples{
# From Arthur et al
data(ABHt32)
varAV(ABHt32)

# From Hunter et al
data(HSJt35)
varAV(HSJt35)
}
\keyword{ univar }
\keyword{ models }

psychometric/man/CVratio.Rd

\name{CVratio}
\alias{CVratio}
\title{ Content Validity Ratio }
\description{
Computes Lawshe's CVR for determining whether items are essential or not.
}
\usage{
CVratio(NTOTAL, NESSENTIAL)
}
\arguments{
  \item{NTOTAL}{ Total number of Experts}
  \item{NESSENTIAL}{ Number of Experts indicating item 'essential' }
}
\details{
To determine content validity (in relation to job performance), a panel of subject matter experts will examine a set of items, indicating whether each item is essential, useful, or not necessary. The CVR is calculated to indicate whether the item is pertinent to the content validity. \cr
CVR values range from +1 to -1. Values closer to +1 indicate experts are in agreement that the item is essential to content validity.
}
\value{
Content Validity Ratio
}
\references{
Lawshe, C. H. (1975). A quantitative approach to content validity. \emph{Personnel Psychology, 28,} 563-575.
}
\note{ CVR = (Ne - N/2)/(N-1) }
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\examples{
# Using 5 Expert panelists (SMEs)
# The ratings for an item are as follows:
# Rater1 = Essential
# Rater2 = Essential
# Rater3 = Essential
# Rater4 = Useful
# Rater5 = Not necessary
#
# essential = 3
CVratio(5, 3)
}
\keyword{ univar }

psychometric/man/rdif.nul.Rd

\name{rdif.nul}
\alias{rdif.nul}
\title{ Null hypothesis for difference in two correlations }
\description{
Tests the hypothesis that two correlations are significantly different
}
\usage{
rdif.nul(r1, r2, n1, n2)
}
\arguments{
  \item{r1}{ Correlation 1}
  \item{r2}{ Correlation 2}
  \item{n1}{ Sample size for \code{r1} }
  \item{n2}{ Sample size for \code{r2} }
}
\details{
First converts r to z' for each correlation. Then constructs a z test for the difference:

z <- (z1 - z2)/sqrt(1/(n1-3)+1/(n2-3))}
\value{
Returns a table with 2 elements
  \item{zDIF }{ z value for the H0}
  \item{p }{ p value}
}
\references{
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).} Mahwah, NJ: Lawrence Erlbaum.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\note{ Does not test alternate hypotheses (e.g., difference = .1) }
\seealso{ \code{\link{r.nil}}, \code{\link{CIrdif}} }
\examples{
# From ch. 2 in Cohen et al (2003)
rdif.nul(.657, .430, 62, 143)
}
\keyword{ htest }
\keyword{ models }

psychometric/man/psychometric-package.Rd

\name{psychometric-package}
\alias{psychometric-package}
\alias{psychometric}
\alias{apt}
\docType{package}
\title{ Applied Psychometric Theory}
\description{
Contains functions useful for correlation theory, meta-analysis (validity-generalization), reliability, item analysis, inter-rater reliability, and classical utility}
\details{
\tabular{ll}{
Package: \tab psychometric \cr
Type: \tab Package \cr
Version: \tab 2.4 \cr
License: \tab GPL (version 2.0 or later) \cr
}
This package corresponds to the basic concepts encountered in an introductory course in Psychometric Theory at the Graduate level. It is especially useful for Industrial/Organizational Psychologists, but will be useful for any student or practitioner of psychometric theory. I originally developed this package to correspond with concepts covered in PSYC 7429, the Psychometric Theory course at the University of MO - St. Louis.
}
\author{
Thomas D. Fletcher\cr
Strategic Resources\cr
State Farm Insurance Cos.\cr
Maintainer: Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} \cr
}
\keyword{ package }
\seealso{
\code{multilevel-package}
\code{ltm-package}
\code{psy-package}
\code{polycor-package}
\code{nlme-package}
}
\examples{
# Convert Pearson r to Fisher z'
r2z(.51)
# Convert Fisher z' to r
z2r(.563)
# Construct a CI about a True Score
# Observed = 700, Test Ave. = 500, SD = 100, and reliability = .9
CI.tscore(700, 500, 100, .9)
# Compute the classical utility of a test
# Assuming base-rate = .5, selection ratio = .5 and rxy = .5
ClassUtil(rxy=.5, BR=.5, SR=.5)
# Examine test score items
data(TestScores)
item.exam(TestScores[,1:10], y = TestScores[,11], discrim=TRUE)
}

psychometric/man/ABHt32.Rd

\name{ABHt32}
\alias{ABHt32}
\docType{data}
\title{Table 3.2 from Arthur et al}
\description{
These data are used as an example in ch. 3 of Conducting Meta-Analysis using SAS. The data appear in tables 3.1 and 3.2 on pages 66 and 68. The example data are useful in illustrating simple meta-analysis concepts.
}
\usage{data(ABHt32)}
\format{
A data frame with 10 observations on the following 7 variables.
\itemize{
  \item \emph{study} Study code
  \item \emph{Rxy} Published Correlation
  \item \emph{n} Sample Size
  \item \emph{Rxx} Reliability of Predictor
  \item \emph{Ryy} Reliability of Criterion
  \item \emph{u} Range Restriction Ratio
  \item \emph{moderator} Gender
}}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I.
(2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.
}
\examples{
data(ABHt32)
str(ABHt32)
rbar(ABHt32)
FunnelPlot(ABHt32)
}
\keyword{datasets}

psychometric/man/pvse.Rd

\name{pvse}
\alias{pvse}
\title{ Percent of variance due to sampling error }
\description{
Ratio of sampling error variance to weighted variance in correlations for a meta-analysis. This value is compared to 75 (e.g., the 75\% rule) to determine the presence of moderators.
}
\usage{
pvse(x)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}}
}
\details{
\code{pvse} <- \code{\link{vare}}/\code{\link{varr}}*100
}
\value{
A single numeric value of class matrix representing the \% of variance accounted for by sampling error}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\seealso{ \code{\link{varr}}, \code{\link{vare}} }
\examples{
# From Arthur et al
data(ABHt32)
pvse(ABHt32)

# From Hunter et al
data(HSJt35)
pvse(HSJt35)
}
\keyword{ univar }
\keyword{ models }

psychometric/man/SE.Meas.Rd

\name{SE.Meas}
\alias{SE.Meas}
\alias{SE.Est}
\alias{SE.Pred}
\title{ Standard Errors of Measurement (test scores) }
\description{
These functions will calculate the three Standard Errors of Measurement as described by Dudek (1979). They are useful in constructing CIs about observed scores and true scores, and in predicting observed scores on parallel measures.}
\usage{
SE.Meas(s, rxx)
SE.Est(s, rxx)
SE.Pred(sy, rxx)
}
\arguments{
  \item{s}{ Standard Deviation in test scores on test x }
  \item{sy}{ Standard Deviation in test scores on parallel test y = x}
  \item{rxx}{ Reliability of test x }
}
\details{
Dudek (1979) notes that in practice, individuals often misinterpret the SEM. In fact, most textbooks misinterpret these measures. The SE.Meas \eqn{(s*sqrt(1-rxx))} is useful in the construction of CIs about observed scores, but should not be interpreted as indicating that the TRUE SCORE is necessarily included in the CI. The SE.Est \eqn{(s*sqrt(rxx*(1-rxx)))} is useful in the construction of CIs about the TRUE SCORE. The estimate of a CI for a TRUE SCORE also requires the calculation of a TRUE SCORE (due to regression to the mean) from observed scores. The SE.Pred \eqn{(sy*sqrt(1-rxx^2))} is useful in predicting the score on a parallel measure (Y) given a score on test X. SE.Pred is usually used to estimate the score of a re-test of an individual.
}
\value{
The returned value is the appropriate standard error
}
\references{
Dudek, F. J. (1979). The continuing misinterpretation of the standard error of measurement. \emph{Psychological Bulletin, 86}, 335-337.

Lord, F. M. & Novick, M. R. (1968). \emph{Statistical theories of mental test scores.} Reading, MA: Addison-Wesley.

Nunnally, J. C. & Bernstein, I. H. (1994). \emph{Psychometric Theory (3rd ed.).} New York: McGraw-Hill.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\note{
Since strictly parallel tests have the same SD, s and sy are equivalent in these functions. SE.Meas() is used by \code{\link{CI.obs}}. SE.Est() is used by \code{\link{CI.tscore}}. You must use \code{\link{Est.true}} to first compute the estimated true score from an observed score, accounting for regression to the mean.
}
\seealso{ \code{\link{Est.true}}, \code{\link{CI.obs}}, \code{\link{CI.tscore}} }
\examples{
# Examples from Dudek (1979)
# Suppose a test has mean = 500, SD = 100, rxx = .9
# If an individual scores 700 on the test
# The three SE are:
SE.Meas(100, .9)
SE.Est(100, .9)
SE.Pred(100, .9)
# CI about the true score
CI.tscore(700, 500, 100, .9)
# CI about the observed score
CI.obs(700, 100, .9)
}
\keyword{ htest }
\keyword{ distribution }

psychometric/man/alpha.CI.Rd

\name{alpha.CI}
\alias{alpha.CI}
\alias{CI.alpha}
\title{ Confidence Interval for Coefficient Alpha}
\description{
Computes a one-tailed (or two-tailed) CI at the desired level for coefficient alpha
}
\usage{
alpha.CI(alpha, k, N, level = 0.90, onesided = FALSE)
}
\arguments{
  \item{alpha}{ coefficient alpha to use for CI construction }
  \item{k}{ number of items }
  \item{N}{ sample size }
  \item{level}{ Significance Level for constructing the CI, default is .90 }
  \item{onesided}{ return a one-sided (one-tailed) test, default is FALSE }
}
\details{
By inputting alpha, the number of items, and the sample size, one can make inferences via a confidence interval. This can be used to compare two alpha coefficients (e.g., from two groups), or to compare alpha to some specified value (e.g., >= .7). onesided = FALSE renders a two-sided test (i.e., this is the difference between tails of .025/.975 and .05/.95)
}
\value{
Returns a table with 3 elements
  \item{LCL }{lower confidence limit of CI}
  \item{ALPHA }{coefficient alpha}
  \item{UCL }{upper confidence limit of CI}
}
\references{
Feldt, L. S., Woodruff, D. J., & Salih, F. A. (1987). Statistical inferences for coefficient alpha. \emph{Applied Psychological Measurement, 11,} 93-103.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com}}
\note{
Feldt et al. provide a number of procedures for making inferences about alpha (e.g., an F test of the null hypothesis). Since the CI is the most versatile, it is the only function created in this package
}
\section{ Warning }{You must first compute alpha and then enter it into the function. \code{alpha.CI} will not evaluate a data.frame or matrix object.
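
A minimal sketch of the intended workflow (the data.frame \code{items} is a hypothetical matrix of item responses):
\preformatted{
a <- alpha(items)   # compute alpha first
alpha.CI(a, k = ncol(items), N = nrow(items), level = .90)
}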
}
\seealso{ \code{\link{alpha}} }
\examples{
# From Feldt et al (1987)
# alpha = .79, #items = 26, #examinees = 41
# a two-tailed test, 90\% level
alpha.CI(.79, 26, 41)
}
\keyword{ models }
\keyword{ univar }

psychometric/man/CI.Rsqlm.Rd

\name{CI.Rsqlm}
\alias{CI.Rsqlm}
\title{ Confidence Interval for Rsq - from lm() }
\description{
Computes the CI for a desired level based on an object of class lm()
}
\usage{
CI.Rsqlm(obj, level = 0.95)
}
\arguments{
  \item{obj}{ object of a linear model }
  \item{level}{ Significance Level for constructing the CI, default is .95 }
}
\details{
Extracts the necessary information from the linear model object and uses \code{\link{CI.Rsq}}}
\value{
Returns a table with 4 elements
  \item{Rsq }{ Squared Multiple Correlation}
  \item{SErsq }{ Standard error of Rsq}
  \item{LCL }{ Lower Confidence Limit of the CI}
  \item{UCL }{ Upper Confidence Limit of the CI}
}
\references{
Olkin, I. & Finn, J. D. (1995). Correlation Redux. \emph{Psychological Bulletin, 118}, 155-164.

Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).} Mahwah, NJ: Lawrence Erlbaum.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\note{ This is an adequate approximation for n > 60 }
\seealso{ \code{\link{CI.Rsq}}}
\examples{
# Generate data
x <- rnorm(100)
z <- rnorm(100)
xz <- x*z
y <- .25*x - .25*z + .25*x*z + .25*rnorm(100)
# Create an lm() object
lm1 <- lm(y ~ x*z)
CI.Rsqlm(lm1)
}
\keyword{ htest }
\keyword{ models }

psychometric/man/item.exam.Rd

\name{item.exam}
\alias{item.exam}
\title{ Item Analysis }
\description{
Conducts an item-level analysis. Provides item-total correlations, standard deviation in items, difficulty, discrimination, and reliability and validity indices.}
\usage{
item.exam(x, y = NULL, discrim = FALSE)
}
\arguments{
  \item{x}{ matrix or data.frame of items }
  \item{y}{ Criterion variable }
  \item{discrim}{ Whether or not the discrimination of item is to be computed}
}
\details{
If someone is interested in examining the items of a dataset contained in data.frame x, and the criterion measure is also in data.frame x, one must parse the matrix or data.frame and specify each part into the function. See the example below. Otherwise, one must be sure that x and y are properly merged/matched. If one is not interested in assessing item-criterion relationships, simply leave out that portion of the call. The function does not check whether the items are dichotomously coded; this is user specified. As such, one can specify that items are binary when in fact they are not. This has the effect of computing the discrimination index for continuously coded variables. \cr
The difficulty index (p) is simply the mean of the item. When dichotomously coded, p reflects the proportion endorsing the item.
However, when continuously coded, p has a different interpretation.}
\value{
A table with rows representing each item and columns representing:
  \item{Sample.SD }{ Standard deviation of the item}
  \item{Item.total }{ Correlation of the item with the total test score }
  \item{Item.Tot.woi}{ Correlation of item with total test score (scored without item)}
  \item{Difficulty }{ Mean of the item (p) }
  \item{Discrimination }{ Discrimination of the item (u-l)/n }
  \item{Item.Criterion }{ Correlation of the item with the Criterion (y)}
  \item{Item.Reliab }{ Item reliability index}
  \item{Item.Rel.woi }{ Item reliability index (scored without item) }
  \item{Item.Validity }{ Item validity index }
}
\references{
Allen, M. J. & Yen, W. M. (1979). \emph{Introduction to measurement theory.} Monterey, CA: Brooks/Cole.
}
\author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} }
\note{
Most all textbooks suggest the point-biserial correlation for the item-total. Since the point-biserial is equivalent to the Pearson r, the \code{cor} function is used to render the Pearson r for each item-total. However, it might be suggested that the polyserial is more appropriate. For practical purposes, the Pearson is sufficient and is used here. \cr
If discrim = TRUE, then the discrimination index is computed and returned EVEN IF the items are not dichotomously coded. The interpretation of the discrimination index is then suspect. \code{\link{discrim}} computes the number of correct responses in the upper and lower groups by summation of the '1s' (correct responses). When data are continuous, the discrimination index represents the difference in the sum of the scores divided by the number in each group (1/3*N).}
\section{Warning }{
Be cautious when using data with missing values or small data sets. \cr
Listwise deletion is employed for both X (matrix of items to be analyzed) and Y (criterion). When the datasets are small, such listwise deletion can make a big impact. Further, since the upper and lower groups are defined as the upper and lower 1/3, the stability of this division of examinees is greatly increased with larger N.}
\seealso{ \code{\link{alpha}}, \code{\link{discrim}} }
\examples{
data(TestScores)
# Look at the data
TestScores
# Examine the items
item.exam(TestScores[,1:10], y = TestScores[,11], discrim=TRUE)
}
\keyword{ models }
\keyword{ univar }

psychometric/man/varr.Rd

\name{varr}
\alias{varr}
\title{ Sample Size weighted variance}
\description{
Computes the weighted variance in correlations from a data object of the general format found in \code{\link{EnterMeta}}}
\usage{
varr(x)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}}
}
\details{
For a set of correlations for each study (i), varr is computed as:

\eqn{sum(Ni*(ri-rbar)^2)/sum(Ni)}

where Ni is the sample size of study i, ri is the correlation in study i, and rbar is the weighted mean correlation.
}
\value{
Sample weighted variance in correlations: uncorrected for artifacts other than sampling error
}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982).
\emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} } \note{ This is the observed variance in correlations across studies; it is not yet corrected for sampling error. Estimating and removing the sampling error variance from it is the basis of bare-bones meta-analysis.} \seealso{ \code{\link{vare}}, \code{\link{rbar}} } \examples{ # From Arthur et al data(ABHt32) varr(ABHt32) # From Hunter et al data(HSJt35) varr(HSJt35) } \keyword{ univar } \keyword{ models } psychometric/man/Qrho.Rd0000644000176200001440000000452614243711746014752 0ustar liggesusers\name{Qrho} \alias{Qrho} \title{ Meta-Analytic Q statistic for rho } \description{ Provides a chi-square test for significant variation in the sample-weighted correlation corrected for attenuating artifacts} \usage{ Qrho(x, aproxe = FALSE) } \arguments{ \item{x}{ A matrix or data.frame with columns Rxy, n and artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}} \item{aproxe}{ Logical test to determine if the approximate or exact var e is used} } \details{ Q is distributed as chi-square with df equal to the number of studies - 1. A significant Q statistic implies the presence of one or more moderating variables operating on the observed correlations after corrections for artifacts. } \value{ A table containing the following items: \cr \item{CHISQ }{ Chi-square value} \item{df }{ degrees of freedom} \item{p-val }{ probability value} } \references{ Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum. Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} } \note{ Q is defined as: (k*vr)/(vav+ve) where k is the number of studies, vr is \code{\link{varr}}, vav is \code{\link{varAV}}, and ve is \code{\link{vare}} } \section{Warning }{The test is sensitive to the number of studies included in the meta-analysis. Large meta-analyses may find significant Q statistics when variation in the population is not present, and small meta-analyses may fail to find significant Q statistics when moderators are present. Hunter & Schmidt (2004) recommend the credibility interval, \code{\link{CredIntRho}}, or the 75\% rule, \code{\link{pvse}}, as determinants of the presence of moderators.} \seealso{ \code{\link{varr}}, \code{\link{vare}}, \code{\link{rbar}}, \code{\link{CredIntRho}}, \code{\link{pvse}}} \examples{ # From Arthur et al data(ABHt32) Qrho(ABHt32) # From Hunter et al data(HSJt35) Qrho(HSJt35) } \keyword{ univar } \keyword{ models } \keyword{ htest } psychometric/man/EnterMeta.Rd0000644000176200001440000000332614243711506015720 0ustar liggesusers\name{EnterMeta} \alias{EnterMeta} \title{ Enter Meta-Analysis Data} \description{ This function creates a data-entry template for the data object needed in the typical meta-analysis. The object will have the appropriate variable names. } \usage{ EnterMeta() } \details{ To create a data object appropriate for the meta-analysis functions in this package: Type \cr my.Meta.data <- EnterMeta() \cr Then use the data editor to enter data in the appropriate columns.
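If the data editor is not available, an equivalent object can be assembled directly; a minimal sketch, with made-up values and the column names defined below: \cr my.Meta.data <- data.frame(study = c("S1", "S2"), Rxy = c(.30, .25), n = c(100, 80), Rxx = c(.80, NA), Ryy = c(.75, NA), u = c(1, NA), moderator = NA) \cr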
} \value{ Does not return a value, but rather is used for naming the columns of a data.frame(). The final object (if saved) will contain: \cr \item{study }{ Enter Study Code or article name} \item{Rxy }{ Correlation coefficient} \item{n }{ Sample size for study} \item{Rxx }{ Reliability of predictor variable X } \item{Ryy }{ Reliability of criterion variable Y} \item{u }{ Degree of range restriction - ratio of restricted to unrestricted standard deviations} \item{moderator }{ moderator variable (if any)} } \author{Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} } \note{ This is the general format required for data objects used for all the meta-analysis functions in this package. If certain variables are empty (e.g., Rxx, u), then the appropriate correction is not made, but the placeholder must be there. Moderator is useful for the user to subset the data and re-run any functions. } \section{Warning }{ This function will not automatically save your data object. You must create the object using the assignment operator. } \seealso{ As an alternative, consider \code{\link{read.csv}} for importing data prepared elsewhere (e.g., Excel)} \examples{ # my.data <- EnterMeta() } \keyword{ manip } psychometric/man/ICC.CI.Rd0000644000176200001440000000337114243711572014727 0ustar liggesusers\name{ICC.CI} \alias{ICC.CI} \alias{ICC1.CI} \alias{ICC2.CI} \title{ Confidence interval for the Intra-class Correlation } \description{ Computes the CI at the desired level for the ICC1 and ICC2} \usage{ ICC1.CI(dv, iv, data, level = 0.95) ICC2.CI(dv, iv, data, level = 0.95) } \arguments{ \item{dv}{ The dependent variable of interest } \item{iv}{ cluster or grouping variable } \item{data}{ data.frame containing the data } \item{level}{ Significance Level for constructing the CI, default is .95} } \details{ Computes the ICC from a one-way ANOVA. The CI is then computed at the desired level using formulae provided by McGraw & Wong (1996). They use the terminology ICC(1) and ICC(k) for ICC1 and ICC2 respectively. } \value{ A table with 3 elements: \item{LCL }{ lower confidence limit of the CI} \item{ICC }{ intra-class correlation} \item{UCL }{ upper confidence limit of the CI} } \references{ McGraw, K. O. & Wong, S. P. (1996). Forming some inferences about some intraclass correlation coefficients. \emph{Psychological Methods, 1,} 30-46. Bliese, P. (2000). Within-group agreement, non-independence, and reliability: Implications for data aggregation and analysis. In K. J. Klein & S. W. J. Kozlowski (Eds.), \emph{Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions (pp. 349-381).} San Francisco: Jossey-Bass. } \author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com}} \seealso{ \code{\link{ICC.lme}}, \code{\link[multilevel]{ICC1}}, \code{\link[multilevel]{ICC2}} } \examples{ library(multilevel) data(bh1996) ICC1.CI(HRS, GRP, bh1996) ICC2.CI(HRS, GRP, bh1996) } \keyword{ models } \keyword{ univar } \keyword{ htest } psychometric/man/SEz.Rd0000644000176200001440000000176014243712124014532 0ustar liggesusers\name{SEz} \alias{SEz} \title{ Standard Error of Fishers z prime } \description{ Given a sample size, n, computes the approximate standard error for z prime. This is useful for constructing confidence intervals about a correlation. } \usage{ SEz(n) } \arguments{ \item{n}{ sample size } } \details{ SEz = 1/sqrt(n-3) } \value{ The approximate standard error for Fisher's z prime } \references{ Olkin, I. & Finn, J. D. (1995). Correlation Redux.
\emph{Psychological Bulletin, 118}, 155-164. Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).} Mahwah, NJ: Lawrence Erlbaum. } \author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} } \seealso{ \code{\link{r2z}}, \code{\link{CIr}}, \code{\link{CIz}}, \code{\link{z2r}} } \examples{ # From ch. 2 in Cohen et al (2003) zp <- r2z(.657) zp SEz(15) } \keyword{ htest } \keyword{ models } psychometric/man/varRCA.Rd0000644000176200001440000000277514243712516015153 0ustar liggesusers\name{varRCA} \alias{varRCA} \title{ Variance in Meta-Analytic Rho } \description{ Computes the estimate of the variance in the corrected correlation coefficient.} \usage{ varRCA(x, aprox = FALSE) } \arguments{ \item{x}{ A matrix or data.frame with columns Rxy, n and artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}} \item{aprox}{ Logical test to determine if the approximate or exact var e is used } } \details{ Variance in Rho is computed as: \eqn{\code{varResT} / \code{CAFAA}^2} This is used to construct credibility intervals for rho, \code{\link{CredIntRho}} } \value{ A numeric value representing the variance in the population correlation coefficient } \references{ Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum. Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} } \seealso{ \code{\link{rhoCA}}, \code{\link{CAFAA}}, \code{\link{varResT}}, \code{\link{varRes}}, \code{\link{CredIntRho}}} \examples{ # From Arthur et al data(ABHt32) varRCA(ABHt32) # From Hunter et al data(HSJt35) varRCA(HSJt35) } \keyword{ univar } \keyword{ models } psychometric/man/Utility.Rd0000644000176200001440000000376014243712154015501 0ustar liggesusers\name{Utility} \alias{Utility} \alias{MargUtil} \alias{TotUtil} \title{ Marginal and Total Utility of a Test} \description{ Computes the marginal or total utility of a test.} \usage{ MargUtil(Rxy, Sy, MXg, COST, Nselected) TotUtil(Rxy, Sy, MXg, COST, Nselected) } \arguments{ \item{Rxy}{ Correlation of Test X with Criterion Y } \item{Sy}{ Standard Deviation of Y in monetary units } \item{MXg}{ Mean of selected group on test X in standard score units } \item{COST}{ Total cost of testing } \item{Nselected}{ number of applicants selected} } \details{ \emph{Marginal utility} is the gain expected in the outcome (i.e., job performance), in monetary units, for a person from the predictor-selected subgroup compared to a person who is randomly selected. \emph{Total utility} is the total gain in the outcome (i.e., job performance), in monetary units, expected for those selected using the test. } \value{ Marginal or Total Utility of a Test (a numeric value in monetary units) } \references{ Cascio, W. F. & Aguinis, H. (2005). \emph{Applied Psychology in Human Resource Management (6th ed.)} Englewood Cliffs, NJ: Prentice-Hall. Murphy, K. R. & Davidshofer, C. O. (2005). \emph{Psychological testing: Principles and applications (5th ed.).} Saddle River, NJ: Prentice Hall. } \author{ Thomas D.
Fletcher \email{t.d.fletcher05@gmail.com} } \note{ The computations for marginal and total utility are: MU <- Rxy*Sy*MXg - COST/Nselected \cr TU <- Nselected*Rxy*Sy*MXg - COST \cr The computation of Sy should be done locally (within an organization) and is often difficult. } \seealso{ \code{\link{ClassUtil}} } \examples{ # Rxy = .35 # Each year 72 workers are hired # SD of performance in dollars is $4000 # 1 out of 10 applicants are selected # cost per test = $5 # average test score for those selected = 1.76 MargUtil(.35, 4000, 1.76, 720*5, 72) TotUtil(.35, 4000, 1.76, 720*5, 72) } \keyword{ univar } psychometric/man/discrim.Rd0000644000176200001440000000343614243711466015475 0ustar liggesusers\name{discrim} \alias{discrim} \title{ Item Discrimination } \description{ Discrimination is the ability of a specific item to distinguish between upper- and lower-ability individuals on a test} \usage{ discrim(x) } \arguments{ \item{x}{ matrix or data.frame of items to be examined. Rows represent persons; columns represent items } } \details{ The function takes item responses for a set of individuals, computes each person's total score, and uses it to separate high- and low-scoring individuals. The upper and lower groups are defined as the top and bottom 1/3 of the total score distribution. Discrimination is then computed and returned for each item using the formula: \cr (number correct in the upper group - number correct in the lower group) / size of each group } \value{ Discrimination index for each item in the data.frame or matrix analyzed. } \references{ Allen, M. J. & Yen, W. M. (1979). \emph{Introduction to measurement theory.} Monterey, CA: Brooks/Cole. } \author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} } \note{ \code{discrim} is used by \code{\link{item.exam}}. \code{discrim} is especially useful for dichotomously coded items such as correct/incorrect. If items are not dichotomously coded, the interpretation of \code{discrim} is less meaningful. } \seealso{ \code{\link{item.exam}} } \examples{ # see item.exam # Scores on a test for 12 individuals # 1 = correct item1 <- c(1,1,1,0,1,1,1,1,1,1,0,1) item2 <- c(1,0,1,1,1,1,1,1,1,1,1,0) item3 <- c(1,1,1,1,1,1,1,1,1,1,1,1) item4 <- c(0,1,0,1,0,1,0,1,1,1,1,1) item5 <- c(0,0,0,0,1,0,0,1,1,1,1,1) item6 <- c(0,0,0,0,0,0,1,0,0,1,1,1) item7 <- c(0,0,0,0,0,0,0,0,1,0,0,0) exam <- cbind(item1, item2, item3, item4, item5, item6, item7) discrim(exam) } \keyword{ models } \keyword{ univar } psychometric/man/SErbar.Rd0000644000176200001440000000331514243712112015202 0ustar liggesusers\name{SErbar} \alias{SErbar} \alias{SERHET} \alias{SERHOM} \title{ Standard Error for Sample Size Weighted Mean Correlation } \description{ The standard error of homogenous or heterogenous samples is computed to be used for construction of confidence intervals about the Sample Size Weighted Mean Correlation in meta-analysis. Use \code{SERHOM} if no moderators are present (population is homogenous), and use \code{SERHET} if moderators are present (population is heterogenous). } \usage{ SERHOM(x) SERHET(x) } \arguments{ \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}} } \details{ The formulae for each are: \cr SERHOM <- \eqn{(1-rb^2)/sqrt(N-k)} \cr SERHET <- \eqn{sqrt((1-rb^2)^2/(N-k)+varRes(x)/k)} where rb is \code{\link{rbar}}, N is the total sample size, k is the number of studies. } \value{ A numeric value, the standard error } \references{ Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.
Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} } \seealso{ \code{\link{CIrb}}, \code{\link{rbar}} } \examples{ # From Arthur et al data(ABHt32) SERHOM(ABHt32) SERHET(ABHt32) CIrb(ABHt32) # From Hunter et al data(HSJt35) SERHOM(HSJt35) SERHET(HSJt35) CIrb(HSJt35) } \keyword{ univar } psychometric/man/MetaTable.Rd0000644000176200001440000000355514243711640015675 0ustar liggesusers\name{MetaTable} \alias{MetaTable} \title{ Summary function for 'Complete' Meta-Analysis} \description{ Computes and returns the results of the major functions involved in a meta-analysis. It is generic in the sense that no options are available to alter the defaults. } \usage{ MetaTable(x) } \arguments{ \item{x}{ A matrix or data.frame with columns Rxy, n and artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}} } \details{ For a set of correlations for each study (i), the following calculations are made and returned: r-bar \code{\link{rbar}}, variance in r \code{\link{varr}}, variance due to sampling error (not approximated) \code{\link{vare}}, percent of variance due to sampling error \code{\link{pvse}}, 95\% CI for r-bar (using both the heterogenous and homogenous SE) \code{\link{CIrb}}, rho (corrected r-bar) \code{\link{rhoCA}}, variance in rho \code{\link{varRCA}}, percent of variance attributable to artifacts \code{\link{pvaaa}}, 80\% Credibility interval \code{\link{CredIntRho}} } \value{ Data.frame with various statistics returned - see details above} \references{ Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum. Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} } \seealso{ \code{\link{rbar}}, \code{\link{rhoCA}} } \examples{ # From Arthur et al data(ABHt32) MetaTable(ABHt32) # From Hunter et al data(HSJt35) MetaTable(HSJt35) } \keyword{ univar } \keyword{ models } psychometric/man/TestScores.Rd0000644000176200001440000000242111427121000016110 0ustar liggesusers\name{TestScores} \alias{TestScores} \docType{data} \title{Fictitious Test Scores for Illustrative Purposes} \description{ These data were created to correspond to scores for 30 examinees on 10 items of test X plus a score on criterion Y. } \usage{data(TestScores)} \format{ A matrix with 30 observations on the following 11 variables. \describe{ \item{\code{i1}}{ item1 on test x} \item{\code{i2}}{ item2 on test x} \item{\code{i3}}{ item3 on test x} \item{\code{i4}}{ item4 on test x} \item{\code{i5}}{ item5 on test x} \item{\code{i6}}{ item6 on test x} \item{\code{i7}}{ item7 on test x} \item{\code{i8}}{ item8 on test x} \item{\code{i9}}{ item9 on test x} \item{\code{i10}}{ item10 on test x} \item{\code{y}}{ Score on criterion Y} } } \details{ These data are constructed such that items 1 - 10 are coded 0,1 for incorrect/correct responses.
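Because the items are scored 0/1, each item's difficulty is simply its mean; for example, \code{colMeans(TestScores[,1:10])} should reproduce the Difficulty column reported by \code{\link{item.exam}}.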
The data illustrate that some items are better for maintaining internal consistency, whereas others may be more useful for relating to external criteria. } \seealso{\code{\link{item.exam}}} \examples{ data(TestScores) str(TestScores) item.exam(TestScores[,1:10], y = TestScores[,11], discrim=TRUE) alpha(TestScores[,1:10]) } \keyword{datasets} psychometric/man/FunnelPlot.Rd0000644000176200001440000000255614243711540016124 0ustar liggesusers\name{FunnelPlot} \alias{FunnelPlot} \title{ Funnel Plot for Meta-Analysis } \description{ Produces a simple x-y plot corresponding to the correlation and sample size. A vertical line is produced representing the sample weighted correlation. } \usage{ FunnelPlot(x) } \arguments{ \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}} } \details{ A plot showing no evidence of availability bias will resemble a funnel, growing smaller at the top and larger at the bottom of the plot. A plot showing evidence of availability bias will not resemble a funnel. } \value{ a plot } \references{ Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum. Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} } \seealso{ \code{\link{FileDrawer}} } \examples{ # From Arthur et al data(ABHt32) FunnelPlot(ABHt32) # From Hunter et al data(HSJt35) FunnelPlot(HSJt35) } \keyword{ univar } \keyword{ models } psychometric/man/pvaaa.Rd0000644000176200001440000000277414243711670015130 0ustar liggesusers\name{pvaaa} \alias{pvaaa} \title{ Percent of Variance Accounted for by Artifacts in Rho } \description{ Computes the percentage variance attributed to attenuating artifacts (sampling error, restriction of range, reliability in predictor and criterion).} \usage{ pvaaa(x, aprox = FALSE) } \arguments{ \item{x}{ A matrix or data.frame with columns Rxy, n and artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}} \item{aprox}{ Logical test to determine if the approximate or exact var e is used} } \details{ Percent of variance is computed as: (\code{vare} + \code{varAV}) / \code{varr} * 100 } \value{ A numeric value representing the percent of variance accounted for by artifacts } \references{ Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum. Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D.
Fletcher \email{t.d.fletcher05@gmail.com} } \seealso{\code{\link{vare}}, \code{\link{varAV}}, \code{\link{varr}}, \code{\link{pvse}} } \examples{ # From Arthur et al data(ABHt32) pvaaa(ABHt32) # From Hunter et al data(HSJt35) pvaaa(HSJt35) } \keyword{ univar } \keyword{ models } psychometric/man/ICC.lme.Rd0000644000176200001440000000533214511312410015176 0ustar liggesusers\name{ICC.lme} \alias{ICC.lme} \alias{ICC1.lme} \alias{ICC2.lme} \title{ Intraclass Correlation Coefficient from a Mixed-Effects Model } \description{ ICC1 and ICC2 computed from a lme() model. } \usage{ ICC1.lme(dv, grp, data) ICC2.lme(dv, grp, data, weighted = FALSE) } \arguments{ \item{dv}{ The dependent variable of interest } \item{grp}{ cluster or grouping variable } \item{data}{ data.frame containing the data } \item{weighted}{ Whether or not a weighted mean is used in the calculation of ICC2 } } \details{ First an lme() model is fit to the data. Then ICC1 is computed as \eqn{t00/(t00 + sigma^2)}, where t00 is the intercept variance of the model and \eqn{sigma^2} is the residual variance for the model. The ICC2 is computed by first computing the ICC2 for each group, \eqn{t00/(t00 + sigma^2/nj)}, where nj is the size of group j. The mean across all groups is then taken to be the ICC2. However, one can specify that the mean should be weighted by group size such that larger groups are given more weight. The calculation of the individual group ICC2 is done by Bliese's \code{\link[multilevel]{gmeanrel}} function. An alternate specification not used here, but sometimes seen in the literature for ICC2, is to use the formula above for the total data set, but replace nj with the average group size. This is the method used in Bliese's \code{\link[multilevel]{mult.icc}}. } \value{ ICC1 or ICC2 } \references{ Bliese, P. (2000). Within-group agreement, non-independence, and reliability: Implications for data aggregation and analysis. In K. J. Klein & S. W. J. Kozlowski (Eds.), \emph{Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions (pp. 349-381).} San Francisco: Jossey-Bass. } \author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} } \note{ ICC1.lme and ICC2.lme should in principle be equal to an ICC computed from a one-way ANOVA only when the data are balanced (equal group sizes for all groups and no missing data). The ICC.lme should be a more accurate measure of ICC in all other instances. The three specifications of ICC2 mentioned above (details) will be similar but not exactly equal because of group variability. } \section{Warning }{ If the data used are attached, you will sometimes receive a warning that can be ignored. The warning states that the following variables ... are masked. This is because the function first attaches the data and then detaches it within the function. } \seealso{ \code{\link{ICC.CI}}, \code{\link[multilevel]{mult.icc}}, \code{\link[multilevel]{gmeanrel}} } \examples{ library(nlme) library(multilevel) data(bh1996) ICC1.lme(HRS, GRP, data=bh1996) ICC2.lme(HRS, GRP, data=bh1996) } \keyword{ models } \keyword{ univar } psychometric/man/CIz.Rd0000644000176200001440000000175114243712732014523 0ustar liggesusers\name{CIz} \alias{CIz} \title{ Confidence Interval for Fisher z' } \description{ Constructs a CI for a specified level about z'.
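The limits are computed as z' plus and minus the normal quantile for the chosen level times \code{\link{SEz}}; for level = .95 this is z' +/- 1.96/sqrt(n-3).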
This is useful for constructing a CI for a correlation.} \usage{ CIz(z, n, level = 0.95) } \arguments{ \item{z}{ Fishers z'} \item{n}{ Sample Size } \item{level}{ Significance Level for constructing the CI, default is .95} } \value{ \item{LCL }{ Lower Confidence Limit of the CI} \item{UCL }{ Upper Confidence Limit of the CI} } \references{ Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).} Mahwah, NJ: Lawrence Erlbaum. } \author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} } \seealso{ \code{\link{r2z}}, \code{\link{CIr}}, \code{\link{SEz}}, \code{\link{z2r}} } \examples{ # From ch. 2 in Cohen et al (2003) zp <- r2z(.657) CIz(zp, 15) } \keyword{ htest } \keyword{ models } psychometric/man/z2r.Rd0000644000176200001440000000146414243712310014544 0ustar liggesusers\name{z2r} \alias{z2r} \alias{Fisher z to r} \title{ Fisher z' to r} \description{ Converts a Fishers z' to a Pearson correlation coefficient } \usage{ z2r(x) } \arguments{ \item{x}{ z' (Fishers z prime) } } \details{ r = (exp(2*z)-1)/(exp(2*z)+1) } \value{ A Pearson Correlation coefficient } \references{ Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).} Mahwah, NJ: Lawrence Erlbaum. } \author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} } \seealso{ \code{\link{r2z}}, \code{\link{CIr}}, \code{\link{CIz}}, \code{\link{SEz}} } \examples{ # From ch. 2 in Cohen et al (2003) zp <- r2z(.657) zp z2r(zp) } \keyword{ htest } \keyword{ models } psychometric/man/HSJt35.Rd0000644000176200001440000000262111427121000014774 0ustar liggesusers\name{HSJt35} \alias{HSJt35} \docType{data} \title{ Table 3.5 Hunter et al.} \description{ This is a useful and fictitious example for conducting meta-analysis. It appeared in Hunter et al (1982)} \usage{data(HSJt35)} \format{ A data frame with 8 observations on the following 7 variables. \itemize{ \item \emph{study} Study code \item \emph{Rxy} Published correlation \item \emph{n} Sample size \item \emph{Rxx} Reliability of predictor \item \emph{Ryy} Reliability of criterion \item \emph{u} Range Restriction Ratio \item \emph{moderator} none }} \details{ This example has been replicated a number of times (e.g., Hunter & Schmidt, 2004). It is useful in illustrating the basic concepts of validity generalization. The data can be used to demonstrate bare-bones MA as well as correction for artifacts. This data format is the one required by the R functions in the psychometric package. } \references{ Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications.
} \examples{ data(HSJt35) rbar(HSJt35) FunnelPlot(HSJt35) CredIntRho(HSJt35) } \keyword{datasets} psychometric/man/vare.Rd0000644000176200001440000000413714243712542014773 0ustar liggesusers\name{vare} \alias{vare} \alias{aprox.vare} \alias{vare36} \title{ Sampling Error Variance} \description{ Computes sampling error variance in correlations from a data object of the general format found in \code{\link{EnterMeta}} } \usage{ vare(x) aprox.vare(x) vare36(x) } \arguments{ \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}} } \details{ \code{vare} is the 'core' equation for estimating the sampling error variance. Presumably because of the history of meta-analysis and lack of desktop computing power, hand-calculations were needed. Thus, two additional equations were developed. The \code{aprox.vare} appears in many textbooks and is often used (Arthur et al.). Another variation is presented by Hunter & Schmidt (2004) as their equation 3.6, \code{vare36}. } \value{ Sampling error variance (exact, approximate, or alternate approximate) } \references{ Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum. Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D. Fletcher \email{t.d.fletcher05@gmail.com} } \note{ The equations for each function are: \cr vare <- \eqn{sum(n*(1-rb^2)^2/(n-1),na.rm=TRUE)/sum(n,na.rm=TRUE)} \cr aprox.vare <- \eqn{(1-rb^2)^2/(mean(n, na.rm=TRUE)-1)} \cr vare36 <- \eqn{((1-rb^2)^2*k)/T} where k is the number of studies and T is the total sample size. These are only presented here for completeness. The recommended equation is \code{vare}. } \seealso{ \code{\link{varr}}, \code{\link{rbar}} } \examples{ # From Arthur et al data(ABHt32) vare(ABHt32) aprox.vare(ABHt32) vare36(ABHt32) # From Hunter et al data(HSJt35) vare(HSJt35) aprox.vare(HSJt35) vare36(HSJt35) } \keyword{ univar } \keyword{ models } psychometric/DESCRIPTION0000644000176200001440000000104114522003402014465 0ustar liggesusersPackage: psychometric Type: Package Title: Applied Psychometric Theory Version: 2.4 Depends: dplyr, multilevel, purrr, nlme Author: Thomas D. Fletcher Maintainer: Thomas D. Fletcher <t.d.fletcher05@gmail.com> Description: Contains functions useful for correlation theory, meta-analysis (validity-generalization), reliability, item analysis, inter-rater reliability, and classical utility.
License: GPL (>= 2) Packaged: 2023-11-05 19:26:31 UTC; jodydaniel Repository: CRAN NeedsCompilation: no Date/Publication: 2023-11-05 21:20:02 UTC psychometric/R/0000755000176200001440000000000014243503520013162 5ustar liggesuserspsychometric/R/pvse.R0000644000176200001440000000023511427121000014250 0ustar liggesusers"pvse" <- function (x) { ve <- vare(x) vr <- varr(x) pv <- ve/vr*100 mat <- matrix(pv) colnames(mat) <- "Compare to > 75%" return(mat) } psychometric/R/CredIntRho.R0000644000176200001440000000037611427121000015302 0ustar liggesusers"CredIntRho" <- function(x, aprox=FALSE, level=.95) { r <- rhoCA(x) if (!aprox) { vr <- varRCA(x)} else { vr <- varRCA(x,T)} zs <- - qnorm((1-level)/2) sdr <- sqrt(vr) lcl <- r - zs * sdr ucl <- r + zs * sdr return(list(lcl,ucl)) } psychometric/R/varRes.R0000644000176200001440000000015211427121000014535 0ustar liggesusers"varRes" <- function(x) { varr <- varr(x) vare <- vare(x) vr <- varr - vare return(vr) } psychometric/R/vare36.R0000644000176200001440000000023611427121000014402 0ustar liggesusers"vare36" <- function(x) { n <- x$n rb <- rbar(x) T <- sum(n,na.rm=TRUE) k <- length (x$Rxy[!(is.na(x$Rxy))]) ve <- ((1-rb^2)^2*k)/T return(ve) } psychometric/R/varRCA.R0000644000176200001440000000024011427121000014405 0ustar liggesusers"varRCA" <- function(x, aprox=FALSE) { if (!aprox) {vrt <- varResT(x)} else {vrt <- varResT(x, T)} aa <- CAFAA(x) vr <- vrt/aa^2 return(vr) } psychometric/R/SBlength.R0000644000176200001440000000013111427121000014775 0ustar liggesusers"SBlength" <- function(rxxp, rxx) { N <- rxxp*(1-rxx)/(rxx*(1-rxxp)) return(N) } psychometric/R/CIrb.R0000644000176200001440000000044311427121000014115 0ustar liggesusers"CIrb" <- function (x, LEVEL=.95, homogenous=TRUE) { rb <- rbar(x) noma <- 1 - LEVEL if (homogenous) {serb <- SERHOM(x)} else {serb <- SERHET(x)} zs <- -qnorm(noma/2) merb <- zs*serb lcl <- rb - merb ucl <- rb + merb mat <- list(lcl, ucl) return(as.numeric(mat)) } psychometric/R/SERHOM.R0000644000176200001440000000023211427121000014267 0ustar liggesusers"SERHOM" <- function (x) { N <- sum(x$n,na.rm=TRUE) rb <- rbar(x) k <- length (x$Rxy[!(is.na(x$Rxy))]) se <- (1-rb^2)/sqrt(N-k) return(se) } psychometric/R/r2z.R0000644000176200001440000000006411427121000014012 0ustar liggesusers"r2z" <- function (x) { .5 * log((1+x)/(1-x)) } psychometric/R/FunnelPlot.R0000644000176200001440000000025511427121000015365 0ustar liggesusers"FunnelPlot" <- function(x) { rxy <- x$Rxy N <- x$n rb <- rbar(x) plot(rxy,N, xlab="Effect Sizes", ylab="Sample Sizes", main="Funnel Plot") abline(v=rb) } psychometric/R/CAFAA.R0000644000176200001440000000017111427121000014067 0ustar liggesusers"CAFAA" <- function(x) { a <- aRxx(x)[[1]] b <- bRyy(x)[[1]] c <- cRR(x)[[1]] AA <- a*b*c return(AA) } psychometric/R/ClassUtil.R0000644000176200001440000000116011427121000015176 0ustar liggesusers"ClassUtil" <- function (rxy = 0, BR = .5, SR = .5) { pTP <- BR*SR + rxy*sqrt(BR*(1-BR) * SR*(1-SR)) pFN <- BR - pTP pFP <- SR - pTP pTN <- 1 - pTP - pFN - pFP sen <- pTP/(pTP+pFN) spe <- pTN/(pFP+pTN) cd <- (pTP+pTN)*100 suc <- pTP/(pTP+pFP) imp <- (suc - BR)*100 mat <- matrix(rbind(pTP,pFN,pFP,pTN,NA,sen,spe,cd,suc,imp)) colnames(mat) <- "Probabilities" rownames(mat) <- c("True Positives", "False Negatives", "False Positives", "True Negatives","--", "Sensitivity", "Specificity", "% of Decisions Correct", "Proportion Selected Succesful", "% Improvement over BR") return(mat) } psychometric/R/ICC1.lme.R0000644000176200001440000000054314243503036014544 0ustar liggesusers"ICC1.lme"
<- function (dv, grp, data) { dv <- data %>% dplyr::select({{dv}}) %>% purrr::reduce(c) grp <- data %>% dplyr::select({{grp}}) %>% purrr::reduce(c) mod <- lme(dv ~ 1, random=~1|grp, na.action=na.omit) t0 <- as.numeric(VarCorr(mod)[1,1]) sig2 <- as.numeric(VarCorr(mod)[2,1]) icc1 <- t0/(t0+sig2) return(icc1) } psychometric/R/MargUtil.R0000644000176200001440000000016311427121000015020 0ustar liggesusers"MargUtil" <- function(Rxy, Sy, MXg, COST, Nselected) { MU <- Rxy*Sy*MXg - COST/Nselected return(MU) } psychometric/R/aRxx.R0000644000176200001440000000030611427121000014216 0ustar liggesusers"aRxx" <- function(x) { Rxx <- x$Rxx n <- length (x$Rxx[!(is.na(x$Rxx))]) a <- mean(sqrt(Rxx),na.rm=TRUE) va <- var(sqrt(Rxx),na.rm=TRUE)*(n-1)/n out <- list(a,va) return(out) } psychometric/R/SE.Pred.R0000644000176200001440000000012011427121000014470 0ustar liggesusers"SE.Pred" <- function (sy, rxx) { sep <- sy*sqrt(1-rxx^2) return(sep) } psychometric/R/rdif.nul.R0000644000176200001440000000026511427121000015021 0ustar liggesusers"rdif.nul" <- function (r1, r2, n1, n2) { z1 <- r2z(r1) z2 <- r2z(r2) z <- (z1 - z2)/sqrt(1/(n1-3)+1/(n2-3)) p <- pnorm(z) return(data.frame(zDIF = z, p = 1-p)) } psychometric/R/cRR.R0000644000176200001440000000042711427121000013766 0ustar liggesusers"cRR" <- function (x) { rb = rbar(x) n <- length (x$u[!(is.na(x$u))]) u <- x$u if (n == 0) { c <- 1 vc <- 0} else { c <- sqrt((1-u^2)*rb^2+u^2) vc <- var(c, na.rm=TRUE)*(n-1)/n } mc <- mean(c, na.rm=TRUE) out <- list(mc, vc) return(out) } psychometric/R/CI.tscore.R0000644000176200001440000000044611427121000015072 0ustar liggesusers"CI.tscore" <- function(obs, mx, s, rxx, level=.95) { noma <- 1-level see <- SE.Est(s, rxx) zs <- - qnorm(noma/2) mez <- zs*see that <- Est.true(obs, mx, rxx) lcl <- that - mez ucl <- that + mez mat <- data.frame(SE.Est = see, LCL = lcl, T.Score = that, UCL = ucl) return(mat) } psychometric/R/pvaaa.R0000644000176200001440000000026011427121000014363 0ustar liggesusers"pvaaa" <- function(x, aprox=FALSE) { if (!aprox) {ve <- vare(x)} else {ve <- aprox.vare(x)} vr <- varr(x) vav <- varAV(x) pv <- (ve+vav)/vr*100 return(pv) } psychometric/R/ICC1.CI.R0000644000176200001440000000117114243502642014262 0ustar liggesusers"ICC1.CI" <- function (dv, iv, data, level=.95) { dv <- data %>% dplyr::select({{dv}}) %>% purrr::reduce(c) iv <- data %>% dplyr::select({{iv}}) %>% purrr::reduce(c) %>% factor() mod <- aov(dv ~ iv, na.action=na.omit) icc <- ICC1(mod) tmod <- summary(mod) df1 <- tmod[[1]][1,1] df2 <- tmod[[1]][2,1] Fobs <- tmod[[1]][1,4] n <- df2/(df1+1) # average group size - 1 noma <- 1- level Ftabl <- qf(noma/2, df1, df2, lower.tail=F) Ftabu <- qf(noma/2, df2, df1, lower.tail=F) Fl <- Fobs/Ftabl Fu <- Fobs*Ftabu lcl <- (Fl-1)/(Fl+n) ucl <- (Fu-1)/(Fu+n) mat <- data.frame(LCL=lcl, ICC1=icc, UCL=ucl) return(mat) } psychometric/R/varResT.R0000644000176200001440000000027011427121000014660 0ustar liggesusers"varResT" <- function(x, aprox=FALSE) { if (!aprox) {ve <- vare(x)} else {ve <- aprox.vare(x)} vr <- varr(x) vav <- varAV(x) vrest <- vr - ve - vav return(vrest) } psychometric/R/SE.Meas.R0000644000176200001440000000011511427121000014465 0ustar liggesusers"SE.Meas" <- function (s, rxx) { sem <- s*sqrt(1-rxx) return(sem) } psychometric/R/varr.R0000644000176200001440000000021211427121000014240 0ustar liggesusers"varr" <- function(x) { rxy <- x$Rxy n <- x$n rb <- rbar(x) vr <- sum(n*(rxy-rb)^2,na.rm=TRUE)/sum(n,na.rm=TRUE) return(vr) } psychometric/R/CIz.R0000644000176200001440000000032211427121000013757 0ustar liggesusers"CIz" <-
function (z, n, level=.95) { noma <- 1-level sez <- SEz(n) zs <- - qnorm(noma/2) mez <- zs*sez lcl <- z - mez ucl <- z + mez mat <- list(lcl, ucl) return(as.numeric(mat)) } psychometric/R/ICC2.CI.R0000644000176200001440000000121314243503214014256 0ustar liggesusers"ICC2.CI" <- function (dv, iv, data, level=.95) { dv <- data %>% dplyr::select({{dv}}) %>% purrr::reduce(c) iv <- data %>% dplyr::select({{iv}}) %>% purrr::reduce(c) %>% factor() mod <- aov(dv ~ iv, na.action=na.omit) icc <- ICC2(mod) tmod <- summary(mod) df1 <- tmod[[1]][1,1] df2 <- tmod[[1]][2,1] Fobs <- tmod[[1]][1,4] n <- df2/(df1+1) # average group size - 1 noma <- 1- level Ftabl <- qf(noma/2, df1, df2, lower.tail=F) Ftabu <- qf(noma/2, df2, df1, lower.tail=F) Fl <- Fobs/Ftabl Fu <- Fobs*Ftabu lcl <- 1-1/Fl ucl <- 1-1/Fu mat <- data.frame(LCL=lcl, ICC2=icc, UCL=ucl) return(mat) } psychometric/R/CI.Rsq.R0000644000176200001440000000044011427121000014330 0ustar liggesusers"CI.Rsq" <- function(rsq, n, k, level=.95) { noma <- 1-level sersq <- sqrt((4*rsq*(1-rsq)^2*(n-k-1)^2)/((n^2-1)*(n+3))) zs <- - qnorm(noma/2) mez <- zs*sersq lcl <- rsq - mez ucl <- rsq + mez mat <- data.frame(Rsq = rsq, SErsq = sersq, LCL = lcl, UCL = ucl) return(mat) } psychometric/R/EnterMeta.R0000644000176200001440000000056011427121000015160 0ustar liggesusers"EnterMeta" <- function () { d <- matrix(,ncol=7) d <- data.frame(d) names(d) <-c("study", "Rxy", "n", "Rxx", "Ryy", "u", "moderator") d$study <- as.factor(d$study) d$Rxy <- as.numeric(d$Rxy) d$n <- as.numeric(d$n) d$Rxx <- as.numeric(d$Rxx) d$Ryy <- as.numeric(d$Ryy) d$u <- as.numeric(d$u) d$moderator <- as.factor(d$moderator) meta <- edit(d) } psychometric/R/bRyy.R0000644000176200001440000000030611427121000014217 0ustar liggesusers"bRyy" <- function(x) { Ryy <- x$Ryy n <- length (x$Ryy[!(is.na(x$Ryy))]) b <- mean(sqrt(Ryy),na.rm=TRUE) vb <- var(sqrt(Ryy),na.rm=TRUE)*(n-1)/n out <- list(b,vb) return(out) } psychometric/R/CI.obs.R0000644000176200001440000000037311427121000014353 0ustar liggesusers"CI.obs" <- function (obs, s, rxx, level=.95) { noma <- 1-level sem <- SE.Meas(s, rxx) zs <- - qnorm(noma/2) mez <- zs*sem lcl <- obs - mez ucl <- obs + mez mat <- data.frame(SE.Meas = sem, LCL = lcl, OBS = obs, UCL = ucl) return(mat) } psychometric/R/alpha.R0000644000176200001440000000025711427121000014364 0ustar liggesusers"alpha" <- function(x) { x <- na.exclude(as.matrix(x)) Sx <- sum(var(x)) SumSxi <- sum(apply(x,2,var)) k <- ncol(x) alpha <- k/(k-1)*(1-SumSxi/Sx) return(alpha) } psychometric/R/aprox.Qrbar.R0000644000176200001440000000045011427121000015473 0ustar liggesusers"aprox.Qrbar" <- function(x) { vr <- varr(x) N <- sum(x$n,na.rm=TRUE) rb <- rbar(x) chi <- (N/(1-rb^2)^2)*vr k <- length (x$Rxy[!(is.na(x$Rxy))]) pval <- 1 - pchisq(chi, k-1) mat <- matrix(c(chi,k-1,pval),ncol=3) colnames(mat) <- c("CHISQ", "df", "p-val") return(mat) } psychometric/R/FileDrawer.R0000644000176200001440000000033511427121000015322 0ustar liggesusers"FileDrawer" <- function(x, rc=.1) { k <- length (x$Rxy[!(is.na(x$Rxy))]) rb <- rbar(x) rc <- rc n <- k * (rb/rc - 1) mat <- matrix(n) colnames(mat) <- c("# of 'lost' studies needed") return(mat) } psychometric/R/Qrho.R0000644000176200001440000000051011427121000014202 0ustar liggesusers"Qrho" <- function(x, aproxe=FALSE) { if(!aproxe) { ve <- vare(x)} else {ve <- aprox.vare(x)} k <- length (x$Rxy[!(is.na(x$Rxy))]) vr <- varr(x) vav <- varAV(x) q <- (k*vr)/(vav+ve) pval <- 1 - pchisq(q, k-1) mat <- matrix(c(q,k-1,pval),ncol=3) colnames(mat) <- c("CHISQ", "df", "p-val") return(mat) }
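A worked sketch of what Qrho() above computes (not part of the package source; assumes library(psychometric) and its example data are loaded):
# k <- length(HSJt35$Rxy[!is.na(HSJt35$Rxy)])               # number of studies
# q <- (k * varr(HSJt35)) / (varAV(HSJt35) + vare(HSJt35))  # CHISQ from Qrho(HSJt35)
# 1 - pchisq(q, k - 1)                                      # p-val from Qrho(HSJt35)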
psychometric/R/CI.Rsqlm.R0000644000176200001440000000031111427121000014656 0ustar liggesusers"CI.Rsqlm" <- function (obj, level=.95) { l <- level rsq <- summary(obj)$r.squared k <- summary(obj)$df[1] - 1 n <- obj$df + k + 1 mat <- CI.Rsq (rsq, n, k, level=l) return(mat) } psychometric/R/r.nil.R0000644000176200001440000000024311427121000014314 0ustar liggesusers"r.nil" <- function (r, n) { t <- (r*sqrt(n-2))/sqrt(1-r^2) df <- n-2 p <- pt(t, df) d <- data.frame("H0:rNot0" = r, t = t, df=df, p=1-p) return(d) } psychometric/R/cRRr.R0000644000176200001440000000022111427121000014136 0ustar liggesusers"cRRr" <- function (rr, sdy, sdyu) { rxy <- (rr*(sdyu/sdy))/sqrt(1+rr^2*((sdyu^2/sdy^2)-1)) return(data.frame(unrestricted = rxy)) } psychometric/R/TotUtil.R0000644000176200001440000000016211427121000014676 0ustar liggesusers"TotUtil" <- function(Rxy, Sy, MXg, COST, Nselected) { TU <- Nselected*Rxy*Sy*MXg - COST return(TU) } psychometric/R/MetaTable.R0000644000176200001440000000121511427121000015130 0ustar liggesusers"MetaTable" <- function (x) { rb <- rbar (x) vr <- varr (x) ve <- vare (x) pv <- pvse (x)[1] lclhet <- CIrb(x,,F)[1] uclhet <- CIrb(x,,F)[2] lclhom <- CIrb(x)[1] uclhom <- CIrb(x)[2] rho <- rhoCA(x) vrho <- varRCA(x) pva <- pvaaa(x) clcl <- CredIntRho(x, level=.8)[[1]] cucl <- CredIntRho(x, level=.8)[[2]] mat <- data.frame(rbar = rb, Variance.rbar = vr, VarianceSamplingError = ve, PercentDueError = pv, HET95LCL = lclhet, HET95UCL = uclhet, HOM95LCL = lclhom, HOM95UCL = uclhom, RHO = rho, VarianceRho = vrho, PercentDueErrorCorrect = pva, CredInt80LCL = clcl, CredInt80UCL = cucl) return(mat) } psychometric/R/vare.R0000644000176200001440000000020211427121000014222 0ustar liggesusers"vare" <- function(x) { n <- x$n rb <- rbar(x) ve <- sum(n*(1-rb^2)^2/(n-1),na.rm=TRUE)/sum(n,na.rm=TRUE) return(ve) } psychometric/R/SEz.R0000644000176200001440000000005111427121000013770 0ustar liggesusers"SEz" <- function(n) { 1/sqrt(n-3) } psychometric/R/item.exam.R0000644000176200001440000000173711427121000015172 0ustar liggesusers"item.exam" <- function (x, y = NULL, discrim = FALSE) { x <- na.exclude(as.matrix(x)) if (!discrim) { discrim <- NA } else { discrim <- discrim(x) } k <- ncol(x) n <- nrow(x) TOT <- apply(x, 1, sum) TOT.woi <- TOT - (x) diff <- apply(x, 2, mean) rix <- cor(x, TOT, use = "complete") rix.woi <- diag(cor(x, TOT.woi, use = "complete")) sx <- apply(x, 2, sd) vx <- ((n - 1)/n) * sx^2 if (is.null(y)) { riy <- NA } else { y <- y riy <- cor(x, y, use = "complete") } i.val <- riy * sqrt(vx) i.rel <- rix * sqrt(vx) i.rel.woi <- rix.woi * sqrt(vx) mat <- data.frame(Sample.SD = sx, Item.total = rix, Item.Tot.woi = rix.woi, Difficulty = diff, Discrimination = discrim, Item.Criterion = riy, Item.Reliab = i.rel, Item.Rel.woi = i.rel.woi, Item.Validity = i.val) return(mat) } psychometric/R/Est.true.R0000644000176200001440000000013211427121000015000 0ustar liggesusers"Est.true" <- function (obs, mx, rxx) { that <- mx*(1-rxx)+rxx*obs return(that) } psychometric/R/aprox.vare.R0000644000176200001440000000016611427121000015363 0ustar liggesusers"aprox.vare" <- function(x) { n <- x$n rb <- rbar(x) ve <- (1-rb^2)^2/(mean(n, na.rm=TRUE)-1) return(ve) } psychometric/R/z2r.R0000644000176200001440000000007011427121000014005 0ustar liggesusers"z2r" <- function (x) { (exp(2*x)-1)/(exp(2*x)+1) } psychometric/R/CIr.R0000644000176200001440000000032011427121000013743 0ustar liggesusers"CIr" <- function (r, n, level=.95) { z <- r2z(r) uciz <- CIz(z, n, level)[2] lciz <- CIz(z, n, level)[1] ur <- z2r(uciz) lr <- 
z2r(lciz) mat <- list(lr,ur) return(as.numeric(mat)) } psychometric/R/CVratio.R0000644000176200001440000000017711427121000014647 0ustar liggesusers"CVratio" <- function(NTOTAL, NESSENTIAL) { n <- NTOTAL ne <- NESSENTIAL cvr <- (ne - n/2)/(n/2) return(cvr) } psychometric/R/Qrbar.R0000644000176200001440000000045211427121000014343 0ustar liggesusers"Qrbar" <- function(x) { r <- x$Rxy n <- x$n rb <- rbar(x) chi <- sum((((n-1)*(r-rb)^2)/(1-rb^2)^2),na.rm=TRUE) k <- length (x$Rxy[!(is.na(x$Rxy))]) pval <- 1 - pchisq(chi, k-1) mat <- matrix(c(chi,k-1,pval),ncol=3) colnames(mat) <- c("CHISQ", "df", "p-val") return(mat) } psychometric/R/SE.Est.R0000644000176200001440000000012111427121000014326 0ustar liggesusers"SE.Est" <- function (s, rxx) { see <- s*sqrt(rxx*(1-rxx)) return(see) } psychometric/R/CVF.R0000644000176200001440000000031611427121000013711 0ustar liggesusers"CVF" <- function(x) { ma <- aRxx(x)[[1]] va <- aRxx(x)[[2]] mb <- bRyy(x)[[1]] vb <- bRyy(x)[[2]] mc <- cRR(x)[[1]] vc <- cRR(x)[[2]] cv <- va/ma^2 + vb/mb^2 + vc/mc^2 return(cv) } psychometric/R/rbar.R0000644000176200001440000000017011427121000014217 0ustar liggesusers"rbar" <- function(x) { rxy <- x$Rxy n <- x$n rbar <- sum(n*rxy, na.rm=TRUE)/sum(n,na.rm=TRUE) return(rbar) } psychometric/R/SBrel.R0000644000176200001440000000014311427121000014300 0ustar liggesusers"SBrel" <- function(Nlength, rxx) { rxxp <- Nlength*rxx/(1+(Nlength-1)*rxx) return(rxxp) } psychometric/R/SERHET.R0000644000176200001440000000025111427121000014263 0ustar liggesusers"SERHET" <- function (x) { N <- sum(x$n,na.rm=TRUE) rb <- rbar(x) k <- length (x$Rxy[!(is.na(x$Rxy))]) se <- sqrt((1-rb^2)^2/(N-k)+varRes(x)/k) return(se) } psychometric/R/alpha.CI.R0000644000176200001440000000064711427121000014661 0ustar liggesusers"alpha.CI" <- function (alpha, k, N, level=.90, onesided=FALSE) { if (!onesided) { nomau <- (1 - level)/2 nomal <- 1-nomau } else { nomau <- (1 - level) nomal <- (level) } df1 <- N-1 df2 <- (k-1)*(N-1) Fl <- qf(nomal, df1, df2) Fu <- qf(nomau, df1, df2) lcl <- 1 - (1 - alpha) * Fl ucl <- 1 - (1 - alpha) * Fu mat <- data.frame(LCL = lcl, ALPHA = alpha, UCL = ucl) return(mat) } psychometric/R/CIrdif.R0000644000176200001440000000042611427121000014435 0ustar liggesusers"CIrdif" <- function (r1, r2, n1, n2, level=.95) { rd = r1 - r2 noma <- 1-level sed <- sqrt((1-r1^2)/n1 + (1-r2^2)/n2) zs <- - qnorm(noma/2) mez <- zs*sed lcl <- rd - mez ucl <- rd + mez mat <- data.frame(DifR = rd, SED=sed, LCL = lcl, UCL = ucl) return(mat) } psychometric/R/varAV.R0000644000176200001440000000017411427121000014314 0ustar liggesusers"varAV" <- function(x) { rho <- rhoCA(x) AA <- CAFAA(x) cvf <- CVF(x) vav <- rho^2*AA^2*cvf return(vav) } psychometric/R/rhoCA.R0000644000176200001440000000013511427121000014266 0ustar liggesusers"rhoCA" <- function(x) { rb <- rbar(x) AA <- CAFAA(x) rho <- rb/AA return(rho) } psychometric/R/discrim.R0000644000176200001440000000051011427121000014721 0ustar liggesusers"discrim" <- function(x) { x <- na.exclude(as.matrix(x)) k <- ncol(x) N <- nrow(x) ni <- as.integer(N/3) TOT <- apply(x, 1, mean) tmpx <- cbind(x,TOT)[order(TOT),] tmpxU <- tmpx[(N+1-ni):N,] tmpxL <- tmpx[1:ni,] Ui <- apply(tmpxU,2,sum) Li <- apply(tmpxL,2,sum) discrim <- (Ui - Li)/ni return (discrim[1:k]) } psychometric/R/ICC2.lme.R0000644000176200001440000000064714243504076014557 0ustar liggesusers"ICC2.lme" <- function (dv, grp, data, weighted=FALSE) { dv <- data %>% dplyr::select({{dv}}) %>% purrr::reduce(c) grp <- data %>% dplyr::select({{grp}}) %>% purrr::reduce(c) %>% 
factor() mod <- lme(dv ~ 1, random=~1|grp, na.action=na.omit) if (!weighted) {icc2 <- mean(gmeanrel(mod)$MeanRel) } else { icc2 <- weighted.mean(gmeanrel(mod)$MeanRel, gmeanrel(mod)$GrpSize) } return(icc2) } psychometric/MD50000644000176200001440000001245214522003402013267 0ustar liggesusers20e47a7f7858708c04ac256f79da526e *DESCRIPTION f54bae9e938d78fb1b7eee93a1671d36 *INDEX b25dc69308e25dc5039c1515f37eb5c3 *NAMESPACE da550fb0b4f513a21128bae6b4072b24 *R/CAFAA.R 5d59f4dec2c4513aaf0af74d54017289 *R/CI.Rsq.R 9fe4ea229b7034d2d3e92d22141da4db *R/CI.Rsqlm.R 8742ad770758160cb162d8f80bcabbfd *R/CI.obs.R 58d489045756abb5ef3b2c4de167fb89 *R/CI.tscore.R 4f005b4a535e6bc0c31d275ea5af0016 *R/CIr.R 604d1e2d724f56290c5ba33fffa55604 *R/CIrb.R 748a028c227e10a26244c93f0065245e *R/CIrdif.R 3b267bd48ba5d5f82f67bbc68cb805eb *R/CIz.R 773a03ee13d799ead7c885cce99d698a *R/CVF.R 00a23358adcdde608cc3c3295633501c *R/CVratio.R 7de30e0ef1581feac89a4ab5427847ce *R/ClassUtil.R 641b5c2987e421dff4196b5d5562a358 *R/CredIntRho.R db3023ad421f495dbc4de0b5833880aa *R/EnterMeta.R 88cecee6522a59cddc0e37ce7e603b5c *R/Est.true.R 167c373efcb2f5e9a73c0ed942eadfde *R/FileDrawer.R 5612a85e3a5bc60a88804226bb1301eb *R/FunnelPlot.R e97534980779cb83b42f5250535f8341 *R/ICC1.CI.R 4c5fbbaa79ebc01b204c1b413c27b699 *R/ICC1.lme.R 92469c571a71c655ff446147112a0011 *R/ICC2.CI.R 5391c74047426aef38929a1b1a48d58e *R/ICC2.lme.R ff436f2d3178070e115f5a71a077047b *R/MargUtil.R 8b70dc3064285fde86a1473e05ae1228 *R/MetaTable.R 2acebcd3ab35f9fdd9f1b763481f602e *R/Qrbar.R 3642c976ad3cf8f8c475e1edec73c918 *R/Qrho.R bdda6cdd814dcac25e606d7fd8684912 *R/SBlength.R 29216787bb53439d17d31ee4b646f67d *R/SBrel.R 1179587055c03addd70dc84670c731cf *R/SE.Est.R 9c5d6d98707b2828deeda0aa9b428f92 *R/SE.Meas.R de1b6019071f7bd333c99fc4fd23abde *R/SE.Pred.R 7fec014cae435307de5e79e2cbb31c90 *R/SERHET.R b26c667661cae1718f639870a3a526ed *R/SERHOM.R a2022c83b4236ed0771b84a45d866ba3 *R/SEz.R 21bdfcae9b6e9ded61d9c51c7c4ef95d *R/TotUtil.R 8e623873e6067b7efdb94fc456b37586 *R/aRxx.R ff6381d14160fb16faed2ab1bfa45012 *R/alpha.CI.R 6d2e89176f35f472d212ef54c6e45f33 *R/alpha.R b60d55038db646e998c3588ccbbf05a3 *R/aprox.Qrbar.R e4d145853491201dfb8e934031c8e5ba *R/aprox.vare.R 8389c64b7dc8ee1c05c727ae15ba0958 *R/bRyy.R 4894a9f3774d8008bf1a7a5ec90d8c62 *R/cRR.R 2ace1153884418fca288fc5f2a2d5409 *R/cRRr.R 9fe8a76860e8857529058f2ff1ca74c2 *R/discrim.R e3a6b6aedc1b5092bc9986618d70b517 *R/item.exam.R 3f86c11439202f762341b756e862de21 *R/pvaaa.R 1efed82af4c0c2f3dca004315cba1bb3 *R/pvse.R 99ff248bf1ab6ae9b77db140d3c46fbb *R/r.nil.R b1dba1efbcfe944d2db53568579cdd74 *R/r2z.R f2514ab4a3f25b6001136bbcf52611f8 *R/rbar.R 2241c319afe7f9ffef0adf5934a311a9 *R/rdif.nul.R c7dffe47c4193c861b85e335c436ab5f *R/rhoCA.R 617f7cf77b9f558481197284a441db0b *R/varAV.R ed2461b0b5ae086c4d409ad8d2fc4904 *R/varRCA.R 1d910cddbf094ed5637959508d9052b0 *R/varRes.R 151bf4e4a28f3000d048b585773fdee2 *R/varResT.R 2312e0b425359d0c75b7bb2f4cfde962 *R/vare.R 6d47ac38a5c2742022f4e8f21565e595 *R/vare36.R 4971912b8a809b2ee3dcb92e45e5af65 *R/varr.R edea39b9c5d8bc34887fa2e821a3b685 *R/z2r.R 72327a8ba5944810b095a9760d35556e *README.md 96dfa4c270641ae98501598b6ce96277 *data/ABHt32.rda db082706f7fe9abd98d8cd3bda7548b3 *data/HSJt35.rda 5140bcbf9b78307d3b933903e62645dc *data/TestScores.rda 09a621cd5bed5fe937febda35d38eba6 *man/ABHt32.Rd 78274794fccb3c026546269bf83cc67d *man/CAFAA.Rd a58d38995c6bb6e6fccfccbb6ba581e4 *man/CI.Rsq.Rd 2d9331b5a3164a8df912c0f8fee5e9b5 *man/CI.Rsqlm.Rd 6a7ef70240a988bf29b28e546d56e2d0 *man/CI.tscore.Rd 
c188fef3b5fc42db0524fe8db065ba24 *man/CIr.Rd 332bc9682d48c7921370a97ca6e65fb0 *man/CIrb.Rd 2c091373f37ae5393d422a960e5fe841 *man/CIrdif.Rd ce0dc01e64072b409803ad54d9275974 *man/CIz.Rd 0595d4ff7327ac08d9a383997602e5a6 *man/CVF.Rd 16552bb2a471a39e7146e4a5e5835470 *man/CVratio.Rd ed97678cf7477db768028aad6494c70b *man/ClassUtil.Rd 2b33e6561b0c8e155734d4ec2f5d6f2e *man/CredIntRho.Rd ceaf51861182482f381ecf91f761d2a8 *man/EnterMeta.Rd d60aa174831ffbc841bc0a6e13e773be *man/Est.true.Rd a758f9e5ab60934cb42f8926d95fd439 *man/FileDrawer.Rd ec58bffeefb60070d2b9a8ce347ed363 *man/FunnelPlot.Rd d38109d29c364ac013a913210588f228 *man/HSJt35.Rd eaa938a9b0bfd0772bebb77c22593655 *man/ICC.CI.Rd 093f9ec709b643c171e8731c5b3797fb *man/ICC.lme.Rd e6fc8bc9961407e26e7ae9e1f16d5b50 *man/MetaTable.Rd 23f5917e44e7bb5db23dc531717c5e02 *man/Qrbar.Rd d2db73bc4746a91239d20b5e36b3f83f *man/Qrho.Rd 9fa709360954846c80e679d741c6fc74 *man/SBrel.Rd e8908f797d8525c6af8ee009058c5630 *man/SE.Meas.Rd 43911601c2b66e066a46948945316d50 *man/SErbar.Rd f2e7ad03046d85555c12eed4086755f0 *man/SEz.Rd 7a610820562fe7e678f241d6918e3a04 *man/TestScores.Rd d5c40545628d807083c56feba3901869 *man/Utility.Rd 52b9873e4de2b0be5a888e1221a062df *man/alpha.CI.Rd 8de3d78662f141a03a1563f828df227b *man/alpha.Rd 23ad41ae5851491a013e358310552ea0 *man/artifacts.Rd 4e5705e30f1d600a59d6c434f5f5adb2 *man/cRRr.Rd 528ebf80480c52ed2f0506d6f1aa2cb2 *man/discrim.Rd 9551b71eaecc826fcbea3c12a2562f47 *man/item.exam.Rd bed1417deea2ae3d89aac0708b1cfff0 *man/psychometric-package.Rd 6a54b21f44e665f5c6ce3a4a11f890f1 *man/pvaaa.Rd 128a64c2fe36540dd1c2cccb6f4891c6 *man/pvse.Rd 211ab6e3da591eb6a9153047e98cef1d *man/r.nil.Rd 9d4d4c6238b139ca7e0c8562081a6792 *man/r2z.Rd 2860041e5c3d3c66c8c09079528ced7f *man/rbar.Rd ffdce51029f33497223cc18da90deb92 *man/rdif.nul.Rd d676e5b239e87bdb807ca61353d401b2 *man/rhoCA.Rd bf1400089f6dc439e2ff26e4fc6aba33 *man/varAV.Rd 62feac3d26eda6b26133f695dadb883f *man/varRCA.Rd 27b4f92d7b20ee4aaeeca082343feb92 *man/varRes.Rd db2f747495f8e14a86858d8935cbf34e *man/varResT.Rd 64373fe2a4292694ae97001650dbc666 *man/vare.Rd 6ecec30501eebd3ef2b7a26f891987e0 *man/varr.Rd bd583e07aa513880e02097410891dae9 *man/z2r.Rd psychometric/INDEX0000644000176200001440000000661713243467746013607 0ustar liggesusersABHt32 Table 3.2 from Arthur et al CAFAA Compound Attenuation Factor for Meta-Analytic Artifact Corrections CI.Rsq Confidence Interval for R-squared CI.Rsqlm Confidence Interval for Rsq - from lm() CI.tscore Confidence Intervals for Test Scores CIr Confidence Interval for a Correlation Coefficient CIrb Confidence Interval about Sample Weighted Mean Correlation CIrdif Confidence Interval for the difference in Correlation Coefficients CIz Confidence Interval for Fisher z' CVF Compound Variance Factor for Meta-Analytic Artifact Corrections CVratio Content Validity Ratio ClassUtil Classical Utility of a Test CredIntRho Credibility Interval for Meta-Analytic Rho EnterMeta Enter Meta-Analysis Data Est.true Estimation of a True Score FileDrawer File Drawer N FunnelPlot Funnel Plot for Meta-Analysis HSJt35 Table 3.5 Hunter et al. 
ICC.CI Confidence interval for the Intra-class Correlation ICC.lme Intraclass Correlation Coefficient from a Mixed-Effects Model MetaTable Summary function for 'Complete' Meta-Analysis Qrbar Meta-Analytic Q statistic for r-bar Qrho Meta-Analytic Q statistic for rho SE.Meas Standard Errors of Measurement (test scores) SErbar Standard Error for Sample Size Weighted Mean Correlation SEz Standard Error of Fishers z prime SpearmanBrown Spearman-Brown Prophecy Formulae TestScores Fictitious Test Scores for Illustrative Purposes Utility Marginal and Total Utility of a Test aRxx Artifact Distribtutions Used in Meta-Analysis alpha Cronbach's Coefficient Alpha alpha.CI Confidence Interval for Coefficient Alpha cRRr Correction for Range Restriction discrim Item Discrimination item.exam Item Analysis psychometric-package Applied Psychometric Theory pvaaa Percent of Variance Accounted for by Artifacts in Rho pvse Percent of variance due to sampling error r.nil Nil hypothesis for a correlation r2z Fisher r to z' rbar Sample size weighted mean correlation rdif.nul Null hypothesis for difference in two correlations rhoCA Meta-Analytically Derived Correlation Coefficient Corrected for Artifacts varAV Variance Due to Attenuating Artifacts varRCA Variance in Meta-Analytic Rho varRes Residual Variance in Meta-Analytic Correlation varResT True residual variance in correlations vare Sampling Error Variance varr Sample Size weighted variance z2r Fisher z' to r