psychometric/data/ABHt32.rda
psychometric/data/HSJt35.rda

psychometric/DESCRIPTION

Package: psychometric
Type: Package
Title: Applied Psychometric Theory
Version: 2.2
Date: 2010-08-07
Author: Thomas D. Fletcher
Maintainer: Thomas D. Fletcher <tom.fletcher.mp7e@statefarm.com>
Description: Contains functions useful for correlation theory,
        meta-analysis (validity-generalization), reliability, item
        analysis, inter-rater reliability, and classical utility
Depends: multilevel, nlme, R(>= 2.11.0)
Imports: multilevel, nlme
License: GPL (>= 2)
Packaged: 2010-08-07 17:39:10 UTC; Tom Fletcher
Repository: CRAN
Date/Publication: 2010-08-08 12:41:38

psychometric/man/ABHt32.Rd

\name{ABHt32}
\alias{ABHt32}
\docType{data}
\title{Table 3.2 from Arthur et al}
\description{
These data are used as an example in ch. 3 of Conducting Meta-Analysis using SAS.
The data appear in tables 3.1 and 3.2 on pages 66 and 68. The example data are
useful in illustrating simple meta-analysis concepts.
}
\usage{data(ABHt32)}
\format{
  A data frame with 10 observations on the following 7 variables.
  \itemize{
  \item \emph{study} Study code
  \item \emph{Rxy} Published Correlation
  \item \emph{n} Sample Size
  \item \emph{Rxx} Reliability of Predictor
  \item \emph{Ryy} Reliability of Criterion
  \item \emph{u} Range Restriction Ratio
  \item \emph{moderator} Gender
}}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001)
\emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.
}
\examples{
data(ABHt32)
str(ABHt32)
rbar(ABHt32)
FunnelPlot(ABHt32)
}
\keyword{datasets}

psychometric/man/alpha.CI.Rd

\name{alpha.CI}
\alias{alpha.CI}
\alias{CI.alpha}
\title{ Confidence Interval for Coefficient Alpha}
\description{
Computes a two-tailed (or, optionally, one-tailed) CI at the desired level for coefficient alpha }
\usage{
alpha.CI(alpha, k, N, level = 0.90, onesided = FALSE)
}
\arguments{
  \item{alpha}{ coefficient alpha to use for CI construction }
  \item{k}{ number of items }
  \item{N}{ sample size }
  \item{level}{ Significance Level for constructing the CI, default is .90 }
  \item{onesided}{ return a one-sided (one-tailed) test, default is FALSE }
}
\details{
By inputting alpha, the number of items, and the sample size, one can make inferences
via a confidence interval. This can be used to compare two alpha coefficients (e.g.,
from two groups), or to compare alpha to some specified value (e.g., >= .7).
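For instance, a one-sided lower limit that exceeds .70 supports the claim that the
population alpha is at least .70. A minimal sketch (the alpha, k, and N values here
are hypothetical):
\preformatted{
# alpha = .85 on a 10-item scale, N = 120 examinees
alpha.CI(.85, k = 10, N = 120, level = .90, onesided = TRUE)
}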
onesided = FALSE renders a two-sided test (i.e., this is the difference between
tails of .025/.975 and .05/.95)
}
\value{
Returns a table with 3 elements
  \item{LCL }{lower confidence limit of CI}
  \item{ALPHA }{coefficient alpha}
  \item{UCL }{upper confidence limit of CI}
}
\references{ Feldt, L. S., Woodruff, D. J., & Salih, F. A. (1987). Statistical
inferences for coefficient alpha. \emph{Applied Psychological Measurement, 11,} 93-103. }
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com}}
\note{ Feldt et al. provide a number of procedures for making inferences about alpha
(e.g., an F test of the null hypothesis). Since the CI is the most versatile, it is
the only such procedure implemented in this package. }
\section{ Warning }{You must first compute alpha and then enter it into the function.
\code{alpha.CI} will not evaluate a data.frame or matrix object. }
\seealso{ \code{\link{alpha}} }
\examples{
# From Feldt et al (1987)
# alpha = .79, #items = 26, #examinees = 41
# a two-tailed test 90\% level
alpha.CI(.79, 26, 41)
}
\keyword{ models }
\keyword{ univar }

psychometric/man/alpha.Rd

\name{alpha}
\alias{alpha}
\title{ Cronbach's Coefficient Alpha}
\description{
Coefficient alpha is a measure of internal consistency. It is a standard measure
of reliability for tests.
}
\usage{
alpha(x)
}
\arguments{
  \item{x}{ Data.frame or matrix object with rows corresponding to individuals and columns to items }
}
\details{
You can specify any portion of a matrix or data.frame. For instance, if using a
data.frame with numerous variables corresponding to items, one can specify subsets
of those items. See examples below. \cr
alpha <- \eqn{k/(k-1)*(1-SumSxi/Sx)} \cr
where k is the number of items, Sx is the variance of the total test, and SumSxi
is the sum of the variances for each item.
}
\value{ coefficient alpha}
\references{ Cronbach, L. J. (1951). Coefficient alpha and the internal structure
of tests. \emph{Psychometrika, 16,} 297-334. }
\author{Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\seealso{ \code{\link{alpha.CI}} }
\examples{
data(attitude)
alpha(attitude)
alpha(attitude[,1:5])
}
\keyword{ models }
\keyword{ univar }

psychometric/man/artifacts.Rd

\name{artifacts}
\alias{aRxx}
\alias{bRyy}
\alias{cRR}
\title{ Artifact Distributions Used in Meta-Analysis}
\description{
Each of these three functions computes one of the three artifact distributions
that are then used to correct the observed sample-weighted mean correlation for
attenuation. The artifacts are reliability in predictor, reliability in
criterion, and range-restriction.
}
\usage{
aRxx(x)
bRyy(x)
cRR(x)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxx, Ryy, and u: see \code{\link{EnterMeta}}}
}
\details{
\itemize{
\item \emph{aRxx } Distribution of measurement error in the predictor: a = sqrt(Rxx)
\item \emph{bRyy } Distribution of measurement error in the criterion: b = sqrt(Ryy)
\item \emph{cRR } Degree of range restriction indicated by ratio u \cr
(restricted SD/unrestricted SD): \eqn{c = sqrt((1-u^2)*rb^2+u^2) }.
}
These are used in the computation of the compound attenuation factor
\code{\link{CAFAA}} = mean(a)*mean(b)*mean(c).
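Each distribution is simple to form by hand; an illustrative sketch using the
\code{ABHt32} data from this package (treat this as a sketch of the idea only;
the package functions may differ in details such as the handling of missing
artifact values):
\preformatted{
data(ABHt32)
a <- sqrt(ABHt32$Rxx)    # one attenuation factor per study
mean(a, na.rm = TRUE)    # compare with aRxx(ABHt32)
var(a, na.rm = TRUE)
}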
}
\value{ A list containing:
  \item{ma }{ Mean of a (or b or c)}
  \item{va }{ Variance of a (or b or c)}
}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001)
\emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis:
Correcting error and bias in research findings (2nd ed.).} Thousand Oaks:
Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis:
Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\note{ One usually will not use these functions alone, but rather use functions
that make use of these correction factors. }
\seealso{ \code{\link{rhoCA}}, \code{\link{varAV}}, \code{\link{varResT}}, \code{\link{pvaaa}} }
\examples{
# From Arthur et al
data(ABHt32)
aRxx(ABHt32)
bRyy(ABHt32)
cRR(ABHt32)
rhoCA(ABHt32)

# From Hunter et al
data(HSJt35)
aRxx(HSJt35)
bRyy(HSJt35)
cRR(HSJt35)
rhoCA(HSJt35)
}
\keyword{ univar }
\keyword{ models }

psychometric/man/CAFAA.Rd

\name{CAFAA}
\alias{CAFAA}
\title{ Compound Attenuation Factor for Meta-Analytic Artifact Corrections }
\description{
The compound attenuation factor is computed as the product of the mean for each
artifact distribution (square root of artifact) when correcting for attenuation
in a correlation coefficient.
}
\usage{
CAFAA(x)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxx, Ryy, and u: see \code{\link{EnterMeta}}}
}
\details{
The compound attenuation factor is computed as the product of
mean(a)*mean(b)*mean(c) where \cr
a = sqrt(Rxx) and is computed with the function \code{\link{aRxx}} \cr
b = sqrt(Ryy) and is computed with the function \code{\link{bRyy}} \cr
c = \eqn{sqrt((1-u^2)*rbar^2+u^2)} and is computed with the function \code{\link{cRR}}
}
\value{ A numeric value representing the compound attenuation factor }
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001)
\emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis:
Correcting error and bias in research findings (2nd ed.).} Thousand Oaks:
Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis:
Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D.
Fletcher \email{tom.fletcher.mp7e@statefarm.com}}
\note{ This value is used in the correction for artifacts of a correlation coefficient }
\seealso{ \code{\link{rhoCA}}, \code{\link{aRxx}}, \code{\link{bRyy}}, \code{\link{cRR}} }
\examples{
# From Arthur et al
data(ABHt32)
CAFAA(ABHt32)
rhoCA(ABHt32)

# From Hunter et al
data(HSJt35)
CAFAA(HSJt35)
rhoCA(HSJt35)
}
\keyword{ univar }
\keyword{ models }

psychometric/man/CI.Rsq.Rd

\name{CI.Rsq}
\alias{CI.Rsq}
\title{ Confidence Interval for R-squared }
\description{ Computes the confidence interval at a desired level for the squared
multiple correlation}
\usage{
CI.Rsq(rsq, n, k, level = 0.95)
}
\arguments{
  \item{rsq}{ Squared Multiple Correlation }
  \item{n}{ Sample Size }
  \item{k}{ Number of Predictors in Model }
  \item{level}{ Significance Level for constructing the CI, default is .95 }
}
\details{ The CI is constructed based on the approximate SE of Rsq \cr
\eqn{sersq <- sqrt((4*rsq*(1-rsq)^2*(n-k-1)^2)/((n^2-1)*(n+3)))}
}
\value{ Returns a table with 4 elements
  \item{Rsq }{ Squared Multiple Correlation}
  \item{SErsq }{ Standard error of Rsq}
  \item{LCL }{ Lower Confidence Limit of the CI}
  \item{UCL }{ Upper Confidence Limit of the CI}}
\references{
Olkin, I. & Finn, J. D. (1995). Correlation Redux. \emph{Psychological Bulletin,
118}, 155-164.

Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple
regression/correlation analysis for the behavioral sciences (3rd ed.).}
Mahwah, NJ: Lawrence Erlbaum.
}
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\note{ This is an adequate approximation for n > 60 }
\seealso{ \code{\link{CI.Rsqlm}} }
\examples{
# see section 3.6.2 Cohen et al (2003)
# 95 percent CI
CI.Rsq(.5032, 62, 4, level = .95)
# 80 percent CI
CI.Rsq(.5032, 62, 4, level = .80)
}
\keyword{ htest }
\keyword{ models }

psychometric/man/CI.Rsqlm.Rd

\name{CI.Rsqlm}
\alias{CI.Rsqlm}
\title{ Confidence Interval for Rsq - from lm() }
\description{ Computes the CI at a desired level based on an object of class lm() }
\usage{
CI.Rsqlm(obj, level = 0.95)
}
\arguments{
  \item{obj}{ object of a linear model }
  \item{level}{ Significance Level for constructing the CI, default is .95 }
}
\details{ Extracts the necessary information from the linear model object and uses \code{\link{CI.Rsq}}}
\value{ Returns a table with 4 elements
  \item{Rsq }{ Squared Multiple Correlation}
  \item{SErsq }{ Standard error of Rsq}
  \item{LCL }{ Lower Confidence Limit of the CI}
  \item{UCL }{ Upper Confidence Limit of the CI}
}
\references{
Olkin, I. & Finn, J. D. (1995). Correlation Redux. \emph{Psychological Bulletin,
118}, 155-164.

Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple
regression/correlation analysis for the behavioral sciences (3rd ed.).}
Mahwah, NJ: Lawrence Erlbaum.
}
\author{ Thomas D.
Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\note{ This is an adequate approximation for n > 60 }
\seealso{ \code{\link{CI.Rsq}}}
\examples{
# Generate data
x <- rnorm(100)
z <- rnorm(100)
xz <- x*z
y <- .25*x - .25*z + .25*x*z + .25*rnorm(100)
# Create an lm() object
lm1 <- lm(y ~ x*z)
CI.Rsqlm(lm1)
}
\keyword{ htest }
\keyword{ models }

psychometric/man/CI.tscore.Rd

\name{CI.tscore}
\alias{CI.tscore}
\alias{CI.obs}
\title{ Confidence Intervals for Test Scores }
\description{ Computes the CI at a desired level for observed scores and estimated
true scores}
\usage{
CI.tscore(obs, mx, s, rxx, level = 0.95)
CI.obs(obs, s, rxx, level = 0.95)
}
\arguments{
  \item{obs}{ Observed test score on test x}
  \item{mx}{ mean of test x }
  \item{s}{ standard deviation of test x }
  \item{rxx}{ reliability of test x}
  \item{level}{ Significance Level for constructing the CI, default is .95}
}
\details{
\code{CI.tscore} makes use of \code{\link{Est.true}} to correct the observed score
for regression to the mean and \code{\link{SE.Est}} for the correct standard
error. \code{CI.tscore} also requires entry of the mean of the test scores for
correcting for regression to the mean. \cr
\code{CI.obs} is much simpler in construction as it only makes use of the observed
score without any corrections. \code{CI.obs} uses \code{\link{SE.Meas}}, the SEM
that appears in most test manuals and text books.
}
\value{ Both functions return a table with 4 elements
  \item{SE. }{ Standard Error of the Estimate or SE of Measurement}
  \item{LCL }{ lower confidence limit of the CI}
  \item{T.Score }{ (or OBS) Estimated true score or observed score}
  \item{UCL }{ upper confidence limit of the CI}
}
\references{ Dudek, F. J. (1979). The continuing misinterpretation of the standard
error of measurement. \emph{Psychological Bulletin, 86}, 335-337. }
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\note{
It is not in error to report any one of these. The misinterpretation is in taking
the observed score and making inferences about the true score without (1) using
the correct standard error and (2) correcting for regression toward the mean of
the observed scores.
}
\section{Warning }{ Be cautious in construction and interpretation of CIs \cr
To obtain the percent for 1 SEM \cr
1-((1-pnorm(1))*2) \cr
To obtain the percent for 2 SEM \cr
1-((1-pnorm(2))*2) \cr
A 95 percent CI corresponds to 1.96 * SE \cr
1 * SE corresponds to .6827 (two-sided) \cr
2 * SE corresponds to 0.9772499 (one-sided), \cr
so, for two-sided, 2 * SE corresponds to 0.9544997 \cr
}
\seealso{ \code{\link{SE.Meas}} }
\examples{
# Examples from Dudek (1979)
# Suppose a test has mean = 500, SD = 100 rxx = .9
# If an individual scores 700 on the test
CI.tscore (700, 500, 100, .9, level=.68)
CI.obs(700, 100,.9, level=.68)
}
\keyword{ models }
\keyword{ htest }

psychometric/man/CIr.Rd

\name{CIr}
\alias{CIr}
\title{ Confidence Interval for a Correlation Coefficient }
\description{ Will construct the CI at a desired level given a correlation and
sample size }
\usage{
CIr(r, n, level = 0.95)
}
\arguments{
  \item{r}{ Correlation Coefficient}
  \item{n}{ Sample Size }
  \item{level}{ Significance Level for constructing the CI, default is .95}
}
\value{
  \item{LCL }{ Lower Confidence Limit of the CI}
  \item{UCL }{ Upper Confidence Limit of the CI}
}
\references{
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003).
\emph{Applied multiple regression/correlation analysis for the behavioral
sciences (3rd ed.).} Mahwah, NJ: Lawrence Erlbaum.
}
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\note{ Does not compute r; you must enter it into the function}
\seealso{ \code{\link{r2z}}, \code{\link{CIz}}, \code{\link{SEz}}, \code{\link{z2r}} }
\examples{
# From ch. 2 in Cohen et al (2003)
CIr (.657, 15)
}
\keyword{ htest }
\keyword{ models }

psychometric/man/CIrb.Rd

\name{CIrb}
\alias{CIrb}
\alias{CIrbar}
\title{ Confidence Interval about Sample Weighted Mean Correlation}
\description{
Produces a CI at the desired level for the sample weighted mean correlation using
the appropriate standard error.
}
\usage{
CIrb(x, LEVEL = 0.95, homogenous = TRUE)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}}
  \item{LEVEL}{ Significance Level for constructing the CI, default is .95}
  \item{homogenous}{ Whether or not to use the homogeneous or heterogeneous SE }
}
\details{
The CI is constructed based on the uncorrected mean correlation. It is corrected
for sampling error only. To get the CI for the mean correlation corrected for
artifacts, use \code{\link{CredIntRho}}, but this is a credibility interval rather
than a confidence interval. See Hunter & Schmidt (2004) for more details on the
interpretation of the differences.

If the CI is computed about a heterogeneous mean correlation, one is implying
that moderators are present, but that one can't determine what those moderators
might be. Otherwise, strive to parse the studies into homogeneous subsets and
create CIs about those means within the subsets.
}
\value{ A list containing:
  \item{LCL }{ Lower Confidence Limit of the CI}
  \item{UCL }{ Upper Confidence Limit of the CI}
}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001)
\emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis:
Correcting error and bias in research findings (2nd ed.).} Thousand Oaks:
Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis:
Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\seealso{ \code{\link{SErbar}}, \code{\link{rbar}} }
\examples{
# From Arthur et al
data(ABHt32)
rbar(ABHt32)
CIrb(ABHt32)

# From Hunter et al
data(HSJt35)
rbar(HSJt35)
CIrb(HSJt35)
}
\keyword{ univar }
\keyword{ models }
\keyword{ htest }

psychometric/man/CIrdif.Rd

\name{CIrdif}
\alias{CIrdif}
\title{ Confidence Interval for the difference in Correlation Coefficients }
\description{ Will construct the CI for a difference in two correlations at a
desired level}
\usage{
CIrdif(r1, r2, n1, n2, level = 0.95)
}
\arguments{
  \item{r1}{ Correlation 1 }
  \item{r2}{ Correlation 2 }
  \item{n1}{ Sample size for \code{r1} }
  \item{n2}{ Sample size for \code{r2} }
  \item{level}{ Significance Level for constructing the CI, default is .95}
}
\details{ Constructs a confidence interval based on the standard error of the
difference of two correlations \eqn{(r1 - r2)}:
\eqn{sed <- sqrt((1-r1^2)/n1 + (1-r2^2)/n2) }}
\value{ Returns a table with 4 elements
  \item{DifR }{ Observed Difference in correlations}
  \item{SED }{ Standard error of the difference}
  \item{LCL }{ Lower Confidence Limit of the CI}
  \item{UCL }{ Upper Confidence Limit of the CI}
}
\references{
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple
regression/correlation analysis for the behavioral sciences (3rd ed.).}
Mahwah, NJ: Lawrence Erlbaum.
}
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\seealso{ \code{\link{rdif.nul}} }
\examples{
# From ch. 2 in Cohen et al (2003)
CIrdif(.657, .430, 62, 143)
}
\keyword{ htest }
\keyword{ models }

psychometric/man/CIz.Rd

\name{CIz}
\alias{CIz}
\title{ Confidence Interval for Fisher z' }
\description{ Constructs a CI at a specified level about z'. This is useful for
constructing a CI for a correlation}
\usage{
CIz(z, n, level = 0.95)
}
\arguments{
  \item{z}{ Fishers z'}
  \item{n}{ Sample Size }
  \item{level}{ Significance Level for constructing the CI, default is .95}
}
\value{
  \item{LCL }{ Lower Confidence Limit of the CI}
  \item{UCL }{ Upper Confidence Limit of the CI}
}
\references{
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple
regression/correlation analysis for the behavioral sciences (3rd ed.).}
Mahwah, NJ: Lawrence Erlbaum.
}
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\seealso{ \code{\link{r2z}}, \code{\link{CIr}}, \code{\link{SEz}}, \code{\link{z2r}} }
\examples{
# From ch. 2 in Cohen et al (2003)
zp <- r2z(.657)
CIz(zp, 15)
}
\keyword{ htest }
\keyword{ models }

psychometric/man/ClassUtil.Rd

\name{ClassUtil}
\alias{ClassUtil}
\title{ Classical Utility of a Test }
\description{ Calculates the classical utility of a test given a correlation,
base rate, and selection ratio.}
\usage{
ClassUtil(rxy = 0, BR = 0.5, SR = 0.5)
}
\arguments{
  \item{rxy}{ Correlation of Test X with Outcome Y }
  \item{BR}{ Base Rate or prevalence without use of a test}
  \item{SR}{ Selection Ratio: Number selected out of those tested }
}
\details{
The degree of utility of using a test as a selection instrument over randomly
selecting individuals can be reflected in the decision outcomes expected by using
the selection instrument. Suppose you have a predictor (selection instrument) and
a criterion (job performance). By regressing the criterion on the predictor, and
selecting individuals based on some cut-off value, we have 4 possible outcomes.
A = True Positives, B = True Negatives, C = False Negatives, and
D = False Positives. The classical utility of using the test over current
procedures (random selection) is:

[A / (A+D)] - [(A + C) / (A + B + C + D)]

Various manipulations of these relationships can be used to assist in decision making.
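As a hypothetical worked example, suppose the four outcome proportions are
A = .30, B = .40, C = .20, D = .10. Then:
\preformatted{
A <- .30; B <- .40; C <- .20; D <- .10   # hypothetical proportions
(A / (A + D)) - ((A + C) / (A + B + C + D))
# .75 - .50 = .25, a 25 percentage-point gain over random selection
}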
}
\value{ Returns a table with the following elements reflecting decision outcomes:
  \item{True Positives}{ Probability of correctly selecting a successful candidate }
  \item{False Negatives}{ Probability of incorrectly not selecting a successful candidate }
  \item{False Positives}{ Probability of incorrectly selecting an unsuccessful candidate }
  \item{True Negatives}{ Probability of correctly not selecting an unsuccessful candidate }
  \item{Sensitivity}{ True Positives / (True Positives + False Negatives)}
  \item{Specificity}{ True Negatives / (True Negatives + False Positives)}
  \item{\% of Decisions Correct}{ Percentage of correct decisions}
  \item{Proportion Selected Successful}{ Proportion of those selected expected to be successful}
  \item{\% Improvement over BR}{ Percentage of improvement using the test over random selection}
}
\references{ Murphy, K. R. & Davidshofer, C. O. (2005). \emph{Psychological
testing: Principles and applications (5th ed.).} Saddle River, NJ: Prentice Hall.
}
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\seealso{ \code{\link{Utility}} }
\examples{
# 50 percent of those randomly selected are expected to be successful
# A company need only select 1/10 applicants
# The correlation between test scores and performance is .35
ClassUtil(.35, .5, .1)
}
\keyword{ univar }

psychometric/man/CredIntRho.Rd

\name{CredIntRho}
\alias{CredIntRho}
\title{ Credibility Interval for Meta-Analytic Rho}
\description{ Computes the credibility interval about the population correlation
coefficient at the desired level.}
\usage{
CredIntRho(x, aprox = FALSE, level = 0.95)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy, n and artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}}
  \item{aprox}{ Logical test to determine if the approximate or exact var e is used}
  \item{level}{ Significance Level for constructing the CI, default is .95 }
}
\details{
The credibility interval is used for the detection of potential moderators.
Intervals that are large or include zero potentially reflect the presence of
moderators. Credibility intervals are constructed about rho, whereas confidence
intervals are generally constructed about rbar. See Hunter & Schmidt (2004)
for a description of the different uses.

The credibility interval is computed as: rho +/- z[crit] * SD(rho)
where rho is the corrected correlation, z[crit] is the critical z value
(1.96 for 95\%), and SD(rho) is the sqrt(variance in rho).
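The arithmetic is easy to verify by hand; with hypothetical values rho = .32 and
SD(rho) = .08, a 95 percent credibility interval is:
\preformatted{
rho <- .32; sdrho <- .08             # hypothetical values
rho + c(-1, 1) * qnorm(.975) * sdrho
# roughly .16 to .48
}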
}
\value{
  \item{LCL }{ Lower Confidence Limit of the CI}
  \item{UCL }{ Upper Confidence Limit of the CI}
}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001)
\emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis:
Correcting error and bias in research findings (2nd ed.).} Thousand Oaks:
Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis:
Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com}}
\seealso{ \code{\link{rbar}}, \code{\link{rhoCA}}, \code{\link{CIrb}}, \code{\link{varRes}} }
\examples{
# From Arthur et al
data(ABHt32)
CredIntRho(ABHt32, aprox=TRUE)

# From Hunter et al
data(HSJt35)
CredIntRho(HSJt35)
}
\keyword{ univar }
\keyword{ models }
\keyword{ htest }

psychometric/man/cRRr.Rd

\name{cRRr}
\alias{cRRr}
\title{ Correction for Range Restriction }
\description{ Corrects a correlation for range restriction given the restricted
and unrestricted standard deviations}
\usage{
cRRr(rr, sdy, sdyu)
}
\arguments{
  \item{rr}{ Observed or restricted correlation }
  \item{sdy}{ Standard deviation of a restricted sample }
  \item{sdyu}{ Standard deviation of an unrestricted sample }
}
\details{
When one of the variables used to compute a correlation has a restricted
variance, the correlation will be attenuated. This commonly occurs, for
instance, when using incumbents (those already selected by previous procedures)
to base decisions about the validity of new selection procedures. Given u (the
ratio of the unrestricted SD of one variable to the restricted SD of that
variable), the following formula is used to correct for attenuation in a
correlation coefficient: \cr
\eqn{rxy <- (rr*(sdyu/sdy))/sqrt(1+rr^2*((sdyu^2/sdy^2)-1))}}
\value{
  \item{unrestricted }{corrected correlation}
}
\references{
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple
regression/correlation analysis for the behavioral sciences (3rd ed.).}
Mahwah, NJ: Lawrence Erlbaum.
}
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\note{ Do not confuse this function with the meta-analysis function \code{cRR} in
this same package! }
\seealso{ \code{\link{cRR}} }
\examples{
# See section 2.10.3 of Cohen et al (2003)
cRRr(.25, 12, 5)

# Create two correlated variables
x <- rnorm(1000)
y <- 0.71*x + rnorm(1000)
cor(x,y)
# order and select a restricted subset (1/10 of the cases)
tmp <- cbind(x,y)[order(y,x),][1:100,]
rxyr <- cor(tmp[,"x"],tmp[,"y"])
# restricted rxy
rxyr
# correct for restriction of range
cRRr(rxyr, sd(tmp[,"y"]), sd(y))
}
\keyword{ htest }
\keyword{ models }

psychometric/man/CVF.Rd

\name{CVF}
\alias{CVF}
\title{ Compound Variance Factor for Meta-Analytic Artifact Corrections }
\description{ The compound variance factor is computed by summing the individual
squared coefficients of variation for each artifact when correcting for
attenuation in a correlation coefficient }
\usage{
CVF(x)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns representing artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}}
}
\details{
The CVF is equal to scv(a) + scv(b) + scv(c), where scv is the squared
coefficient of variation. The letters a, b, c represent the artifacts
reliability in predictor, reliability in criterion, and restriction of range
respectively. The scv is computed as the variance in the artifact divided by
the square of the average for the artifact.
}
\value{ a numeric value representing the compound variance factor }
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001)
\emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis:
Correcting error and bias in research findings (2nd ed.).} Thousand Oaks:
Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982).
\emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills:
Sage Publications.
}
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\seealso{ \code{\link{aRxx}}, \code{\link{bRyy}}, \code{\link{cRR}}, \code{\link{varAV}}, \code{\link{CAFAA}}}
\examples{
# From Arthur et al
data(ABHt32)
CVF(ABHt32)

# From Hunter et al
data(HSJt35)
CVF(HSJt35)
}
\keyword{ univar }
\keyword{ models }
\keyword{ htest }

psychometric/man/CVratio.Rd

\name{CVratio}
\alias{CVratio}
\title{ Content Validity Ratio }
\description{ Computes Lawshe's CVR for determining whether items are essential or not. }
\usage{
CVratio(NTOTAL, NESSENTIAL)
}
\arguments{
  \item{NTOTAL}{ Total number of Experts}
  \item{NESSENTIAL}{ Number of Experts indicating item 'essential' }
}
\details{
To determine content validity (in relation to job performance), a panel of
subject matter experts examines a set of items, indicating whether each item is
essential, useful, or not necessary. The CVR is calculated to indicate whether
each item is pertinent to content validity. \cr
CVR values range from -1 to +1. Values closer to +1 indicate that the experts
are in agreement that the item is essential to content validity.
}
\value{ Content Validity Ratio }
\references{ Lawshe, C. H. (1975). A quantitative approach to content validity.
\emph{Personnel Psychology, 28,} 563-575. }
\note{ CVR = (Ne - N/2)/(N/2) }
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\examples{
# Using 5 Expert panelists (SMEs)
# The ratings for an item are as follows:
# Rater1 = Essential
# Rater2 = Essential
# Rater3 = Essential
# Rater4 = Useful
# Rater5 = Not necessary
#
# essential = 3
CVratio (5, 3)
}
\keyword{ univar }

psychometric/man/discrim.Rd

\name{discrim}
\alias{discrim}
\title{ Item Discrimination }
\description{ Discrimination of an item is the ability of a specific item to
distinguish among upper and lower ability individuals on a test}
\usage{
discrim(x)
}
\arguments{
  \item{x}{ matrix or data.frame of items to be examined. Rows represent persons, Columns represent items }
}
\details{
The function takes data on individuals and their test scores and computes a total
score to separate high- and low-scoring individuals. The upper and lower groups
are defined as the top and bottom 1/3 of the total. Discrimination is then
computed and returned for each item using the formula: \cr
(number correct in the upper group - number correct in the lower group) / size of each group
}
\value{ Discrimination index for each item in the data.frame or matrix analyzed. }
\references{ Allen, M. J. & Yen, W. M. (1979). \emph{Introduction to measurement
theory.} Monterey, CA: Brooks/Cole. }
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\note{
\code{discrim} is used by \code{\link{item.exam}}

\code{discrim} is especially useful for dichotomously coded items such as
correct/incorrect. If items are not dichotomously coded, the interpretation of
\code{discrim} has less meaning.
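The index is simple to reproduce by hand, which clarifies what is computed; a
sketch under the same definition (top and bottom thirds on the total score; tie
handling may differ from \code{discrim} itself):
\preformatted{
hand.discrim <- function(x) {      # illustrative sketch only
  total <- rowSums(x)
  ord <- order(total)
  ng <- floor(nrow(x)/3)           # size of each extreme group
  lower <- x[ord[1:ng], , drop = FALSE]
  upper <- x[ord[(nrow(x) - ng + 1):nrow(x)], , drop = FALSE]
  (colSums(upper) - colSums(lower)) / ng
}
}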
}
\seealso{ \code{\link{item.exam}} }
\examples{
# see item.exam
# Scores on a test for 12 individuals
# 1 = correct
item1 <- c(1,1,1,0,1,1,1,1,1,1,0,1)
item2 <- c(1,0,1,1,1,1,1,1,1,1,1,0)
item3 <- c(1,1,1,1,1,1,1,1,1,1,1,1)
item4 <- c(0,1,0,1,0,1,0,1,1,1,1,1)
item5 <- c(0,0,0,0,1,0,0,1,1,1,1,1)
item6 <- c(0,0,0,0,0,0,1,0,0,1,1,1)
item7 <- c(0,0,0,0,0,0,0,0,1,0,0,0)
exam <- cbind(item1, item2, item3, item4, item5, item6, item7)
discrim(exam)
}
\keyword{ models }
\keyword{ univar }

psychometric/man/EnterMeta.Rd

\name{EnterMeta}
\alias{EnterMeta}
\title{ Enter Meta-Analysis Data}
\description{
This function creates a data entry object suitable for the typical
meta-analysis. The object will have the appropriate variable names.
}
\usage{
EnterMeta()
}
\details{
To create a data object appropriate for the meta-analysis functions in this package:

Type \cr
my.Meta.data <- EnterMeta() \cr
Then use the data editor to enter data in the appropriate columns.
}
\value{ Does not return a value, but rather is used for naming columns of a data.frame()
The final object (if saved) will contain: \cr
  \item{study }{ Enter Study Code or article name}
  \item{Rxy }{ Correlation coefficient}
  \item{n }{ Sample size for study}
  \item{Rxx }{ Reliability of predictor variable X }
  \item{Ryy }{ Reliability of criterion variable Y}
  \item{u }{ Degree of range restriction - ratio of restricted to unrestricted standard deviations}
  \item{moderator }{ moderator variable (if any)}
}
\author{Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\note{
This is the general format required for data objects used for all the
meta-analysis functions in this package. If certain variables are empty
(e.g., Rxx, u), then the appropriate correction is not made, but the placeholder
must be there. Moderator is useful for the user to subset the data and re-run
any functions.
}
\section{Warning }{ This function will not automatically save your data object.
You must create the object using the assignment operator. }
\seealso{ As an alternative, consider \code{\link{read.csv}} for importing data
prepared elsewhere (e.g., Excel)}
\examples{
# my.data <- EnterMeta()
}
\keyword{ manip }

psychometric/man/Est.true.Rd

\name{Est.true}
\alias{Est.true}
\title{ Estimation of a True Score }
\description{
Given the mean and reliability of a test, this function estimates the true score
based on an observed score. The estimate accounts for regression to the mean.
}
\usage{
Est.true(obs, mx, rxx)
}
\arguments{
  \item{obs}{ an observed score on test x}
  \item{mx}{ mean of test x }
  \item{rxx}{ reliability of test x}
}
\details{
The estimated true score (that) is computed as \cr
that <- mx*(1-rxx)+rxx*obs \cr
When the obs score is much higher than the mean, that < obs \cr
When the obs score is much lower than the mean, that > obs
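The arithmetic makes the regression-to-the-mean adjustment concrete; using the
test from the examples below (mean 500, rxx = .9):
\preformatted{
500*(1 - .9) + .9*700    # observed 700 shrinks toward the mean: 680
500*(1 - .9) + .9*400    # observed 400 is pulled up toward the mean: 410
}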
}
\value{ Estimated True score }
\references{ Dudek, F. J. (1979). The continuing misinterpretation of the standard
error of measurement. \emph{Psychological Bulletin, 86}, 335-337. }
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\seealso{ \code{\link{CI.tscore}}, \code{\link{SE.Est}} }
\examples{
# Examples from Dudek (1979)
# Suppose a test has mean = 500, SD = 100 rxx = .9
# If an individual scores 700 on the test
Est.true(700, 500, .9)
# If an individual scores 400 on the test
Est.true(400, 500, .9)
}
\keyword{ models }
\keyword{ distribution }

psychometric/man/FileDrawer.Rd

\name{FileDrawer}
\alias{FileDrawer}
\title{ File Drawer N }
\description{ Computes the number of 'lost' studies needed to render the observed
meta-analytic correlation non-significant (or to reduce it below a chosen cut-off). }
\usage{
FileDrawer(x, rc = 0.1)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}}
  \item{rc}{ cut-off correlation for which to make a comparison}
}
\details{
Used to detect availability bias in published correlations. It is computed as
n <- k * (rb/rc - 1), where n is the file drawer n, k is the number of studies in
the current meta-analysis, rb is rbar, and rc is the cut-off correlation for
which you wish to make a comparison. For a test of the null hypothesis, use
rc = 0. In many instances, practitioners are interested in reducing correlations
to less than 1 percent of the variance accounted for (i.e., rc = .1).
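For example, with k = 10 studies and rbar = .25 (hypothetical values), the number
of lost null studies needed to pull the mean correlation down to .10 is:
\preformatted{
k <- 10; rb <- .25; rc <- .10    # hypothetical values
k * (rb/rc - 1)                  # 15 'lost' studies
}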
}
\value{
  \item{"# of 'lost' studies needed" }{ File drawer N needed to change decision}
}
\references{
Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis:
Correcting error and bias in research findings (2nd ed.).} Thousand Oaks:
Sage Publications.

Rosenthal, R. (1979). The "file-drawer problem" and tolerance for null results.
\emph{Psychological Bulletin, 86,} 638-641.
}
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\seealso{ \code{\link{FunnelPlot}} }
\examples{
# From Arthur et al
data(ABHt32)
FileDrawer(ABHt32)

# From Hunter et al
data(HSJt35)
FileDrawer(HSJt35)
}
\keyword{ univar }
\keyword{ models }

psychometric/man/FunnelPlot.Rd

\name{FunnelPlot}
\alias{FunnelPlot}
\title{ Funnel Plot for Meta-Analysis }
\description{ Produces a simple x-y plot corresponding to the correlation and
sample size. A vertical line is produced representing the sample weighted
correlation. }
\usage{
FunnelPlot(x)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}}
}
\details{
A plot showing no evidence of availability bias will resemble a funnel, narrower
at the top and wider at the bottom of the plot. A plot showing evidence of
availability bias will not resemble a funnel.
}
\value{ a plot }
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001)
\emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis:
Correcting error and bias in research findings (2nd ed.).} Thousand Oaks:
Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis:
Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\seealso{ \code{\link{FileDrawer}} }
\examples{
# From Arthur et al
data(ABHt32)
FunnelPlot(ABHt32)

# From Hunter et al
data(HSJt35)
FunnelPlot(HSJt35)
}
\keyword{ univar }
\keyword{ models }

psychometric/man/HSJt35.Rd

\name{HSJt35}
\alias{HSJt35}
\docType{data}
\title{ Table 3.5 Hunter et al.}
\description{ This is a useful and fictitious example for conducting
meta-analysis. It appeared in Hunter et al (1982)}
\usage{data(HSJt35)}
\format{
  A data frame with 8 observations on the following 7 variables.
  \itemize{
  \item \emph{study} Study code
  \item \emph{Rxy} Published correlation
  \item \emph{n} Sample size
  \item \emph{Rxx} Reliability of predictor
  \item \emph{Ryy} Reliability of criterion
  \item \emph{u} Range Restriction Ratio
  \item \emph{moderator} none
}}
\details{
This example has been replicated a number of times (e.g., Hunter & Schmidt,
2004). It is useful in illustrating the basic concepts of validity
generalization. The data can be used to demonstrate bare-bones MA as well as
correction for artifacts. This data format is the format necessary for the R
functions in the psychometric package.
}
\references{
Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis:
Correcting error and bias in research findings (2nd ed.).} Thousand Oaks:
Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis:
Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\examples{
data(HSJt35)
rbar(HSJt35)
FunnelPlot(HSJt35)
CredIntRho(HSJt35)
}
\keyword{datasets}

psychometric/man/ICC.CI.Rd

\name{ICC.CI}
\alias{ICC.CI}
\alias{ICC1.CI}
\alias{ICC2.CI}
\title{ Confidence interval for the Intra-class Correlation }
\description{ Computes the CI at the desired level for the ICC1 and ICC2}
\usage{
ICC1.CI(dv, iv, data, level = 0.95)
ICC2.CI(dv, iv, data, level = 0.95)
}
\arguments{
  \item{dv}{ The dependent variable of interest }
  \item{iv}{ cluster or grouping variable }
  \item{data}{ data.frame containing the data }
  \item{level}{ Significance Level for constructing the CI, default is .95}
}
\details{
Computes the ICC from a one-way ANOVA. The CI is then computed at the desired
level using formulae provided by McGraw & Wong (1996). They use the terminology
ICC(1) and ICC(k) for ICC1 and ICC2 respectively.
}
\value{ A table with 3 elements:
  \item{LCL }{ lower confidence limit of the CI}
  \item{ICC }{ intra-class correlation}
  \item{UCL }{ upper confidence limit of the CI}
}
\references{
McGraw, K. O. & Wong, S. P. (1996). Forming inferences about some intraclass
correlation coefficients. \emph{Psychological Methods, 1,} 30-46.

Bliese, P. (2000). Within-group agreement, non-independence, and reliability:
Implications for data aggregation and analysis. In K. J. Klein & S. W. J.
Kozlowski (Eds.), \emph{Multilevel theory, research, and methods in
organizations: Foundations, extensions, and new directions (pp. 349-381).}
San Francisco: Jossey-Bass.
}
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com}}
\seealso{ \code{\link{ICC.lme}}, \code{\link[multilevel]{ICC1}}, \code{\link[multilevel]{ICC2}} }
\examples{
library(multilevel)
data(bh1996)
ICC1.CI(HRS, GRP, bh1996)
ICC2.CI(HRS, GRP, bh1996)
}
\keyword{ models }
\keyword{ univar }
\keyword{ htest }

psychometric/man/ICC.lme.Rd

\name{ICC.lme}
\alias{ICC.lme}
\alias{ICC1.lme}
\alias{ICC2.lme}
\title{ Intraclass Correlation Coefficient from a Mixed-Effects Model }
\description{ ICC1 and ICC2 computed from a lme() model. }
\usage{
ICC1.lme(dv, grp, data)
ICC2.lme(dv, grp, data, weighted = FALSE)
}
\arguments{
  \item{dv}{ The dependent variable of interest }
  \item{grp}{ cluster or grouping variable }
  \item{data}{ data.frame containing the data }
  \item{weighted}{ Whether or not a weighted mean is used in calculation of ICC2 }
}
\details{
First an lme() model is fit to the data. Then ICC1 is computed as
\eqn{t00/(t00 + sigma^2)}, where t00 is the intercept (between-group) variance of
the model and \eqn{sigma^2} is the residual variance for the model. The ICC2 is
computed by computing the ICC2 for each group \eqn{t00/(t00 + sigma^2/nj)} where
nj is the size of group j. The mean across all groups is then taken to be the
ICC2. However, one can specify that the mean should be weighted by group size
such that larger groups are given more weight. The calculation of the individual
group ICC2 is done by Bliese's \code{\link[multilevel]{GmeanRel}} function. An
alternate specification not used here, but sometimes seen in the literature for
ICC2, is to use the formula above for the total data set, but replace nj with the
average group size. This is the method used in Bliese's
\code{\link[multilevel]{mult.icc}}.
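A sketch of the ICC1 computation described above, using the \code{bh1996} data as
in the examples below (the \code{VarCorr} output is indexed by position here;
treat this as illustrative rather than as the function's exact internals):
\preformatted{
library(nlme); library(multilevel)
data(bh1996)
fit <- lme(HRS ~ 1, random = ~ 1 | GRP, data = bh1996)
vc <- VarCorr(fit)           # variance components
t00 <- as.numeric(vc[1, 1])  # intercept (between-group) variance
sig2 <- as.numeric(vc[2, 1]) # residual (within-group) variance
t00/(t00 + sig2)             # ICC1
}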
}
\value{ ICC1 or ICC2 }
\references{
Bliese, P. (2000). Within-group agreement, non-independence, and reliability:
Implications for data aggregation and analysis. In K. J. Klein & S. W. J.
Kozlowski (Eds.), \emph{Multilevel theory, research, and methods in
organizations: Foundations, extensions, and new directions (pp. 349-381).}
San Francisco: Jossey-Bass.
}
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\note{
ICC1.lme and ICC2.lme should in principle be equal to an ICC computed from a
one-way ANOVA only when the data are balanced (equal group sizes for all groups
and no missing data). The ICC.lme should be a more accurate measure of ICC in
all other instances. The three specifications of ICC2 mentioned above (details)
will be similar but not exactly equal because of group variability.
}
\section{Warning }{ If the data used are attached, you will sometimes receive a
warning that can be ignored. The warning states that the following variables ...
are masked. This is because the function first attaches the data and then
detaches it within the function. }
\seealso{ \code{\link{ICC.CI}}, \code{\link[multilevel]{mult.icc}}, \code{\link[multilevel]{GmeanRel}} }
\examples{
library(nlme)
library(multilevel)
data(bh1996)
ICC1.lme(HRS, GRP, data=bh1996)
ICC2.lme(HRS, GRP, data=bh1996)
}
\keyword{ models }
\keyword{ univar }

psychometric/man/item.exam.Rd

\name{item.exam}
\alias{item.exam}
\title{ Item Analysis }
\description{ Conducts an item level analysis. Provides item-total correlations,
standard deviations of items, difficulty, discrimination, and reliability and
validity indices.}
\usage{
item.exam(x, y = NULL, discrim = FALSE)
}
\arguments{
  \item{x}{ matrix or data.frame of items }
  \item{y}{ Criterion variable }
  \item{discrim}{ Whether or not the discrimination of item is to be computed}
}
\details{
If someone is interested in examining the items of a dataset contained in
data.frame x, and the criterion measure is also in data.frame x, one must parse
the matrix or data.frame and specify each part into the function. See the
examples below. Otherwise, one must be sure that x and y are properly
merged/matched. If one is not interested in assessing item-criterion
relationships, simply leave out that portion of the call. The function does not
check whether the items are dichotomously coded; this is user specified. As
such, one can specify that items are binary when in fact they are not. This has
the effect of computing the discrimination index for continuously coded
variables. \cr
The difficulty index (p) is simply the mean of the item. When dichotomously
coded, p reflects the proportion endorsing the item. However, when continuously
coded, p has a different interpretation.
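For instance, the difficulty and item-total pieces of the output can be checked
directly (a sketch assuming the \code{TestScores} data used in the examples
below; \code{item.exam} reports these among its columns):
\preformatted{
data(TestScores)
items <- TestScores[, 1:10]
colMeans(items, na.rm = TRUE)            # difficulty (p) for each item
total <- rowSums(items)
cor(items, total, use = "complete.obs")  # item-total correlations
}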
}
\value{ A table with rows representing each item and columns representing:
  \item{Sample.SD }{ Standard deviation of the item}
  \item{Item.total }{ Correlation of the item with the total test score }
  \item{Item.Tot.woi}{ Correlation of item with total test score (scored without item)}
  \item{Difficulty }{ Mean of the item (p) }
  \item{Discrimination }{ Discrimination of the item (u-l)/n }
  \item{Item.Criterion }{ Correlation of the item with the Criterion (y)}
  \item{Item.Reliab }{ Item reliability index}
  \item{Item.Rel.woi }{ Item reliability index (scored without item) }
  \item{Item.Validity }{ Item validity index }
}
\references{ Allen, M. J. & Yen, W. M. (1979). \emph{Introduction to measurement
theory.} Monterey, CA: Brooks/Cole. }
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\note{
Most textbooks suggest the point-biserial correlation for the item-total. Since
the point-biserial is equivalent to the Pearson r, the \code{cor} function is
used to render the Pearson r for each item-total. However, it might be suggested
that the polyserial is more appropriate. For practical purposes, the Pearson is
sufficient and is used here. \cr
If discrim = TRUE, then the discrimination index is computed and returned EVEN
IF the items are not dichotomously coded. The interpretation of the
discrimination index is then suspect. \code{\link{discrim}} computes the number
of correct responses in the upper and lower groups by summation of the '1s'
(correct responses). When data are continuous, the discrimination index
represents the difference in the sum of the scores divided by the number in each
group (1/3*N).}
\section{Warning }{ Be cautious when using data with missing values or small
data sets. \cr
Listwise deletion is employed for both X (matrix of items to be analyzed) and
Y (criterion). When the datasets are small, such listwise deletion can make a
big impact.
Further, since the upper and lower groups are defined as the upper and lower 1/3,
the stability of this division of examinees is greatly increased with larger N.}
\seealso{ \code{\link{alpha}}, \code{\link{discrim}} }
\examples{
data(TestScores)
# Look at the data
TestScores
# Examine the items
item.exam(TestScores[,1:10], y = TestScores[,11], discrim=TRUE)
}
\keyword{ models }
\keyword{ univar }

psychometric/man/MetaTable.Rd

\name{MetaTable}
\alias{MetaTable}
\title{ Summary function for 'Complete' Meta-Analysis}
\description{
Computes and returns the major quantities involved in a meta-analysis. It is
generic in the sense that no options are available to alter defaults.
}
\usage{
MetaTable(x)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy, n and artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}}
}
\details{
For a set of correlations for each study (i), the following calculations are
made and returned: r-bar \code{\link{rbar}}, variance in r-bar \code{\link{varr}},
variance due to sampling error (not approximated) \code{\link{vare}}, percent of
variance due to sampling error \code{\link{pvse}}, 95\% CI for r-bar (using both
the heterogeneous and homogeneous SE) \code{\link{CIrb}}, rho (corrected r-bar)
\code{\link{rhoCA}}, variance in rho \code{\link{varRCA}}, percent of variance
attributable to artifacts \code{\link{pvaaa}}, 90\% Credibility interval
\code{\link{CredIntRho}}
}
\value{ Data.frame with various statistics returned - see details above}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001)
\emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis:
Correcting error and bias in research findings (2nd ed.).} Thousand Oaks:
Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis:
Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\seealso{ \code{\link{rbar}}, \code{\link{rhoCA}} }
\examples{
# From Arthur et al
data(ABHt32)
MetaTable(ABHt32)

# From Hunter et al
data(HSJt35)
MetaTable(HSJt35)
}
\keyword{ univar }
\keyword{ models }

psychometric/man/psychometric-package.Rd

\name{psychometric-package}
\alias{psychometric-package}
\alias{psychometric}
\alias{apt}
\docType{package}
\title{ Applied Psychometric Theory}
\description{ Contains functions useful for correlation theory, meta-analysis
(validity-generalization), reliability, item analysis, inter-rater reliability,
and classical utility}
\details{
\tabular{ll}{
Package: \tab psychometric \cr
Type: \tab Package \cr
Version: \tab 2.2 \cr
Date: \tab 2010-08-07 \cr
License: \tab GPL (version 2.0 or later) \cr
}
This package corresponds to the basic concepts encountered in an introductory
course in Psychometric Theory at the Graduate level. It is especially useful for
Industrial/Organizational Psychologists, but will be useful for any student or
practitioner of psychometric theory. I originally developed this package to
correspond with concepts covered in PSYC 7429, the Psychometric Theory course at
the University of MO - St. Louis.
}
\author{
Thomas D. Fletcher\cr
Strategic Resources\cr
State Farm Insurance Cos.\cr
Maintainer: Thomas D.
Fletcher \email{tom.fletcher.mp7e@statefarm.com} \cr
}
\keyword{ package }
\seealso{ \code{multilevel-package} \code{ltm-package} \code{psy-package}
\code{polycor-package} \code{nlme-package} }
\examples{
# Convert Pearson r to Fisher z'
r2z (.51)
# Convert Fisher z' to r
z2r (.563)
# Construct a CI about a True Score
# Observed = 700, Test Ave. = 500, SD = 100, and reliability = .9
CI.tscore (700, 500, 100, .9)
# Compute the classical utility of a test
# Assuming base-rate = .5, selection ratio = .5 and rxy = .5
ClassUtil(rxy=.5, BR=.5, SR=.5)
# Examine test score items
data(TestScores)
item.exam(TestScores[,1:10], y = TestScores[,11], discrim=TRUE)
}

psychometric/man/pvaaa.Rd

\name{pvaaa}
\alias{pvaaa}
\title{ Percent of Variance Accounted for by Artifacts in Rho }
\description{ Computes the percentage of variance attributed to attenuating
artifacts (sampling error, restriction of range, reliability in predictor and
criterion).}
\usage{
pvaaa(x, aprox = FALSE)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy, n and artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}}
  \item{aprox}{ Logical test to determine if the approximate or exact var e is used}
}
\details{ Percent of variance is computed as: ( \code{vare} + \code{varAV} ) / \code{varr} * 100 }
\value{ A numeric value representing the percent of variance accounted for by artifacts }
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001)
\emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis:
Correcting error and bias in research findings (2nd ed.).} Thousand Oaks:
Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis:
Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\seealso{\code{\link{vare}}, \code{\link{varAV}}, \code{\link{varr}}, \code{\link{pvse}} }
\examples{
# From Arthur et al
data(ABHt32)
pvaaa(ABHt32)

# From Hunter et al
data(HSJt35)
pvaaa(HSJt35)
}
\keyword{ univar }
\keyword{ models }

psychometric/man/pvse.Rd

\name{pvse}
\alias{pvse}
\title{ Percent of variance due to sampling error }
\description{
Ratio of sampling error variance to weighted variance in correlations for a
meta-analysis. This value is compared to 75 (e.g., the 75\% rule) to determine
the presence of moderators.
}
\usage{
pvse(x)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}}
}
\details{ \code{pvse} <- \code{\link{vare}}/\code{\link{varr}}*100 }
\value{ A single numeric value of class matrix representing the \% of variance
accounted for by sampling error}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001)
\emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis:
Correcting error and bias in research findings (2nd ed.).} Thousand Oaks:
Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis:
Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D.
Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\seealso{ \code{\link{varr}}, \code{\link{vare}} }
\examples{
# From Arthur et al
data(ABHt32)
pvse(ABHt32)

# From Hunter et al
data(HSJt35)
pvse(HSJt35)
}
\keyword{ univar }
\keyword{ models }

psychometric/man/Qrbar.Rd

\name{Qrbar}
\alias{Qrbar}
\alias{aprox.Qrbar}
\title{ Meta-Analytic Q statistic for r-bar }
\description{ Provides a chi-square test for significant variation in the sample
weighted correlation, rbar}
\usage{
Qrbar(x)
aprox.Qrbar(x)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}}
}
\details{
Q is distributed as chi-square with df equal to the number of studies - 1.
Multiple equations exist presumably because of a need to do the calculations
\sQuote{by hand} in the past. A significant Q statistic implies the presence of
one or more moderating variables operating on the observed correlations.
}
\value{ A table containing the following items: \cr
  \item{CHISQ }{ Chi-square value}
  \item{df }{ degrees of freedom}
  \item{p-val }{ probability value}
}
\references{
Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001)
\emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum.

Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis:
Correcting error and bias in research findings (2nd ed.).} Thousand Oaks:
Sage Publications.

Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis:
Cumulating research findings across studies.} Beverly Hills: Sage Publications.
}
\author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} }
\note{ \code{Qrbar} is computed as: \eqn{sum((((n-1)*(r-rb)^2)/(1-rb^2)^2),na.rm=TRUE)} \cr
\code{aprox.Qrbar} is computed as: \eqn{(N/(1-rb^2)^2)*vr}
where n is the sample size of study i, N is the total sample size across studies,
rb is \code{\link{rbar}}, r is the correlation of study i, and vr is
\code{\link{varr}}. }
\section{Warning }{The test is presented by Hunter et al. (1982), but is NOT
recommended nor mentioned by Hunter & Schmidt (2004). The test is sensitive to
the number of studies included in the meta-analysis. Large meta-analyses may
find significant Q statistics when variation in the population is not present,
and small meta-analyses may find a lack of significant Q statistics when
moderators are present. Hunter & Schmidt (2004) recommend the credibility
interval, \code{\link{CredIntRho}}, or the 75\% rule, \code{\link{pvse}}, as
determinants of the presence of moderators.}
\seealso{ \code{\link{varr}}, \code{\link{vare}}, \code{\link{rbar}}, \code{\link{CredIntRho}}, \code{\link{pvse}}}
\examples{
# From Arthur et al
data(ABHt32)
aprox.Qrbar(ABHt32)

# From Hunter et al
data(HSJt35)
Qrbar(HSJt35)
aprox.Qrbar(HSJt35)
}
\keyword{ univar }
\keyword{ models }
\keyword{ htest }

psychometric/man/Qrho.Rd

\name{Qrho}
\alias{Qrho}
\title{ Meta-Analytic Q statistic for rho }
\description{ Provides a chi-square test for significant variation in the sample
weighted correlation corrected for attenuating artifacts}
\usage{
Qrho(x, aproxe = FALSE)
}
\arguments{
  \item{x}{ A matrix or data.frame with columns Rxy, n and artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}}
  \item{aproxe}{ Logical test to determine if the approximate or exact var e is used}
}
\details{
Q is distributed as chi-square with df equal to the number of studies - 1.
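The p-value follows directly from the chi-square distribution; for instance,
with k = 8 studies and a hypothetical Q of 14.2:
\preformatted{
k <- 8; Q <- 14.2                        # hypothetical values
pchisq(Q, df = k - 1, lower.tail = FALSE)
# about .048, significant at the .05 level
}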
A significant Q statistic implies the presence of one or more moderating variables operating on the observed correlations after corrections for artifacts. } \value{ A table containing the following items: \cr \item{CHISQ }{ Chi-square value} \item{df }{ degrees of freedom} \item{p-val }{ probabilty value} } \references{ Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum. Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \note{ Q is defined as: (k*vr)/(vav+ve) where, k is the number of studies, vr is \code{\link{varr}}, vav is \code{\link{varAV}}, and ve is \code{\link{vare}} } \section{Warning }{The test is sensitive to the number of studies included in the meta-analysis. Large meta-analyses may find significant Q statistics when variation in the population is not present, and small meta-analyses may find lack of significant Q statistics when moderators are present. Hunter & Schmidt (2004) recommend the credibility inteval, \code{\link{CredIntRho}}, or the 75\% rule, \code{\link{pvse}}, as determinants of the presence of moderators.} \seealso{ \code{\link{varr}}, \code{\link{vare}}, \code{\link{rbar}}, \code{\link{CredIntRho}}, \code{\link{pvse}}} \examples{ # From Arthur et al data(ABHt32) Qrho(ABHt32) # From Hunter et al data(HSJt35) Qrho(HSJt35) } \keyword{ univar } \keyword{ models } \keyword{ htest } psychometric/man/r.nil.Rd0000744000175100001440000000162711427307306015077 0ustar hornikusers\name{r.nil} \alias{r.nil} \alias{r.null} \title{ Nil hypothesis for a correlation } \description{ Performs a two-tailed t-test of the H0 that r = 0 } \usage{ r.nil(r, n) } \arguments{ \item{r}{ Correlation coefficient} \item{n}{ Sample Size} } \value{ Returns a table with 4 elements \item{\dQuote{H0:rNot0}}{ correlation to be tested} \item{t }{ t value for the H0} \item{df }{ degrees of freedom} \item{p }{ p value} } \references{ Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).} Mahwah, NJ: Lawrence Erlbaum. } \author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \seealso{ \code{\link{rdif.nul}}, \code{\link{CIrdif}} } \examples{ # From ch. 2 in Cohen et al (2003) r.nil(.657, 15) } \keyword{ htest } \keyword{ models } psychometric/man/r2z.Rd0000744000175100001440000000134611036415562014570 0ustar hornikusers\name{r2z} \alias{r2z} \alias{FISHER r to z} \title{ Fisher r to z' } \description{ Converts a Pearson correlation coefficient to Fishers z'} \usage{ r2z(x) } \arguments{ \item{x}{ Pearson correlation coefficient} } \details{ z' = .5 * log((1+r)/(1-r)) } \value{ Fisher z' } \references{ Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).} Mahwah, NJ: Lawrence Erlbaum. } \author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \seealso{ \code{\link{z2r}}, \code{\link{CIr}}, } \examples{ # From ch. 
2 in Cohen et al (2003) r2z(.657) } \keyword{ htest } \keyword{ models }psychometric/man/rbar.Rd0000744000175100001440000000277211036415562015005 0ustar hornikusers\name{rbar} \alias{rbar} \title{ Sample size weighted mean correlation} \description{ Computes the weighted mean correlation from a data object of the general format found in \code{\link{EnterMeta}}} \usage{ rbar(x) } \arguments{ \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}} } \details{ For a set of correlations for each study (i), rbar is computed as: sum(Ni*ri)/sum(Ni) where, Ni is the sample size of study i and ri is the correlation in study i. } \value{ Sample Weighted Average Correlation: uncorrected for artifacts other than sampling error } \references{ Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum. Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \note{ This is the mean correlation across studies corrected for sampling error. It is also known as bare-bones meta-analysis.} \seealso{ \code{\link{varr}}, \code{\link{rhoCA}} } \examples{ # From Arthur et al data(ABHt32) rbar(ABHt32) # From Hunter et al data(HSJt35) rbar(HSJt35) } \keyword{ univar } \keyword{ models } psychometric/man/rdif.nul.Rd0000744000175100001440000000232611036415564015575 0ustar hornikusers\name{rdif.nul} \alias{rdif.nul} \title{ Null hypothesis for difference in two correlations } \description{ Tests the hypothesis that two correlations are significantly different } \usage{ rdif.nul(r1, r2, n1, n2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{r1}{ Correlation 1} \item{r2}{ Correlation 2} \item{n1}{ Sample size for \code{r1} } \item{n2}{ Sample size for \code{r2} } } \details{ First converts r to z' for each correlation. Then constructs a z test for the difference z <- (z1 - z2)/sqrt(1/(n1-3)+1/(n2-3))} \value{ Returns a table with 2 elements \item{zDIF }{ z value for the H0} \item{p }{ p value} } \references{ Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).} Mahwah, NJ: Lawrence Erlbaum. } \author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \note{ Does not test alternate hypotheses (e.g., difference = .1) } \seealso{ \code{\link{r.nil}}, \code{\link{CIrdif}} } \examples{ # From ch. 2 in Cohen et al (2003) rdif.nul(.657, .430, 62, 143) } \keyword{ htest } \keyword{ models } psychometric/man/rhoCA.Rd0000744000175100001440000000271111036415564015046 0ustar hornikusers\name{rhoCA} \alias{rhoCA} \title{ Meta-Analytically Derived Correlation Coefficient Corrected for Artifacts} \description{ This represents the population correlation coefficient free from attenuaton due to artifacts (sampling error, range-restriction, reliability in the predictor and criterion).} \usage{ rhoCA(x) } \arguments{ \item{x}{ A matrix or data.frame with columns Rxy, n and artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}} } \details{ This is the sample weighted correlation coefficient \code{\link{rbar}} divided by the compound attenuation factor, \code{\link{CAFAA}}. 
} \value{ A numeric value represting the corrected correlation coefficient. } \references{ Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum. Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \seealso{ \code{\link{CAFAA}}, \code{\link{rbar}} } \examples{ # From Arthur et al data(ABHt32) rhoCA(ABHt32) # From Hunter et al data(HSJt35) rhoCA(HSJt35) } \keyword{ univar } \keyword{ models } psychometric/man/SBrel.Rd0000744000175100001440000000377511036415564015074 0ustar hornikusers\name{SpearmanBrown} \alias{SBrel} \alias{SBlength} \alias{SpearmanBrown} \title{ Spearman-Brown Prophecy Formulae} \description{ These two functions are various manipulations of the Spearman-Brown Prophecy Formula. They are useful in determining relibility if test length is changed or length of a new test if reliability were to change.} \usage{ SBrel(Nlength, rxx) SBlength(rxxp, rxx) } \arguments{ \item{Nlength}{ New length of a test in relation to original} \item{rxx}{ reliability of test x } \item{rxxp}{ reliability of desired (parallel) test x } } \details{ Nlength represents a ratio of new to original. If the new test has 10 items, and the original test has 5 items, Nlength is 2. Likewise, if the original test has 5 items, and the new test has 10 items, Nlength is .5. In general, researchers should aim for reliabilities > .9. \code{SBrel} is used to address the question, what if I increased/decreased my test length? What will the new reliability be? This is used when computing split-half reliabilities and when when concerned about reducing test length. \cr \code{SBlength} is used to address the question, how long must my test be (in relation to the original test) in order to achieve a desired reliability? \cr The formulae for each are: \cr rxxp <- Nlength*rxx/(1+(Nlength-1)*rxx) \cr N <- rxxp*(1-rxx)/(rxx*(1-rxxp)) } \value{ \item{rxxp }{the prophesized reliability } \item{N }{Ratio of new test length to original test length } } \references{ Allen, M. J. & Yen, W. M. (1979). \emph{Introduction to measurement theory.} Monterey, CA: Brooks/Cole. } \author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \seealso{ \code{\link{alpha}} } \examples{ # Given a test with rxx = .7, 10 items # Desire a test with rxx=.9, how many items are needed? new.length <- SBlength(.9, .7) new.length * 10 # 39 items are needed # what is the reliability of a test 1/2 as long SBrel(.5, .7) } \keyword{ univar } \keyword{ models } psychometric/man/SE.Meas.Rd0000744000175100001440000000547611036415564015260 0ustar hornikusers\name{SE.Meas} \alias{SE.Meas} \alias{SE.Est} \alias{SE.Pred} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Standard Errors of Measurement (test scores) } \description{ These functions will calculate the three Standard Errors of Measurement as described by Dudek(1979). They are useful in constructing CI about observed scores, true scores and predicting observed scores on parallel measures.} \usage{ SE.Meas(s, rxx) SE.Est (s, rxx) SE.Pred(sy, rxx) } %- maybe also 'usage' for other objects documented here. 
\arguments{ \item{s}{ Standard Deviation in tests scores on test x } \item{sy}{ Standard Deviation in tests scores on parallel test y = x} \item{rxx}{ Reliability of test x } } \details{ Dudek (1979) notes that in practice, individuals often misinterpret the SEM. In fact, most textbooks misinterpret these measures. The SE.Meas \eqn{(s*sqrt(1-rxx))} is useful in the construction of CI about observed scores, but should not be interpreted as indicating the TRUE SCORE is necessarily included in the CI. The SE.Est \eqn{(s*sqrt(rxx*(1-rxx)))} is useful in the construction of CI about the TRUE SCORE. The estimate of a CI for a TRUE SCORE also requires the calculation of a TRUE SCORE (due to regression to the mean) from observed scores. The SE.Pred \eqn{(sy*sqrt(1-rxx^2))} is useful in predicting the score on a parallel measure (Y) given a score on test X. SE.Pred is usually used to estimate the score of a re-test of an individual. } \value{ The returned value is the appropriate standard error } \references{ Dudek, F. J. (1979). The continuing misinterpretation of the standard error of measurement. \emph{Psychological Bulletin, 86}, 335-337. Lord, F. M. & Novick, M. R. (1968). \emph{Statistical theories of mental test scores.} Reading, MA: Addison-Wesley. Nunnally, J. C. & Bernstein, I. H. (1994). \emph{Psychometric Theory (3rd ed.).} New York: McGraw-Hill. } \author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \note{ Since strictly parallel tests have the same SD, s and sy are equivalent in these functions. SE.Meas() is used by \code{\link{CI.obs}}. SE.Est() is used by \code{\link{CI.tscore}}. You must use \code{\link{Est.true}} to first compute the estimated true score from an observed score accounting for regression to the mean. } \seealso{ \code{\link{Est.true}}, \code{\link{CI.obs}}, \code{\link{CI.tscore}} } \examples{ # Examples from Dudek (1979) # Suppose a test has mean = 500, SD = 100 rxx = .9 # If an individual scores 700 on the test # The three SE are: SE.Meas (100, .9) SE.Est (100, .9) SE.Pred (100, 9) # CI about the true score CI.tscore(700, 500, 100, .9) # CI about the observed score CI.obs(700, 100, .9) } \keyword{ htest } \keyword{ distribution } psychometric/man/SErbar.Rd0000744000175100001440000000332411036415564015231 0ustar hornikusers\name{SErbar} \alias{SErbar} \alias{SERHET} \alias{SERHOM} \title{ Standard Error for Sample Size Weighted Mean Correlation } \description{ The standard error of homogenous or heterogenous samples is computed to be used for construction of confidence intervals about the Sample Size Weighted Mean Correlation in meta-analysis. Use \code{SERHOM} if no moderators are present (population is homogenous), and use \code{SERHET} if moderators are present (population is heterogenous). } \usage{ SERHOM(x) SERHET(x) } \arguments{ \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}} } \details{ The formula for each are: \cr SERHOM <- \eqn{(1-rb^2)/sqrt(N-k)} \cr SERHET <- \eqn{sqrt((1-rb^2)^2/(N-k)+varRes(x)/k)} where, rb is \code{\link{rbar}}, N is the total sample size, k is the number of studies. } \value{ A numeric value, the standard error } \references{ Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum. Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). 
\emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \seealso{ \code{\link{CIrb}}, \code{\link{rbar}} } \examples{ # From Arthur et al data(ABHt32) SERHOM(ABHt32) SERHET(ABHt32) CIrb(ABHt32) # From Hunter et al data(HSJt35) SERHOM(HSJt35) SERHET(HSJt35) CIrb(HSJt35) } \keyword{ univar } psychometric/man/SEz.Rd0000744000175100001440000000176711036415564014565 0ustar hornikusers\name{SEz} \alias{SEz} \title{ Standard Error of Fishers z prime } \description{ Given a sample size, n, will compute the aproximate standard error for z prime This is useful for constructing confidence intervals about a correlation. } \usage{ SEz(n) } \arguments{ \item{n}{ sample size } } \details{ SEz = 1/sqrt(n-3) } \value{ The approximate standard error for Fisher's z prime } \references{ Olkin, I. & Finn, J. D. (1995). Correlation Redux. \emph{Psychological Bulletin, 118}, 155-164. Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).} Mahwah, NJ: Lawrence Erlbaum. } \author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \seealso{ \code{\link{r2z}}, \code{\link{CIr}}, \code{\link{CIz}}, \code{\link{z2r}} } \examples{ # From ch. 2 in Cohen et al (2003) zp <- r2z(.657) zp SEz(15) } \keyword{ htest } \keyword{ models } psychometric/man/TestScores.Rd0000744000175100001440000000242111036413204016133 0ustar hornikusers\name{TestScores} \alias{TestScores} \docType{data} \title{Fictitious Test Scores for Illustrative Purposes} \description{ These data were created to correspond to scores for 30 examinees on 10 items of test X plus a score on criterion Y. } \usage{data(TestScores)} \format{ A matrix with 30 observations on the following 11 variables. \describe{ \item{\code{i1}}{ item1 on test x} \item{\code{i2}}{ item2 on test x} \item{\code{i3}}{ item3 on test x} \item{\code{i4}}{ item4 on test x} \item{\code{i5}}{ item5 on test x} \item{\code{i6}}{ item6 on test x} \item{\code{i7}}{ item7 on test x} \item{\code{i8}}{ item8 on test x} \item{\code{i9}}{ item9 on test x} \item{\code{i10}}{ item10 on test x} \item{\code{y}}{ Score on criterion Y} } } \details{ These data are constructed such that items 1 - 10 are coded 0,1 for incorrect/correct responses. The data illustate that some items are better for maintaining internal consistency, whereas others may be more useful for relating to external criteria. 
} \seealso{\code{\link{item.exam}}} \examples{ data(TestScores) str(TestScores) item.exam(TestScores[,1:10], y = TestScores[,11], discrim=TRUE) alpha(TestScores[,1:10]) } \keyword{datasets} psychometric/man/Utility.Rd0000744000175100001440000000376711036415564015531 0ustar hornikusers\name{Utility} \alias{Utility} \alias{MargUtil} \alias{TotUtil} \title{ Marginal and Total Utility of a Test} \description{ Computes the marginal or total utility of a test.} \usage{ MargUtil(Rxy, Sy, MXg, COST, Nselected) TotUtil(Rxy, Sy, MXg, COST, Nselected) } \arguments{ \item{Rxy}{ Correlation of Test X with Criterion Y } \item{Sy}{ Standard Deviation of Y in monetary units } \item{MXg}{ Mean of selected group on test X in standard score units } \item{COST}{ Total cost of testing } \item{Nselected}{ number of applicants selected} } \details{ \emph{Marginal utility} is the gain expected in the outcome (i.e., job performance), in monetary units, for a person from the predictor selected subgroup compared to a person who is randomly selected. \emph{Total utility} is the total gain in the outcome (i.e., job performance), in monetary units, expected for those selected using the test. } \value{ Marginal or Total Utility of a Test (a numeric value in monetary units) } \references{ Cascio, W. F. & Aguinis, H. (2005). \emph{Applied Psychology in Human Resource Management (6th ed.)} Englewood Cliffs, NJ: Prentice-Hall. Murphy, K. R. & Davidshofer, C. O. (2005). \emph{Psychological testing: Principles and applications (5th ed.).} Saddle River, NJ: Prentice Hall. } \author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \note{ Computation for marginal and total utility are: MU <- Rxy*Sy*MXg - COST/Nselected \cr TU <- Nselected*Rxy*Sy*MXg - COST The computation of Sy should be done locally (within an organization) and is often difficult. } \seealso{ \code{\link{ClassUtil}} } \examples{ # Rxy = .35 # Each year 72 workers are hired # SD of performance in dollars is $4000 # 1 out of 10 applicants are selected # cost per test = $5 # average test score for those selected = 1.76 MargUtil(.35, 4000, 1.76, 720*5, 72) TotUtil (.35, 4000, 1.76, 720*5, 72) } \keyword{ univar } psychometric/man/varAV.Rd0000744000175100001440000000265311036415564015076 0ustar hornikusers\name{varAV} \alias{varAV} \title{ Variance Due to Attenuating Artifacts} \description{ Since the presence of artifacts may inflate the observed variance in correlations, one needs to compute the variance attributed to the artifacts. } \usage{ varAV(x) } \arguments{ \item{x}{ A matrix or data.frame with columns Rxy, n and artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}} } \details{ varAV is computed as \eqn{\code{rhoCA}^2 * \code{CAFAA}^2 * \code{CVF}} varAV is used to compute the residual variance in correlations \code{\link{varResT}} } \value{ A numeric value representing the variance due to attenuating artifacts} \references{ Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum. Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D. 
Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \seealso{ \code{\link{CAFAA}},\code{\link{rhoCA}}, \code{\link{CVF}} } \examples{ # From Arthur et al data(ABHt32) varAV(ABHt32) # From Hunter et al data(HSJt35) varAV(HSJt35) } \keyword{ univar } \keyword{ models } psychometric/man/vare.Rd0000744000175100001440000000414611427307363015014 0ustar hornikusers\name{vare} \alias{vare} \alias{aprox.vare} \alias{vare36} \title{ Sampling Error Variance} \description{ Computes sampling error variance in correlations from a data object of the general format found in \code{\link{EnterMeta}} } \usage{ vare(x) aprox.vare(x) vare36(x) } \arguments{ \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}} } \details{ \code{vare} is the 'core' equation for estimating the sampling error variance. Presumably because of the history of meta-analysis and lack of desktop computing power, hand-calculatons were needed. Thus, two additional equations were developed. The \code{aprox.vare} appears in many textbooks and is used often (Arthur et al.). Another variation is presented by Hunter & Schmidt (2004) as their equation 3.6 \code{vare36}. } \value{ Sampling error variance (exact, approximate, or alternate aproximate) } \references{ Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum. Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \note{ The equations for each function are: \cr vare <- \eqn{sum(n*(1-rb^2)^2/(n-1),na.rm=TRUE)/sum(n,na.rm=TRUE)} \cr aprox.vare <- \eqn{(1-rb^2)^2/(mean(n, na.rm=TRUE)-1)} \cr vare36 <- \eqn{((1-rb^2)^2*k)/T} where k is number of studies and T is total sample size These are only presented here for completeness. The recommended equation is \code{vare}. } \seealso{ \code{\link{varr}}, \code{\link{rbar}} } \examples{ # From Arthur et al data(ABHt32) vare(ABHt32) aprox.vare(ABHt32) vare36(ABHt32) # From Hunter et al data(HSJt35) vare(HSJt35) aprox.vare(HSJt35) vare36(HSJt35) } \keyword{ univar } \keyword{ models } psychometric/man/varr.Rd0000744000175100001440000000310511036415564015022 0ustar hornikusers\name{varr} \alias{varr} \title{ Sample Size weighted variance} \description{ Computes the weighted variance in correlations from a data object of the general format found in \code{\link{EnterMeta}}} \usage{ varr(x) } \arguments{ \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}} } \details{ For a set of correlations for each study (i), varr is computed as: \eqn{sum(Ni*(ri-rbar)^2)/sum(Ni)} where, Ni is the sample size of study i and ri is the correlation in study i and rbar is the weighted mean correlation. } \value{ Sample weighted variance in correlations: uncorrected for artifacts other than sampling error } \references{ Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum. Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). 
\emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \note{ This is the variance in correlations across studies corrected for sampling error. It is also known as bare-bones meta-analysis.} \seealso{ \code{\link{vare}}, \code{\link{rbar}} } \examples{ # From Arthur et al data(ABHt32) varr(ABHt32) # From Hunter et al data(HSJt35) varr(HSJt35) } \keyword{ univar } \keyword{ models } psychometric/man/varRCA.Rd0000744000175100001440000000300411036415564015164 0ustar hornikusers\name{varRCA} \alias{varRCA} \title{ Variance in Meta-Analytic Rho } \description{ Computes the estimate of the variance in the corrected correlation coefficient.} \usage{ varRCA(x, aprox = FALSE) } \arguments{ \item{x}{ A matrix or data.frame with columns Rxy, n and artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}} \item{aprox}{ Logical test to determine if the approximate or exact var e is used } } \details{ Variance in Rho is computed as: \eqn{\code{VarResT} / \code{CAFFA}^2} This is used to construct credibility intervals for rho \code{\link{CredIntRho}} } \value{ A numeric value representing the variance in the population correlation coefficient } \references{ Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum. Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \seealso{ \code{\link{rhoCA}}, \code{\link{CAFAA}}, \code{\link{varResT}}, \code{\link{varRes}} \code{\link{CredIntRho}}} \examples{ # From Arthur et al data(ABHt32) varRCA(ABHt32) # From Hunter et al data(HSJt35) varRCA(HSJt35) } \keyword{ univar } \keyword{ models } psychometric/man/varRes.Rd0000744000175100001440000000261611036415564015320 0ustar hornikusers\name{varRes} \alias{varRes} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Residual Variance in Meta-Analytic Correlation } \description{ Computes the residual variance in the sample-weighted correlation coefficient by removing variance due to sampling error.} \usage{ varRes(x) } \arguments{ \item{x}{ A matrix or data.frame with columns Rxy and n: see \code{\link{EnterMeta}}} } \details{ computed as \code{varr} - \code{vare} Useful in the construction of the SE for heterogenous populations \code{\link{SERHET}}} \value{ A numeric value representing the residual variance } \references{ Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum. Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D. 
Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \seealso{ \code{\link{varr}}, \code{\link{vare}}, \code{\link{SERHET}} } \examples{ # From Arthur et al data(ABHt32) varRes(ABHt32) # From Hunter et al data(HSJt35) varRes(HSJt35) } \keyword{ univar } \keyword{ models } psychometric/man/varResT.Rd0000744000175100001440000000273011036415564015441 0ustar hornikusers\name{varResT} \alias{varResT} \title{ True residual variance in correlations } \description{ Residual variance attributed to both the variance due to sampling error and artifacts. } \usage{ varResT(x, aprox = FALSE) } \arguments{ \item{x}{ A matrix or data.frame with columns Rxy, n and artifacts (Rxx, Ryy, u): see \code{\link{EnterMeta}}} \item{aprox}{ Logical test to determine if the approximate or exact var e is used } } \details{ \code{varResT} <- \code{varr} - \code{vare} - \code{varAV} varResT is used in the compution of the variance in rho, \code{varRCA} } \value{ A numeric value representing the True residual variance } \references{ Arthur, Jr., W., Bennett, Jr., W., and Huffcutt, A. I. (2001) \emph{Conducting Meta-analysis using SAS.} Mahwah, NJ: Erlbaum. Hunter, J.E. and Schmidt, F.L. (2004). \emph{Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.).} Thousand Oaks: Sage Publications. Hunter, J.E., Schmidt, F.L., and Jackson, G.B. (1982). \emph{Meta-analysis: Cumulating research findings across studies.} Beverly Hills: Sage Publications. } \author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \seealso{ \code{\link{varr}}, \code{\link{vare}}, \code{\link{varAV}}, \code{\link{varRCA}} } \examples{ # From Arthur et al data(ABHt32) varResT(ABHt32) # From Hunter et al data(HSJt35) varResT(HSJt35) } \keyword{ univar } \keyword{ models } psychometric/man/z2r.Rd0000744000175100001440000000147311036415564014573 0ustar hornikusers\name{z2r} \alias{z2r} \alias{Fisher z to r} \title{ Fisher z' to r} \description{ Converts a Fishers z' to Pearson correlation coefficient } \usage{ z2r(x) } \arguments{ \item{x}{ z' (Fishers z prime) } } \details{ r = (exp(2*z)-1)/exp(2*z)+1) } \value{ A Pearson Correlation coefficient } \references{ Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). \emph{Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).} Mahwah, NJ: Lawrence Erlbaum. } \author{ Thomas D. Fletcher \email{tom.fletcher.mp7e@statefarm.com} } \seealso{ \code{\link{r2z}}, \code{\link{CIr}}, \code{\link{CIz}}, \code{\link{SEz}} } \examples{ # From ch. 
2 in Cohen et al (2003) zp <- r2z(.657) zp z2r(zp) } \keyword{ htest } \keyword{ models } psychometric/NAMESPACE0000744000175100001440000000006610471145560014226 0ustar hornikusersexportPattern("^[^\\.]") import(multilevel, nlme) psychometric/R/0000755000175100001440000000000011121214530013170 5ustar hornikuserspsychometric/R/alpha.CI.R0000744000175100001440000000064710542121564014714 0ustar hornikusers"alpha.CI" <- function (alpha, k, N, level=.90, onesided=FALSE) { if (!onesided) { nomau <- (1 - level)/2 nomal <- 1-nomau } else { nomau <- (1 - level) nomal <- (level) } df1 <- N-1 df2 <- (k-1)*(N-1) Fl <- qf(nomal, df1, df2) Fu <- qf(nomau, df1, df2) lcl <- 1 - (1 - alpha) * Fl ucl <- 1 - (1 - alpha) * Fu mat <- data.frame(LCL = lcl, ALPHA = alpha, UCL = ucl) return(mat) } psychometric/R/alpha.R0000744000175100001440000000025710466514060014421 0ustar hornikusers"alpha" <- function(x) { x <- na.exclude(as.matrix(x)) Sx <- sum(var(x)) SumSxi <- sum(apply(x,2,var)) k <- ncol(x) alpha <- k/(k-1)*(1-SumSxi/Sx) return(alpha) } psychometric/R/aprox.Qrbar.R0000744000175100001440000000045011121216611015514 0ustar hornikusers"aprox.Qrbar" <- function(x) { vr <- varr(x) N <- sum(x$n,na.rm=TRUE) rb <- rbar(x) chi <- (N/(1-rb^2)^2)*vr k <- length (x$Rxy[!(is.na(x$Rxy))]) pval <- 1 - pchisq(chi, k-1) mat <- matrix(c(chi,k-1,pval),ncol=3) colnames(mat) <- c("CHISQ", "df", "p-val") return(mat) } psychometric/R/aprox.vare.R0000744000175100001440000000016611121216670015413 0ustar hornikusers"aprox.vare" <- function(x) { n <- x$n rb <- rbar(x) ve <- (1-rb^2)^2/(mean(n, na.rm=TRUE)-1) return(ve) } psychometric/R/aRxx.R0000744000175100001440000000030611121216715014244 0ustar hornikusers"aRxx" <- function(x) { Rxx <- x$Rxx n <- length (x$Rxx[!(is.na(x$Rxx))]) a <- mean(sqrt(Rxx),na.rm=TRUE) va <- var(sqrt(Rxx),na.rm=TRUE)*(n-1)/n out <- list(a,va) return(out) } psychometric/R/bRyy.R0000744000175100001440000000030611121216731014245 0ustar hornikusers"bRyy" <- function(x) { Ryy <- x$Ryy n <- length (x$Ryy[!(is.na(x$Ryy))]) b <- mean(sqrt(Ryy),na.rm=TRUE) vb <- var(sqrt(Ryy),na.rm=TRUE)*(n-1)/n out <- list(b,vb) return(out) } psychometric/R/CAFAA.R0000744000175100001440000000017110466514060014122 0ustar hornikusers"CAFAA" <- function(x) { a <- aRxx(x)[[1]] b <- bRyy(x)[[1]] c <- cRR(x)[[1]] AA <- a*b*c return(AA) } psychometric/R/CI.obs.R0000744000175100001440000000037310466514060014410 0ustar hornikusers"CI.obs" <- function (obs, s, rxx, level=.95) { noma <- 1-level sem <- SE.Meas(s, rxx) zs <- - qnorm(noma/2) mez <- zs*sem lcl <- obs - mez ucl <- obs + mez mat <- data.frame(SE.Meas = sem, LCL = lcl, OBS = obs, UCL = ucl) return(mat) } psychometric/R/CI.Rsq.R0000744000175100001440000000044010466514060014365 0ustar hornikusers"CI.Rsq" <- function(rsq, n, k, level=.95) { noma <- 1-level sersq <- sqrt((4*rsq*(1-rsq)^2*(n-k-1)^2)/((n^2-1)*(n+3))) zs <- - qnorm(noma/2) mez <- zs*sersq lcl <- rsq - mez ucl <- rsq + mez mat <- data.frame(Rsq = rsq, SErsq = sersq, LCL = lcl, UCL = ucl) return(mat) } psychometric/R/CI.Rsqlm.R0000744000175100001440000000031110466514060014713 0ustar hornikusers"CI.Rsqlm" <- function (obj, level=.95) { l <- level rsq <- summary(obj)$r.squared k <- summary(obj)$df[1] - 1 n <- obj$df + k + 1 mat <- CI.Rsq (rsq, n, k, level=l) return(mat) } psychometric/R/CI.tscore.R0000744000175100001440000000044610466514060015125 0ustar hornikusers"CI.tscore" <- function(obs, mx, s, rxx, level=.95) { noma <- 1-level see <- SE.Est(s, rxx) zs <- - qnorm(noma/2) mez <- zs*see that <- Est.true(obs, mx, rxx) lcl 
<- that - mez ucl <- that + mez mat <- data.frame(SE.Est = see, LCL = lcl, T.Score = that, UCL = ucl) return(mat) } psychometric/R/CIr.R0000744000175100001440000000032010466514060014000 0ustar hornikusers"CIr" <- function (r, n, level=.95) { z <- r2z(r) uciz <- CIz(z, n, level)[2] lciz <- CIz(z, n, level)[1] ur <- z2r(uciz) lr <- z2r(lciz) mat <- list(lr,ur) return(as.numeric(mat)) } psychometric/R/CIrb.R0000744000175100001440000000044310471160014014140 0ustar hornikusers"CIrb" <- function (x, LEVEL=.95, homogenous=TRUE) { rb <- rbar(x) noma <- 1 - LEVEL if (!homogenous) {serb <- SERHOM(x)} else {serb <- SERHET(x)} zs <- -qnorm(noma/2) merb <- zs*serb lcl <- rb - merb ucl <- rb + merb mat <- list(lcl, ucl) return(as.numeric(mat)) } psychometric/R/CIrdif.R0000744000175100001440000000042610466514060014472 0ustar hornikusers"CIrdif" <- function (r1, r2, n1, n2, level=.95) { rd = r1 - r2 noma <- 1-level sed <- sqrt((1-r1^2)/n1 + (1-r2^2)/n2) zs <- - qnorm(noma/2) mez <- zs*sed lcl <- rd - mez ucl <- rd + mez mat <- data.frame(DifR = rd, SED=sed, LCL = lcl, UCL = ucl) return(mat) } psychometric/R/CIz.R0000744000175100001440000000032210466514060014012 0ustar hornikusers"CIz" <- function (z, n, level=.95) { noma <- 1-level sez <- SEz(n) zs <- - qnorm(noma/2) mez <- zs*sez lcl <- z - mez ucl <- z + mez mat <- list(lcl, ucl) return(as.numeric(mat)) } psychometric/R/ClassUtil.R0000744000175100001440000000116010466514060015231 0ustar hornikusers"ClassUtil" <- function (rxy = 0, BR = .5, SR = .5) { pTP <- BR*SR + rxy*sqrt(BR*(1-BR) * SR*(1-SR)) pFN <- BR - pTP pFP <- SR - pTP pTN <- 1 - pTP - pFN - pFP sen <- pTP/(pTP+pFN) spe <- pTN/(pFP+pTN) cd <- (pTP+pTN)*100 suc <- pTP/(pTP+pFP) imp <- (suc - BR)*100 mat <- matrix(rbind(pTP,pFN,pFP,pTN,NA,sen,spe,cd,suc,imp)) colnames(mat) <- "Probabilities" rownames(mat) <- c("True Positives", "False Negatives", "False Positives", "True Negatives","--", "Sensitivity", "Specificity", "% of Decisions Correct", "Proportion Selected Succesful", "% Improvement over BR") return(mat) } psychometric/R/CredIntRho.R0000744000175100001440000000037610471154546015344 0ustar hornikusers"CredIntRho" <- function(x, aprox=FALSE, level=.95) { r <- rhoCA(x) if (!aprox) { vr <- varRCA(x)} else { vr <- varRCA(x,T)} zs <- - qnorm((1-level)/2) sdr <- sqrt(vr) lcl <- r - zs * sdr ucl <- r + zs * sdr return(list(lcl,ucl)) } psychometric/R/cRR.R0000744000175100001440000000042711121216744014016 0ustar hornikusers"cRR" <- function (x) { rb = rbar(x) n <- length (x$u[!(is.na(x$u))]) u <- x$u if (n == 0) { c <- 1 vc <- 0} else { c <- sqrt((1-u^2)*rb^2+u^2) vc <- var(c, na.rm=TRUE)*(n-1)/n } mc <- mean(c, na.rm=TRUE) out <- list(mc, vc) return(out) } psychometric/R/cRRr.R0000744000175100001440000000022110466514060014173 0ustar hornikusers"cRRr" <- function (rr, sdy, sdyu) { rxy <- (rr*(sdyu/sdy))/sqrt(1+rr^2*((sdyu^2/sdy^2)-1)) return(data.frame(unrestricted = rxy)) } psychometric/R/CVF.R0000744000175100001440000000031610466514060013746 0ustar hornikusers"CVF" <- function(x) { ma <- aRxx(x)[[1]] va <- aRxx(x)[[2]] mb <- bRyy(x)[[1]] vb <- bRyy(x)[[2]] mc <- cRR(x)[[1]] vc <- cRR(x)[[2]] cv <- va/ma^2 + vb/mb^2 + vc/mc^2 return(cv) } psychometric/R/CVratio.R0000744000175100001440000000017710466514060014704 0ustar hornikusers"CVratio" <- function(NTOTAL, NESSENTIAL) { n <- NTOTAL ne <- NESSENTIAL cvr <- (ne - n/2)/(n/2) return(cvr) } psychometric/R/discrim.R0000744000175100001440000000051010466514060014756 0ustar hornikusers"discrim" <- function(x) { x <- na.exclude(as.matrix(x)) k <- ncol(x) N 
<- nrow(x) ni <- as.integer(N/3) TOT <- apply(x, 1, mean) tmpx <- cbind(x,TOT)[order(TOT),] tmpxU <- tmpx[(N+1-ni):N,] tmpxL <- tmpx[1:ni,] Ui <- apply(tmpxU,2,sum) Li <- apply(tmpxL,2,sum) discrim <- (Ui - Li)/ni return (discrim[1:k]) } psychometric/R/EnterMeta.R0000744000175100001440000000056010466514060015215 0ustar hornikusers"EnterMeta" <- function () { d <- matrix(,ncol=7) d <- data.frame(d) names(d) <-c("study", "Rxy", "n", "Rxx", "Ryy", "u", "moderator") d$study <- as.factor(d$study) d$Rxy <- as.numeric(d$Rxy) d$n <- as.numeric(d$n) d$Rxx <- as.numeric(d$Rxx) d$Ryy <- as.numeric(d$Ryy) d$u <- as.numeric(d$u) d$moderator <- as.factor(d$moderator) meta <- edit(d) } psychometric/R/Est.true.R0000744000175100001440000000013210466514060015035 0ustar hornikusers"Est.true" <- function (obs, mx, rxx) { that <- mx*(1-rxx)+rxx*obs return(that) } psychometric/R/FileDrawer.R0000744000175100001440000000033510466514060015355 0ustar hornikusers"FileDrawer" <- function(x, rc=.1) { k <- length (x$Rxy[!(is.na(x$Rxy))]) rb <- rbar(x) rc <- rc n <- k * (rb/rc - 1) mat <- matrix(n) colnames(mat) <- c("# of 'lost' studies needed") return(mat) } psychometric/R/FunnelPlot.R0000744000175100001440000000025510466514060015420 0ustar hornikusers"FunnelPlot" <- function(x) { rxy <- x$Rxy N <- x$n rb <- rbar(x) plot(rxy,N, xlab="Effect Sizes", ylab="Sample Sizes", main="Funnel Plot") abline(v=rb) } psychometric/R/ICC1.CI.R0000744000175100001440000000106010572545010014273 0ustar hornikusers"ICC1.CI" <- function (dv, iv, data, level=.95) { require(multilevel) attach(data) mod <- aov(dv ~ as.factor(iv), na.action=na.omit) detach(data) icc <- ICC1(mod) tmod <- summary(mod) df1 <- tmod[[1]][1,1] df2 <- tmod[[1]][2,1] Fobs <- tmod[[1]][1,4] n <- df2/(df1+1) # k-1 noma <- 1- level Ftabl <- qf(noma/2, df1, df2, lower.tail=F) Ftabu <- qf(noma/2, df2, df1, lower.tail=F) Fl <- Fobs/Ftabl Fu <- Fobs*Ftabu lcl <- (Fl-1)/(Fl+n) ucl <- (Fu-1)/(Fu+n) mat <- data.frame(LCL=lcl, ICC1=icc, UCL=ucl) return(mat) } psychometric/R/ICC1.lme.R0000744000175100001440000000041110471144402014552 0ustar hornikusers"ICC1.lme" <- function (dv, grp, data) { require(nlme) attach(data) mod <- lme(dv ~ 1, random=~1|grp, na.action=na.omit) detach(data) t0 <- as.numeric(VarCorr(mod)[1,1]) sig2 <- as.numeric(VarCorr(mod)[2,1]) icc1 <- t0/(t0+sig2) return(icc1) } psychometric/R/ICC2.CI.R0000744000175100001440000000104210572545056014306 0ustar hornikusers"ICC2.CI" <- function (dv, iv, data, level=.95) { require(multilevel) attach(data) mod <- aov(dv ~ as.factor(iv), na.action=na.omit) detach(data) icc <- ICC2(mod) tmod <- summary(mod) df1 <- tmod[[1]][1,1] df2 <- tmod[[1]][2,1] Fobs <- tmod[[1]][1,4] n <- df2/(df1+1) # k-1 noma <- 1- level Ftabl <- qf(noma/2, df1, df2, lower.tail=F) Ftabu <- qf(noma/2, df2, df1, lower.tail=F) Fl <- Fobs/Ftabl Fu <- Fobs*Ftabu lcl <- 1-1/Fl ucl <- 1-1/Fu mat <- data.frame(LCL=lcl, ICC2=icc, UCL=ucl) return(mat) } psychometric/R/ICC2.lme.R0000744000175100001440000000050410471154606014565 0ustar hornikusers"ICC2.lme" <- function (dv, grp, data, weighted=FALSE) { require(nlme) attach(data) mod <- lme(dv ~ 1, random=~1|grp, na.action=na.omit) detach(data) if (!weighted) {icc2 <- mean(GmeanRel(mod)$MeanRel) } else { icc2 <- weighted.mean(GmeanRel(mod)$MeanRel, GmeanRel(mod)$GrpSize) } return(icc2) } psychometric/R/item.exam.R0000744000175100001440000000173711036412772015230 0ustar hornikusers"item.exam" <- function (x, y = NULL, discrim = FALSE) { x <- na.exclude(as.matrix(x)) if (!discrim) { discrim <- NA } else { discrim <- 
discrim(x) } k <- ncol(x) n <- nrow(x) TOT <- apply(x, 1, sum) TOT.woi <- TOT - (x) diff <- apply(x, 2, mean) rix <- cor(x, TOT, use = "complete") rix.woi <- diag(cor(x, TOT.woi, use = "complete")) sx <- apply(x, 2, sd) vx <- ((n - 1)/n) * sx^2 if (is.null(y)) { riy <- NA } else { y <- y riy <- cor(x, y, use = "complete") } i.val <- riy * sqrt(vx) i.rel <- rix * sqrt(vx) i.rel.woi <- rix.woi * sqrt(vx) mat <- data.frame(Sample.SD = sx, Item.total = rix, Item.Tot.woi = rix.woi, Difficulty = diff, Discrimination = discrim, Item.Criterion = riy, Item.Reliab = i.rel, Item.Rel.woi = i.rel.woi, Item.Validity = i.val) return(mat) } psychometric/R/MargUtil.R0000744000175100001440000000016310466514060015054 0ustar hornikusers"MargUtil" <- function(Rxy, Sy, MXg, COST, Nselected) { MU <- Rxy*Sy*MXg - COST/Nselected return(MU) } psychometric/R/MetaTable.R0000744000175100001440000000121510542126346015166 0ustar hornikusers"MetaTable" <- function (x) { rb <- rbar (x) vr <- varr (x) ve <- vare (x) pv <- pvse (x)[1] lclhet <- CIrb(x,,F)[1] uclhet <- CIrb(x,,F)[2] lclhom <- CIrb(x)[1] uclhom <- CIrb(x)[2] rho <- rhoCA(x) vrho <- varRCA(x) pva <- pvaaa(x) clcl <- CredIntRho(x, level=.8)[[1]] cucl <- CredIntRho(x, level=.8)[[2]] mat <- data.frame(rbar = rb, Variance.rbar = vr, VarianceSamplingError = ve, PercentDueError = pv, HET95LCL = lclhet, HET95UCL = uclhet, HOM95LCL = lclhom, HOM95UCL = uclhom, RHO = rho, VarianceRho = vrho, PercentDueErrorCorrect = pva, CredInt80LCL = clcl, CredInt80UCL = cucl) return(mat) } psychometric/R/pvaaa.R0000744000175100001440000000026010471154664014424 0ustar hornikusers"pvaaa" <- function(x, aprox=FALSE) { if (!aprox) {ve <- vare(x)} else {ve <- aprox.vare(x)} vr <- varr(x) vav <- varAV(x) pv <- (ve+vav)/vr*100 return(pv) } psychometric/R/pvse.R0000744000175100001440000000023510466514060014305 0ustar hornikusers"pvse" <- function (x) { ve <- vare(x) vr <- varr(x) pv <- ve/vr*100 mat <- matrix(pv) colnames(mat) <- "Compare to > 75%" return(mat) } psychometric/R/Qrbar.R0000744000175100001440000000045211121216764014377 0ustar hornikusers"Qrbar" <- function(x) { r <- x$Rxy n <- x$n rb <- rbar(x) chi <- sum((((n-1)*(r-rb)^2)/(1-rb^2)^2),na.rm=TRUE) k <- length (x$Rxy[!(is.na(x$Rxy))]) pval <- 1 - pchisq(chi, k-1) mat <- matrix(c(chi,k-1,pval),ncol=3) colnames(mat) <- c("CHISQ", "df", "p-val") return(mat) } psychometric/R/Qrho.R0000744000175100001440000000051010471220650014230 0ustar hornikusers"Qrho" <- function(x, aproxe=FALSE) { if(!aproxe) { ve <- vare(x)} else {ve <- aprox.vare(x)} k <- length (x$Rxy[!(is.na(x$Rxy))]) vr <- varr(x) vav <- varAV(x) q <- (k*vr)/(vav+ve) pval <- 1 - pchisq(q, k-1) mat <- matrix(c(q,k-1,pval),ncol=3) colnames(mat) <- c("CHISQ", "df", "p-val") return(mat) } psychometric/R/r.nil.R0000744000175100001440000000024310466514060014351 0ustar hornikusers"r.nil" <- function (r, n) { t <- (r*sqrt(n-2))/sqrt(1-r^2) df <- n-2 p <- pt(t, df) d <- data.frame("H0:rNot0" = r, t = t, df=df, p=1-p) return(d) } psychometric/R/r2z.R0000744000175100001440000000006410466514060014045 0ustar hornikusers"r2z" <- function (x) { .5 * log((1+x)/(1-x)) } psychometric/R/rbar.R0000744000175100001440000000017011121216774014254 0ustar hornikusers"rbar" <- function(x) { rxy <- x$Rxy n <- x$n rbar <- sum(n*rxy, na.rm=TRUE)/sum(n,na.rm=TRUE) return(rbar) } psychometric/R/rdif.nul.R0000744000175100001440000000026510466514060015054 0ustar hornikusers"rdif.nul" <- function (r1, r2, n1, n2) { z1 <- r2z(r1) z2 <- r2z(r2) z <- (z1 - z2)/sqrt(1/(n1-3)+1/(n2-3)) p <- pnorm(z) 
return(data.frame(zDIF = z, p = 1-p)) } psychometric/R/rhoCA.R0000744000175100001440000000013510466514060014323 0ustar hornikusers"rhoCA" <- function(x) { rb <- rbar(x) AA <- CAFAA(x) rho <- rb/AA return(rho) } psychometric/R/SBlength.R0000744000175100001440000000013110466514060015031 0ustar hornikusers"SBlength" <- function(rxxp, rxx) { N <- rxxp*(1-rxx)/(rxx*(1-rxxp)) return(N) } psychometric/R/SBrel.R0000744000175100001440000000014310466514060014335 0ustar hornikusers"SBrel" <- function(Nlength, rxx) { rxxp <- Nlength*rxx/(1+(Nlength-1)*rxx) return(rxxp) } psychometric/R/SE.Est.R0000744000175100001440000000012110466514060014363 0ustar hornikusers"SE.Est" <- function (s, rxx) { see <- s*sqrt(rxx*(1-rxx)) return(see) } psychometric/R/SE.Meas.R0000744000175100001440000000011510466514060014520 0ustar hornikusers"SE.Meas" <- function (s, rxx) { sem <- s*sqrt(1-rxx) return(sem) } psychometric/R/SE.Pred.R0000744000175100001440000000012010466514060014521 0ustar hornikusers"SE.Pred" <- function (sy, rxx) { sep <- sy*sqrt(1-rxx^2) return(sep) } psychometric/R/SERHET.R0000744000175100001440000000025111121217002014301 0ustar hornikusers"SERHET" <- function (x) { N <- sum(x$n,na.rm=TRUE) rb <- rbar(x) k <- length (x$Rxy[!(is.na(x$Rxy))]) se <- sqrt((1-rb^2)^2/(N-k)+varRes(x)/k) return(se) } psychometric/R/SERHOM.R0000744000175100001440000000023211121217010014302 0ustar hornikusers"SERHOM" <- function (x) { N <- sum(x$n,na.rm=TRUE) rb <- rbar(x) k <- length (x$Rxy[!(is.na(x$Rxy))]) se <- (1-rb^2)/sqrt(N-k) return(se) } psychometric/R/SEz.R0000744000175100001440000000005110466514060014025 0ustar hornikusers"SEz" <- function(n) { 1/sqrt(n-3) } psychometric/R/TotUtil.R0000744000175100001440000000016210466514060014733 0ustar hornikusers"TotUtil" <- function(Rxy, Sy, MXg, COST, Nselected) { TU <- Nselected*Rxy*Sy*MXg - COST return(TU) } psychometric/R/varAV.R0000744000175100001440000000017410466514060014351 0ustar hornikusers"varAV" <- function(x) { rho <- rhoCA(x) AA <- CAFAA(x) cvf <- CVF(x) vav <- rho^2*AA^2*cvf return(vav) } psychometric/R/vare.R0000744000175100001440000000020211121217015014244 0ustar hornikusers"vare" <- function(x) { n <- x$n rb <- rbar(x) ve <- sum(n*(1-rb^2)^2/(n-1),na.rm=TRUE)/sum(n,na.rm=TRUE) return(ve) } psychometric/R/vare36.R0000744000175100001440000000023611121217022014422 0ustar hornikusers"vare36" <- function(x) { n <- x$n rb <- rbar(x) T <- sum(n,na.rm=TRUE) k <- length (x$Rxy[!(is.na(x$Rxy))]) ve <- ((1-rb^2)^2*k)/T return(ve) } psychometric/R/varr.R0000744000175100001440000000021211121217031014260 0ustar hornikusers"varr" <- function(x) { rxy <- x$Rxy n <- x$n rb <- rbar(x) vr <- sum(n*(rxy-rb)^2,na.rm=TRUE)/sum(n,na.rm=TRUE) return(vr) } psychometric/R/varRCA.R0000744000175100001440000000024010471154754014450 0ustar hornikusers"varRCA" <- function(x, aprox=FALSE) { if (!aprox) {vrt <- varResT(x)} else {vrt <- varResT(x, T)} aa <- CAFAA(x) vr <- vrt/aa^2 return(vr) } psychometric/R/varRes.R0000744000175100001440000000015210466514060014570 0ustar hornikusers"varRes" <- function(x) { varr <- varr(x) vare <- vare(x) vr <- varr - vare return(vr) } psychometric/R/varResT.R0000744000175100001440000000027010471154770014721 0ustar hornikusers"varResT" <- function(x, aprox=FALSE) { if (!aprox) {ve <- vare(x)} else {ve <- aprox.vare(x)} vr <- varr(x) vav <- varAV(x) vrest <- vr - ve - vav return(vrest) } psychometric/R/z2r.R0000744000175100001440000000007010466514060014042 0ustar hornikusers"z2r" <- function (x) { (exp(2*x)-1)/(exp(2*x)+1) } 
psychometric/README.txt0000744000175100001440000000165711427304655014520 0ustar hornikusersThe following changes have been made since version 0.1.0 of Applied Psychometric Theory Changes in 0.1.1 1. There was an error in alpha.CI (error fixed) 2. Defaults in alpha.CI have been changed (level = .90, onesided=FALSE) 3. A new function 'MetaTable' has been added to summarize various MetaAnalysis functions. Changes in 0.1.2 1. There was an error in the df of the calculation of the upper CI for ICC1 and ICC2 Changes in 2.0 1. My affiliation has changed from @umsl.edu to @statefarm.com 2. Item.Exam() was updated to include item.total correlation without item included Changes in 2.1 1. in the help and R files, na.rm=T is replaced with na.rm=TRUE 2. an extra '(' was removed from the help file description of ICC1.lme() 3. a grammar error was corrected in help file for cRR() resulting in error in reading in newer versions. Changes in 2.2 1. revised help and Rd files 2. re-compiled to work under R 2.11.1