pax_global_header: comment=5891832f5fff266ebfaed5a2c5ce4b62d92e9ff3

r-cran-statcheck-1.3.0/COPYING

                    GNU GENERAL PUBLIC LICENSE
                       Version 2, June 1991

 Copyright (C) 1989, 1991 Free Software Foundation, Inc.
 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too.

  When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.

  To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". 
Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. 
c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. 
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. 
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. 
If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. 
The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA Also add information on how to contact you by electronic and paper mail. 
If the program is interactive, make it output a short notice like this when it starts in an interactive mode:

    Gnomovision version 69, Copyright (C) year name of author
    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:

    Yoyodyne, Inc., hereby disclaims all copyright interest in the program
    `Gnomovision' (which makes passes at compilers) written by James Hacker.

    <signature of Ty Coon>, 1 April 1989
    Ty Coon, President of Vice

This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License.

r-cran-statcheck-1.3.0/DESCRIPTION

Package: statcheck
Type: Package
Title: Extract Statistics from Articles and Recompute p Values
Version: 1.3.0
Date: 2018-05-04
Author: Sacha Epskamp & Michele B. Nuijten
Maintainer: Michele B. Nuijten
Depends: R (>= 2.14.2)
Imports: plyr, ggplot2, rmarkdown
Description: Extract statistics from articles and recompute p values.
License: GPL-2
LazyLoad: yes
ByteCompile: yes
NeedsCompilation: no
Packaged: 2018-05-04 09:49:38 UTC; mnuijten
Repository: CRAN
Date/Publication: 2018-05-04 11:03:58 UTC

r-cran-statcheck-1.3.0/MD5

ff547ababc85108875b8e8f94239d4dd *COPYING
fb14cb0d100067ed4e8eea53deae809c *DESCRIPTION
c5c9aa5f2a54761ea2f271363afda31c *NAMESPACE
fa69b7791d0af819cbe6da16ed5d5a5b *R/PDFimport.R
751a680d098ab29271574f5e594e84ad *R/checkdir.R
fd3a629e6f8a97399c1cc9226cb154a1 *R/htmlImport.R
b128103d0149f9d241781ce4c8670c63 *R/identify.statcheck.R
ecabe3fddb79f7122a6345af0e018af8 *R/plot.statcheck.R
299d717873c5b5378840b0ac86cc298f *R/statcheck.R
11749390398c4823adc8aa7bfedd0d67 *R/statcheckReport.R
bc880473567a4bf82737d960a02b6173 *R/summary.statcheck.R
f430184a8086ba9a1b08926711f3ae68 *inst/rmd/statcheckReport_template.Rmd
8b9da7c46de4ab53fb5695ae287dd0ef *man/checkHTML.Rd
a43e379a1841c1553d65d8f4817b3880 *man/checkHTMLdir.Rd
bc8d5144d822e9af3753992e279eff55 *man/checkPDF.Rd
0e82ee71b855c2bed34b148ee83d3332 *man/checkPDFdir.Rd
6152e4391306817f7352242b25c114ba *man/checkdir.Rd
fae41831c7c0becd36944c6061f90668 *man/identify.statcheck.Rd
94b092e9a08e86824fa08cf11fb57de2 *man/plot.statcheck.Rd
65f2946abc37a17a0a842f51b56ed663 *man/statcheck-package.Rd
6493c35f6527f04ebed8ed9ccb81efc6 *man/statcheck.Rd
b8f90cc6d8da4518bc7cff8517123a32 *man/statcheckReport.Rd
52753da40d63b08d34a6c71533fe1e0b *man/summary.statcheck.Rd

r-cran-statcheck-1.3.0/NAMESPACE

export(statcheck)
export(checkPDF)
export(checkPDFdir)
export(checkHTML)
export(checkHTMLdir)
export(checkdir)
export(statcheckReport)
S3method(plot,statcheck)
S3method(summary,statcheck)
S3method(identify,statcheck)
importFrom(tcltk,"tk_choose.dir")
importFrom(tcltk,"tk_choose.files")
import(plyr)
import(ggplot2)
import(rmarkdown)
importFrom("graphics", "abline", "identify", "legend", "par",
           "plot", "plot.default", "points", "text")
importFrom("stats", "as.formula", "pchisq", "pf", "pnorm", "pt")
importFrom("utils", "setTxtProgressBar", "txtProgressBar")

r-cran-statcheck-1.3.0/R/PDFimport.R

# Inner function to read a PDF: convert it to text with pdftotext and read
# the resulting sidecar .txt file.
getPDF <- function(x) {
  txtfiles <- character(length(x))
  for (i in 1:length(x)) {
    system(paste('pdftotext -q -enc "ASCII7" "', x[i], '"', sep = ""))
    if (file.exists(gsub("\\.pdf$", "\\.txt", x[i]))) {
      fileName <- gsub("\\.pdf$", "\\.txt", x[i])
      txtfiles[i] <- readChar(fileName, file.info(fileName)$size)
    } else {
      warning(paste("Failure in file", x[i]))
      txtfiles[i] <- ""
    }
  }
  return(txtfiles)
}

## Function to check a directory of PDFs:
checkPDFdir <- function(dir, subdir = TRUE, ...) {
  if (missing(dir))
    dir <- tk_choose.dir()
  all.files <- list.files(dir, pattern = "\\.pdf",
                          full.names = TRUE, recursive = subdir)
  files <- all.files[grepl("\\.pdf$", all.files)]
  if (length(files) == 0)
    stop("No PDF found")
  txts <- character(length(files))
  message("Importing PDF files...")
  pb <- txtProgressBar(max = length(files), style = 3)
  for (i in 1:length(files)) {
    txts[i] <- getPDF(files[i])
    setTxtProgressBar(pb, i)
  }
  close(pb)
  names(txts) <- gsub("\\.pdf$", "", basename(files))
  return(statcheck(txts, ...))
}

## Function to check given PDFs:
checkPDF <- function(files, ...) {
  if (missing(files))
    files <- tk_choose.files()
  txts <- sapply(files, getPDF)
  names(txts) <- gsub("\\.pdf$", "", basename(files), perl = TRUE)
  return(statcheck(txts, ...))
}

r-cran-statcheck-1.3.0/R/checkdir.R

checkdir <- function(dir, subdir = TRUE, ...)
{
  if (missing(dir))
    dir <- tk_choose.dir()

  pdfs <- any(grepl("\\.pdf$", list.files(dir, recursive = subdir),
                    ignore.case = TRUE))
  htmls <- any(grepl("\\.html?$", list.files(dir, recursive = subdir),
                     ignore.case = TRUE))

  if (pdfs)
    pdfres <- checkPDFdir(dir, ...)
  if (htmls)
    htmlres <- checkHTMLdir(dir, ...)

  if (pdfs & htmls) {
    if (!is.null(pdfres) & !is.null(htmlres))
      Res <- rbind(pdfres, htmlres)
    else
      stop("statcheck did not find any results")
  } else if (pdfs & !htmls) {
    if (!is.null(pdfres))
      Res <- pdfres
    else
      stop("statcheck did not find any results")
  } else if (!pdfs & htmls) {
    if (!is.null(htmlres))
      Res <- htmlres
    else
      stop("statcheck did not find any results")
  } else if (!pdfs & !htmls)
    stop("No PDF or HTML found")

  class(Res) <- c("statcheck", "data.frame")
  return(Res)
}

r-cran-statcheck-1.3.0/R/htmlImport.R

getHTML <- function(x) {
  strings <- lapply(x, function(fileName)
    readChar(file(fileName), file.info(fileName)$size, useBytes = TRUE))

  # Remove subscripts (except for p_rep)
  # (the <sub> tags in this pattern were stripped in the archive dump;
  # restored here)
  strings <- lapply(strings, gsub, pattern = "<sub>(?!rep).*?</sub>",
                    replacement = "", perl = TRUE)

  # Remove HTML tags:
  strings <- lapply(strings, gsub, pattern = "<(.|\n)*?>", replacement = "")

  # Replace html codes (the entity patterns were decoded to their literal
  # characters in the archive dump; restored here):
  strings <- lapply(strings, gsub, pattern = "&lt;", replacement = "<", fixed = TRUE)
  strings <- lapply(strings, gsub, pattern = "&#60;", replacement = "<", fixed = TRUE)
  strings <- lapply(strings, gsub, pattern = "&#61;", replacement = "=", fixed = TRUE)
  strings <- lapply(strings, gsub, pattern = "&gt;", replacement = ">", fixed = TRUE)
  strings <- lapply(strings, gsub, pattern = "&#62;", replacement = ">", fixed = TRUE)
  strings <- lapply(strings, gsub, pattern = "&#40;", replacement = "(", fixed = TRUE)
  strings <- lapply(strings, gsub, pattern = "&#41;", replacement = ")", fixed = TRUE)
  strings <- lapply(strings, gsub, pattern = "&nbsp;", replacement = " ", fixed = TRUE)
  strings <- lapply(strings, gsub, pattern = "&thinsp;", replacement = " ", fixed = TRUE)
  strings <- lapply(strings, gsub, pattern = "\n", replacement = "")
  strings <- lapply(strings, gsub, pattern = "\r", replacement = "")
  strings <- lapply(strings, gsub, pattern = "\\s+", replacement = " ")
  strings <- lapply(strings, gsub, pattern = "&minus;", replacement = "-", fixed = TRUE)

  return(strings)
}

checkHTMLdir <- function(dir, subdir = TRUE, extension = TRUE, ...) {
  if (missing(dir)) {
    dir <- tk_choose.dir()
  }
  if (extension == TRUE) {
    pat = ".html|.htm"
  }
  if (extension == FALSE) {
    pat = ""
  }

  files <- list.files(dir, pattern = pat, full.names = TRUE,
                      recursive = subdir)
  if (length(files) == 0) {
    stop("No HTML found")
  }

  txts <- character(length(files))
  message("Importing HTML files...")
  pb <- txtProgressBar(max = length(files), style = 3)
  for (i in 1:length(files)) {
    txts[i] <- getHTML(files[i])
    setTxtProgressBar(pb, i)
  }
  close(pb)
  names(txts) <- gsub(".html", "", basename(files))
  names(txts) <- gsub(".htm", "", names(txts))
  return(statcheck(txts, ...))
}

checkHTML <- function(files, ...) {
  if (missing(files))
    files <- tk_choose.files()
  txts <- sapply(files, getHTML)
  names(txts) <- gsub(".html", "", basename(files))
  names(txts) <- gsub(".htm", "", names(txts))
  return(statcheck(txts, ...))
}

r-cran-statcheck-1.3.0/R/identify.statcheck.R

identify.statcheck <- function(x, alpha = .05, ...) {
  reported <- x$Reported.P.Value
  computed <- x$Computed

  # replace 'ns' for > alpha
  reported[x$Reported.Comparison == "ns"] <- alpha

  plot(x, APAstyle = FALSE, ...) # makes use of the plot.statcheck() function

  ID <- identify(reported, computed)
  res <- x[ID, ]
  class(res) <- c("statcheck", "data.frame")
  return(res)
}

r-cran-statcheck-1.3.0/R/plot.statcheck.R

plot.statcheck <- function(
  x,
  alpha = .05,
  APAstyle = TRUE,
  group = NULL,
  ...
) {
  if (APAstyle == TRUE) {
    # add this line of code to avoid the NOTE in the R CMD check when
    # building the package; solves the NOTE:
    # "No visible binding for global variable"
    Type <- Computed <- Reported.P.Value <- NULL

    # Add vector "Type" to the statcheck object, specifying whether
    # observations are correctly reported, reporting inconsistencies, or
    # decision errors.
    x$Type[x$Error == "FALSE" & x$DecisionError == "FALSE"] <- "Correctly Reported"
    x$Type[x$Error == "TRUE" & x$DecisionError == "FALSE"] <- "Reporting Inconsistency"
    x$Type[x$Error == "TRUE" & x$DecisionError == "TRUE"] <- "Decision Error"

    # Create ggplot "APA format" theme
    apatheme <- theme_bw() +
      theme(
        panel.grid.major = element_blank(),
        panel.grid.minor = element_blank(),
        axis.line = element_line()
      )

    # If no grouping variable is specified, don't use faceting
    if (is.null(group)) {
      # Create plot "p"; map computed p-values to the y-axis, reported
      # p-values to the x-axis, and color to the Type variable created
      # earlier. The environment argument allows apatheme to be applied
      # later, because of a bug when creating functions with ggplot2.
      p <- ggplot(x, aes(y = Computed, x = Reported.P.Value, col = Type),
                  environment = environment())

      # Add data points to the plot
      p + geom_point(size = 2.5) +
        # Add vertical grey dashed line, located at the specified alpha level
        geom_vline(xintercept = alpha, color = "grey60", linetype = "dashed") +
        # Add horizontal grey dashed line, located at the specified alpha level
        geom_hline(yintercept = alpha, color = "grey60", linetype = "dashed") +
        # Add a line showing where accurately reported p-values should fall
        geom_abline(intercept = 0, slope = 1, color = "grey60") +
        # Add text annotations demarcating over-/under-estimated areas
        annotate("text", x = 0.5, y = .10, label = "overestimated") +
        annotate("text", x = 0.5, y = .90, label = "underestimated") +
        # Rename the x- and y-axis, and manually specify breaks
        scale_x_continuous(
          name = "Reported p-values",
          breaks = c(0.00, 0.05, 0.10, 0.25, 0.50, 0.75, 1.0),
          limits = c(0, 1)
        ) +
        scale_y_continuous(
          name = "Computed p-values",
          breaks = c(0.00, 0.05, 0.10, 0.25, 0.50, 0.75, 1.0),
          limits = c(0, 1)
        ) +
        # Manually specify greyscale colors for the levels of Type
        scale_color_manual(
          breaks = c("Correctly Reported", "Reporting Inconsistency",
                     "Decision Error"),
          values = c("grey80", "black", "grey50")
        ) +
        apatheme
    } else {
      # If a grouping variable is specified, use it for faceting;
      # otherwise the plot is built exactly as above
      p <- ggplot(x, aes(y = Computed, x = Reported.P.Value, col = Type),
                  environment = environment())

      p + geom_point(size = 2.5) +
        geom_vline(xintercept = alpha, color = "grey60", linetype = "dashed") +
        geom_hline(yintercept = alpha, color = "grey60", linetype = "dashed") +
        geom_abline(intercept = 0, slope = 1, color = "grey60") +
        annotate("text", x = 0.5, y = .10, label = "overestimated") +
        annotate("text", x = 0.5, y = .90, label = "underestimated") +
        scale_x_continuous(name = "Reported p-values",
                           breaks = c(0.00, 0.05, 0.10, 0.25, 0.50, 0.75, 1.0)) +
        scale_y_continuous(name = "Computed p-values",
                           breaks = c(0.00, 0.05, 0.10, 0.25, 0.50, 0.75, 1.0)) +
        scale_color_manual(
          breaks = c("Correctly Reported", "Reporting Inconsistency",
                     "Decision Error"),
          values = c("grey80", "black", "grey50")
        ) +
        facet_grid(as.formula(paste(group, "~ ."))) +
        apatheme
    }
  } else {
    # Extract limit args:
    args <- list(...)
    if (is.null(args$xlim))
      args$xlim <- c(0, 1)
    if (is.null(args$ylim))
      args$ylim <- c(0, 1)

    reported <- x$Reported.P.Value
    computed <- x$Computed

    # replace 'ns' for > alpha
    reported[x$Reported.Comparison == "ns"] <- alpha

    # scatterplot of reported and recalculated p values
    do.call(plot.default, c(
      list(
        x = reported,
        y = computed,
        xlab = "reported p value",
        ylab = "recalculated p value",
        pch = 20
      ),
      args
    ))

    # orange dot for error
    points(reported[x$Error], computed[x$Error], pch = 20, col = "orange")

    # red dot for gross error (non-sig reported as sig and vice versa)
    points(reported[x$DecisionError], computed[x$DecisionError],
           pch = 20, col = "red")

    # indicate exact p values with a diamond
    points(x$Reported.P.Value[x$Reported.Comparison == "="],
           computed[x$Reported.Comparison == "="], pch = 5)

    # general layout of the figure:
    # lines & text to indicate under- and overestimates
    abline(h = .05)
    abline(v = .05)
    abline(0, 1)
    text(.8, .4, "overestimated")
    text(.4, .8, "underestimated")
    text(0, .53, "non-sig", cex = .7)
    text(0, .50, "reported", cex = .7)
    text(0, .47, "as sig", cex = .7)
    text(.5, 0, "sig reported as non-sig", cex = .7)

    par(xpd = TRUE)
    legend(
      .88, -.15,
      pch = c(20, 20, 5),
      col = c("orange", "red", "black"),
      legend = c("p inconsistency", "decision error", "exact (p = ...)"),
      cex = .8
    )
    par(xpd = FALSE)
  }
}

r-cran-statcheck-1.3.0/R/statcheck.R

statcheck <- function(
  x,
  stat = c("t", "F", "cor", "chisq", "Z", "Q"),
  OneTailedTests = FALSE,
  alpha = .05,
  pEqualAlphaSig = TRUE,
  pZeroError = TRUE,
  OneTailedTxt = FALSE,
  AllPValues = FALSE
) {
  # Create empty data frame for the main result:
  Res <- data.frame(
    Source = NULL,
    Statistic = NULL,
    df1 = NULL,
    df2 = NULL,
    Test.Comparison = NULL,
    Value = NULL,
    Reported.Comparison = NULL,
    Reported.P.Value = NULL,
    Computed = NULL,
    Error = NULL,
    DecisionError = NULL,
    CopyPaste = NULL,
    Location = NULL,
    stringsAsFactors = FALSE,
    dec = NULL,
    testdec = NULL,
    OneTail =
      NULL,
    OneTailedInTxt = NULL,
    APAfactor = NULL
  )
  class(Res) <- c("statcheck", "data.frame")

  OneTailedInTxt <- NULL

  # Create empty data frame for p values:
  pRes <- data.frame(
    Source = NULL,
    Statistic = NULL,
    Reported.Comparison = NULL,
    Reported.P.Value = NULL,
    Raw = NULL,
    stringsAsFactors = FALSE
  )

  if (length(x) == 0)
    return(Res)

  if (is.null(names(x)))
    names(x) <- 1:length(x)

  message("Extracting statistics...")
  pb <- txtProgressBar(max = length(x), style = 3)

  for (i in 1:length(x)) {
    txt <- x[i]

    #---------------------------
    # extract all p values in order to calculate the ratio
    # (statcheck results)/(total # of p values)

    # p-values
    # Get location of p-values in text:
    pLoc <- gregexpr("([^a-z]ns)|(p\\s?[<>=]\\s?\\d?\\.\\d+e?-?\\d*)", txt,
                     ignore.case = TRUE)[[1]]

    if (pLoc[1] != -1) {
      # Get raw text of p-values:
      pRaw <- substring(txt, pLoc, pLoc + attr(pLoc, "match.length") - 1)

      nums <- gregexpr("(\\d*\\.?\\d+\\s?e?-?\\d*)|ns", pRaw,
                       ignore.case = TRUE)

      # Extract p-values
      suppressWarnings(pValsChar <- substring(
        pRaw,
        sapply(nums, '[', 1),
        sapply(nums, function(x) x[1] + attr(x, "match.length")[1] - 1)
      ))
      suppressWarnings(pVals <- as.numeric(pValsChar))

      # Extract (in)equality
      eqLoc <- gregexpr("p\\s?.?", pRaw)
      pEq <- substring(
        pRaw,
        sapply(eqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1),
        sapply(eqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1)
      )
      pEq[grepl("ns", pRaw, ignore.case = TRUE)] <- "ns"

      pvalues <- data.frame(
        Source = names(x)[i],
        Statistic = "p",
        Reported.Comparison = pEq,
        Reported.P.Value = pVals,
        Raw = pRaw,
        stringsAsFactors = FALSE
      )

      # remove p values greater than one
      pvalues <- pvalues[pvalues$Reported.P.Value <= 1 |
                           is.na(pvalues$Reported.P.Value), ]

      pRes <- rbind(pRes, pvalues)
      rm(pvalues)
    }

    #---------------------------
    # search for "one-sided"/"one-tailed"/"directional" in the full text to
    # detect one-sided testing
    # onesided <- gregexpr("sided|tailed|directional", txt, ignore.case = TRUE)[[1]]
    onesided <- gregexpr("one.?sided|one.?tailed|directional", txt,
                         ignore.case = TRUE)[[1]]

    if (onesided[1] != -1) {
      onesided <- 1
    } else {
      onesided <- 0
    }

    OneTailedInTxt <- as.logical(onesided)

    #---------------------------
    # t-values:
    if ("t" %in% stat) {
      # Get location of t-values in text:
      tLoc <- gregexpr(
        "t\\s?\\(\\s?\\d*\\.?\\d+\\s?\\)\\s?[<>=]\\s?[^a-z\\d]{0,3}\\s?\\d*,?\\d*\\.?\\d+\\s?,\\s?(([^a-z]ns)|(p\\s?[<>=]\\s?\\d?\\.\\d+e?-?\\d*))",
        txt, ignore.case = TRUE
      )[[1]]

      if (tLoc[1] != -1) {
        # Get raw text of t-values:
        tRaw <- substring(txt, tLoc, tLoc + attr(tLoc, "match.length") - 1)

        # remove commas (thousands separators)
        tRaw <- gsub("(?<=\\d),(?=\\d+)", "", tRaw, perl = TRUE)

        # Replace weird codings of a minus sign with an actual minus sign:
        # First remove spaces
        tRaw <- gsub("(?<=\\=)\\s+(?=.*\\,)", "", tRaw, perl = TRUE)
        # Replace any weird string with a minus sign
        tRaw <- gsub("(?<=\\=)\\s?[^\\d\\.]+(?=.*\\,)", " -", tRaw, perl = TRUE)
        # Add spaces again:
        tRaw <- gsub("(?<=\\=)(?=(\\.|\\d))", " ", tRaw, perl = TRUE)

        # Extract location of numbers:
        nums <- gregexpr("(\\-?\\s?\\d*\\.?\\d+\\s?e?-?\\d*)|ns", tRaw,
                         ignore.case = TRUE)

        # Extract df:
        df <- as.numeric(substring(
          tRaw,
          sapply(nums, '[', 1),
          sapply(nums, function(x) x[1] + attr(x, "match.length")[1] - 1)
        ))

        # Extract t-values
        suppressWarnings(tValsChar <- substring(
          tRaw,
          sapply(nums, '[', 2),
          sapply(nums, function(x) x[2] + attr(x, "match.length")[2] - 1)
        ))
        suppressWarnings(tVals <- as.numeric(tValsChar))

        # Extract number of decimals of the test statistic
        testdec <- attr(regexpr("\\.\\d+", tValsChar), "match.length") - 1
        testdec[testdec < 0] <- 0

        # Extract (in)equality of the test statistic
        testEqLoc <- gregexpr("\\)\\s?[<>=]", tRaw)
        testEq <- substring(
          tRaw,
          sapply(testEqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1),
          sapply(testEqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1)
        )

        # Extract p-values
        suppressWarnings(pValsChar <- substring(
          tRaw,
          sapply(nums, '[', 3),
          sapply(nums, function(x) x[3] + attr(x,
"match.length")[3] - 1) )) suppressWarnings(pVals <- as.numeric(pValsChar)) # Extract (in)equality eqLoc <- gregexpr("p\\s?[<>=]", tRaw, ignore.case = TRUE) pEq <- substring( tRaw, sapply(eqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1), sapply(eqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1) ) pEq[grepl("ns", tRaw, ignore.case = TRUE)] <- "ns" # determine number of decimals of p value dec <- attr(regexpr("\\.\\d+", pValsChar), "match.length") - 1 dec[dec < 0] <- 0 # Create data frame: tRes <- data.frame( Source = names(x)[i], Statistic = "t", df1 = NA, df2 = df, Test.Comparison = testEq, Value = tVals, Reported.Comparison = pEq, Reported.P.Value = pVals, Computed = pt(-1 * abs(tVals), df) * 2, Location = tLoc, Raw = tRaw, stringsAsFactors = FALSE, dec = dec, testdec = testdec, OneTailedInTxt = OneTailedInTxt ) # Append, clean and close: Res <- rbind(Res, tRes) rm(tRes) } } #--------------------------- # F-values: if ("F" %in% stat) { # Get location of F-values in text: # also pick up degrees of freedom wrongly converted into letters: # 1 --> l or I FLoc <- gregexpr( "F\\s?\\(\\s?\\d*\\.?(I|l|\\d+)\\s?,\\s?\\d*\\.?\\d+\\s?\\)\\s?[<>=]\\s?\\d*,?\\d*\\.?\\d+\\s?,\\s?(([^a-z]ns)|(p\\s?[<>=]\\s?\\d?\\.\\d+e?-?\\d*))", txt, ignore.case = TRUE )[[1]] if (FLoc[1] != -1) { # Get raw text of F-values: FRaw <- substring(txt, FLoc, FLoc + attr(FLoc, "match.length") - 1) # convert wrongly printed "l" or "I" into 1 FRaw <- gsub("l|I", 1, FRaw) # Extract location of numbers: nums <- gregexpr("(\\d*\\.?\\d+\\s?e?-?\\d*)|ns", FRaw, ignore.case = TRUE) # Extract df1: df1 <- as.numeric(substring( FRaw, sapply(nums, '[', 1), sapply(nums, function(x) x[1] + attr(x, "match.length")[1] - 1) )) # Extract df2: df2 <- as.numeric(substring( FRaw, sapply(nums, '[', 2), sapply(nums, function(x) x[2] + attr(x, "match.length")[2] - 1) )) # remove commas (thousands separators) Fsplit <- strsplit(FRaw, "\\)", perl = TRUE) FValsRaw <- lapply(Fsplit, function(x) x[2]) FandDF <- 
lapply(Fsplit, function(x) x[1]) FValsRaw <- gsub("(?<=\\d),(?=\\d+)", "", FValsRaw, perl = TRUE) FRaw <- paste(FandDF, ")", FValsRaw, sep = "") # Extract F-values numsF <- gregexpr("(\\d*\\.?\\d+)|ns", FValsRaw) suppressWarnings(FValsChar <- substring( FValsRaw, sapply(numsF, '[', 1), sapply(numsF, function(x) x[1] + attr(x, "match.length")[1] - 1) )) suppressWarnings(FVals <- as.numeric(FValsChar)) # Extract number of decimals test statistic testdec <- attr(regexpr("\\.\\d+", FValsChar), "match.length") - 1 testdec[testdec < 0] <- 0 # Extract (in)equality test statistic testEqLoc <- gregexpr("\\)\\s?[<>=]", FRaw) testEq <- substring( FRaw, sapply(testEqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1), sapply(testEqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1) ) # Extract p-values suppressWarnings(pValsChar <- substring( FValsRaw, sapply(numsF, '[', 2), sapply(numsF, function(x) x[2] + attr(x, "match.length")[2] - 1) )) suppressWarnings(pVals <- as.numeric(pValsChar)) # Extract (in)equality eqLoc <- gregexpr("p\\s?[<>=]", FRaw, ignore.case = TRUE) pEq <- substring( FRaw, sapply(eqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1), sapply(eqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1) ) pEq[grepl("ns", FRaw, ignore.case = TRUE)] <- "ns" # determine number of decimals of p value dec <- attr(regexpr("\\.\\d+", pValsChar), "match.length") - 1 dec[dec < 0] <- NA # Create data frame: FRes <- data.frame( Source = names(x)[i], Statistic = "F", df1 = df1, df2 = df2, Test.Comparison = testEq, Value = FVals, Reported.Comparison = pEq, Reported.P.Value = pVals, Computed = pf(FVals, df1, df2, lower.tail = FALSE), Location = FLoc, Raw = FRaw, stringsAsFactors = FALSE, dec = dec, testdec = testdec, OneTailedInTxt = OneTailedInTxt ) # Append, clean and close: Res <- rbind(Res, FRes) rm(FRes) } } #--------------------------- # correlations: if (any(c("r", "cor", "correlations") %in% stat)) { # Get location of r-values in text: rLoc <- gregexpr( 
"r\\s?\\(\\s?\\d*\\.?\\d+\\s?\\)\\s?[<>=]\\s?[^a-z\\d]{0,3}\\s?\\d*\\.?\\d+\\s?,\\s?(([^a-z]ns)|(p\\s?[<>=]\\s?\\d?\\.\\d+e?-?\\d*))", txt, ignore.case = TRUE )[[1]] if (rLoc[1] != -1) { # Get raw text of r-values: rRaw <- substring(txt, rLoc, rLoc + attr(rLoc, "match.length") - 1) # Replace weird codings of a minus sign with actual minus sign: # First remove spaces rRaw <- gsub("(?<=\\=)\\s+(?=.*\\,)", "", rRaw, perl = TRUE) # Replace any weird string with a minus sign rRaw <- gsub("(?<=\\=)\\s?[^\\d\\.]+(?=.*\\,)", " -", rRaw, perl = TRUE) # Add spaces again: rRaw <- gsub("(?<=\\=)(?=(\\.|\\d))", " ", rRaw, perl = TRUE) # Extract location of numbers: nums <- gregexpr("(\\-?\\s?\\d*\\.?\\d+\\s?e?-?\\d*)|ns", rRaw, ignore.case = TRUE) # Extract df: df <- as.numeric(substring( rRaw, sapply(nums, '[', 1), sapply(nums, function(x) x[1] + attr(x, "match.length")[1] - 1) )) # Extract r-values suppressWarnings(rValsChar <- substring( rRaw, sapply(nums, '[', 2), sapply(nums, function(x) x[2] + attr(x, "match.length")[2] - 1) )) suppressWarnings(rVals <- as.numeric(rValsChar)) # Extract number of decimals test statistic testdec <- attr(regexpr("\\.\\d+", rValsChar), "match.length") - 1 testdec[testdec < 0] <- 0 # Extract (in)equality test statistic testEqLoc <- gregexpr("\\)\\s?[<>=]", rRaw) testEq <- substring( rRaw, sapply(testEqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1), sapply(testEqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1) ) # Extract p-values suppressWarnings(pValsChar <- substring( rRaw, sapply(nums, '[', 3), sapply(nums, function(x) x[3] + attr(x, "match.length")[3] - 1) )) suppressWarnings(pVals <- as.numeric(pValsChar)) # Extract (in)equality eqLoc <- gregexpr("p\\s?[<>=]", rRaw, ignore.case = TRUE) pEq <- substring( rRaw, sapply(eqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1), sapply(eqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1) ) pEq[grepl("ns", rRaw, ignore.case = TRUE)] <- "ns" # determine number of decimals of 
p value dec <- attr(regexpr("\\.\\d+", pValsChar), "match.length") - 1 dec[dec < 0] <- 0 # computed p = NA for correlations reported as >1 pComputed <- pmin(pt(-1 * abs(r2t(rVals, df)), df) * 2, 1) pComputed[is.nan(pComputed)] <- NA # Create data frame: rRes <- data.frame( Source = names(x)[i], Statistic = "r", df1 = NA, df2 = df, Test.Comparison = testEq, Value = rVals, Reported.Comparison = pEq, Reported.P.Value = pVals, Computed = pComputed, Location = rLoc, Raw = rRaw, stringsAsFactors = FALSE, dec = dec, testdec = testdec, OneTailedInTxt = OneTailedInTxt ) # Append, clean and close: Res <- rbind(Res, rRes) rm(rRes) } } #--------------------------- # z-values: if ("Z" %in% stat) { # Get location of z-values in text: zLoc <- gregexpr( "[^a-z]z\\s?[<>=]\\s?[^a-z\\d]{0,3}\\s?\\d*,?\\d*\\.?\\d+\\s?,\\s?(([^a-z]ns)|(p\\s?[<>=]\\s?\\d?\\.\\d+e?-?\\d*))", txt, ignore.case = TRUE )[[1]] if (zLoc[1] != -1) { # Get raw text of z-values: zRaw <- substring(txt, zLoc, zLoc + attr(zLoc, "match.length") - 1) # remove any character before test statistic zRaw <- gsub(".?(z|Z)", "Z", zRaw, perl = TRUE) # remove commas (thousands separators) zRaw <- gsub("(?<=\\d),(?=\\d+\\.)", "", zRaw, perl = TRUE) # Replace weird codings of a minus sign with actual minus sign: # First remove spaces zRaw <- gsub("(?<=\\=)\\s+(?=.*\\,)", "", zRaw, perl = TRUE) # Replace any weird string with a minus sign zRaw <- gsub("(?<=\\=)\\s?[^\\d\\.]+(?=.*\\,)", " -", zRaw, perl = TRUE) # Add spaces again: zRaw <- gsub("(?<=\\=)(?=(\\.|\\d))", " ", zRaw, perl = TRUE) # Extract location of numbers: nums <- gregexpr("(\\-?\\s?\\d*\\.?\\d+\\s?e?-?\\d*)|ns", zRaw, ignore.case = TRUE) # Extract z-values suppressWarnings(zValsChar <- substring( zRaw, sapply(nums, '[', 1), sapply(nums, function(x) x[1] + attr(x, "match.length")[1] - 1) )) suppressWarnings(zVals <- as.numeric(zValsChar)) # Extract number of decimals test statistic testdec <- attr(regexpr("\\.\\d+", zValsChar), "match.length") - 1 testdec[testdec < 
0] <- 0 # Extract (in)equality test statistic testEqLoc <- gregexpr("(z|Z|z'|Z')\\s?[<>=]", zRaw) testEq <- substring( zRaw, sapply(testEqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1), sapply(testEqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1) ) # Extract p-values suppressWarnings(pValsChar <- substring( zRaw, sapply(nums, '[', 2), sapply(nums, function(x) x[2] + attr(x, "match.length")[2] - 1) )) suppressWarnings(pVals <- as.numeric(pValsChar)) # Extract (in)equality eqLoc <- gregexpr("p\\s?[<>=]", zRaw, ignore.case = TRUE) pEq <- substring( zRaw, sapply(eqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1), sapply(eqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1) ) pEq[grepl("ns", zRaw, ignore.case = TRUE)] <- "ns" # determine number of decimals of p value dec <- attr(regexpr("\\.\\d+", pValsChar), "match.length") - 1 dec[dec < 0] <- 0 # Create data frame: zRes <- data.frame( Source = names(x)[i], Statistic = "Z", df1 = NA, df2 = NA, Test.Comparison = testEq, Value = zVals, Reported.Comparison = pEq, Reported.P.Value = pVals, Computed = pnorm(abs(zVals), lower.tail = FALSE) * 2, Location = zLoc, Raw = zRaw, stringsAsFactors = FALSE, dec = dec, testdec = testdec, OneTailedInTxt = OneTailedInTxt ) # Append, clean and close: Res <- rbind(Res, zRes) rm(zRes) } } #--------------------------- # Chis2-values: if ("chisq" %in% stat) { # Get location of chi values or delta G in text: chi2Loc <- gregexpr( "((\\[CHI\\]|\\[DELTA\\]G)\\s?|(\\s[^trFzQWBn ]\\s?)|([^trFzQWBn ]2\\s?))2?\\(\\s?\\d*\\.?\\d+\\s?(,\\s?N\\s?\\=\\s?\\d*\\,?\\d*\\,?\\d+\\s?)?\\)\\s?[<>=]\\s?\\s?\\d*,?\\d*\\.?\\d+\\s?,\\s?(([^a-z]ns)|(p\\s?[<>=]\\s?\\d?\\.\\d+e?-?\\d*))", txt, ignore.case = TRUE )[[1]] if (chi2Loc[1] != -1) { # Get raw text of chi2-values: chi2Raw <- substring(txt, chi2Loc, chi2Loc + attr(chi2Loc, "match.length") - 1) substr(chi2Raw, 1, 1)[grepl("\\d", substr(chi2Raw, 1, 1))] <- " " # remove sample size if reported for calculations # save full result for 
"Raw" in final data frame chi2Raw_inclN <- chi2Raw chi2Raw <- gsub("N\\s?=\\s?\\d*\\,?\\d*\\,?\\d*", "", chi2Raw, ignore.case = TRUE) # remove commas (thousands separators) chi2Raw <- gsub("(?<=\\d),(?=\\d+\\.)", "", chi2Raw, perl = TRUE) # bug fix: remove extra opening brackets # if a chi2 result is reported between brackets, and the chi is not read by statcheck # the opening bracket is translated as the chi symbol, and extracting the numerics goes wrong chi2Raw <- gsub("\\((?=2\\s?\\()", "", chi2Raw, perl = TRUE) # Extract location of numbers: nums <- gregexpr( "(\\-?\\s?\\d*\\.?\\d+\\s?e?-?\\d*)|ns", sub("^.*?\\(", "", chi2Raw), ignore.case = TRUE ) # Extract df: df <- as.numeric(substring( sub("^.*?\\(", "", chi2Raw), sapply(nums, '[', 1), sapply(nums, function(x) x[1] + attr(x, "match.length")[1] - 1) )) # Extract chi2-values suppressWarnings(chi2ValsChar <- substring( sub("^.*?\\(", "", chi2Raw), sapply(nums, '[', 2), sapply(nums, function(x) x[2] + attr(x, "match.length")[2] - 1) )) suppressWarnings(chi2Vals <- as.numeric(chi2ValsChar)) # Extract number of decimals test statistic testdec <- attr(regexpr("\\.\\d+", chi2ValsChar), "match.length") - 1 testdec[testdec < 0] <- 0 # Extract (in)equality test statistic testEqLoc <- gregexpr("\\)\\s?[<>=]", chi2Raw) testEq <- substring( chi2Raw, sapply(testEqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1), sapply(testEqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1) ) # Extract p-values suppressWarnings(pValsChar <- substring( sub("^.*?\\(", "", chi2Raw), sapply(nums, '[', 3), sapply(nums, function(x) x[3] + attr(x, "match.length")[3] - 1) )) suppressWarnings(pVals <- as.numeric(pValsChar)) # Extract (in)equality eqLoc <- gregexpr("p\\s?[<>=]", chi2Raw, ignore.case = TRUE) pEq <- substring( chi2Raw, sapply(eqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1), sapply(eqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1) ) pEq[grepl("ns", chi2Raw, ignore.case = TRUE)] <- "ns" # determine number 
of decimals of p value dec <- attr(regexpr("\\.\\d+", pValsChar), "match.length") - 1 dec[dec < 0] <- 0 # Create data frame: chi2Res <- data.frame( Source = names(x)[i], Statistic = "Chi2", df1 = df, df2 = NA, Test.Comparison = testEq, Value = chi2Vals, Reported.Comparison = pEq, Reported.P.Value = pVals, Computed = pchisq(chi2Vals, df, lower.tail = FALSE), Location = chi2Loc, Raw = chi2Raw_inclN, stringsAsFactors = FALSE, dec = dec, testdec = testdec, OneTailedInTxt = OneTailedInTxt ) # Append, clean and close: Res <- rbind(Res, chi2Res) rm(chi2Res) } } #--------------------------- # Q-values: if ("Q" %in% stat) { # Get location of Q-values in text: QLoc <- gregexpr( "Q\\s?-?\\s?(w|within|b|between)?\\s?\\(\\s?\\d*\\.?\\d+\\s?\\)\\s?[<>=]\\s?[^a-z\\d]{0,3}\\s?\\d*,?\\d*\\.?\\d+\\s?,\\s?(([^a-z]ns)|(p\\s?[<>=]\\s?\\d?\\.\\d+e?-?\\d*))", txt, ignore.case = TRUE )[[1]] if (QLoc[1] != -1) { # Get raw text of t-values: QRaw <- substring(txt, QLoc, QLoc + attr(QLoc, "match.length") - 1) # remove commas (thousands separators) QRaw <- gsub("(?<=\\d),(?=\\d+)", "", QRaw, perl = TRUE) # Replace weird codings of a minus sign with actual minus sign: # First remove spaces QRaw <- gsub("(?<=\\=)\\s+(?=.*\\,)", "", QRaw, perl = TRUE) # Replace any weird string with a minus sign QRaw <- gsub("(?<=\\=)\\s?[^\\d\\.]+(?=.*\\,)", " -", QRaw, perl = TRUE) # Add spaces again: QRaw <- gsub("(?<=\\=)(?=(\\.|\\d))", " ", QRaw, perl = TRUE) # Extract type of Q-test (general, within, or between) QtypeLoc <- gregexpr("Q\\s?-?\\s?(w|within|b|between)?", QRaw, ignore.case = TRUE) QtypeRaw <- substring(QRaw, sapply(QtypeLoc, '[', 1), sapply(QtypeLoc, function(x) x[1] + attr(x, "match.length")[1] - 1)) Qtype <- rep(NA, length(QtypeRaw)) Qtype[grepl("Q\\s?-?\\s?(w|within)", QtypeRaw, ignore.case = TRUE)] <- "Qw" Qtype[grepl("Q\\s?-?\\s?(b|between)", QtypeRaw, ignore.case = TRUE)] <- "Qb" Qtype[is.na(Qtype)] <- "Q" # Extract location of numbers: nums <- 
gregexpr("(\\-?\\s?\\d*\\.?\\d+\\s?e?-?\\d*)|ns", QRaw, ignore.case = TRUE) # Extract df: df <- as.numeric(substring( QRaw, sapply(nums, '[', 1), sapply(nums, function(x) x[1] + attr(x, "match.length")[1] - 1) )) # Extract Q-values suppressWarnings(QValsChar <- substring( QRaw, sapply(nums, '[', 2), sapply(nums, function(x) x[2] + attr(x, "match.length")[2] - 1) )) suppressWarnings(QVals <- as.numeric(QValsChar)) # Extract number of decimals test statistic testdec <- attr(regexpr("\\.\\d+", QValsChar), "match.length") - 1 testdec[testdec < 0] <- 0 # Extract (in)equality test statistic testEqLoc <- gregexpr("\\)\\s?[<>=]", QRaw) testEq <- substring( QRaw, sapply(testEqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1), sapply(testEqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1) ) # Extract p-values suppressWarnings(pValsChar <- substring( QRaw, sapply(nums, '[', 3), sapply(nums, function(x) x[3] + attr(x, "match.length")[3] - 1) )) suppressWarnings(pVals <- as.numeric(pValsChar)) # Extract (in)equality eqLoc <- gregexpr("p\\s?[<>=]", QRaw, ignore.case = TRUE) pEq <- substring(QRaw, sapply(eqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1), sapply(eqLoc, function(x) x[1] + attr(x, "match.length")[1] - 1)) pEq[grepl("ns", QRaw, ignore.case = TRUE)] <- "ns" # determine number of decimals of p value dec <- attr(regexpr("\\.\\d+", pValsChar), "match.length") - 1 dec[dec < 0] <- 0 # Create data frame: QRes <- data.frame( Source = names(x)[i], Statistic = Qtype, df1 = NA, df2 = df, Test.Comparison = testEq, Value = QVals, Reported.Comparison = pEq, Reported.P.Value = pVals, Computed = pchisq(QVals, df, lower.tail = FALSE), Location = QLoc, Raw = QRaw, stringsAsFactors = FALSE, dec = dec, testdec = testdec, OneTailedInTxt = OneTailedInTxt ) # Append, clean and close: Res <- rbind(Res, QRes) rm(QRes) } } setTxtProgressBar(pb, i) } close(pb) Source <- NULL Res <- ddply(Res, .(Source), function(x) x[order(x$Location), ]) if (nrow(Res) > 0) { # remove p 
values greater than one
    Res <- Res[Res$Reported.P.Value <= 1 | is.na(Res$Reported.P.Value), ]
  }

  ###---------------------------------------------------------------------

  ErrorTest <- function(x, ...) {
    computed <- as.vector(x$Computed)
    comparison <- as.vector(x$Reported.Comparison)
    reported <- as.vector(x$Reported.P.Value)
    testcomp <- as.vector(x$Test.Comparison)

    # replace 'ns' for > alpha
    reported[comparison == "ns"] <- alpha
    comparison[comparison == "ns"] <- ">"

    Match <- paste(computed, comparison, reported)

    #-----------------------------------------------
    # select inexactly reported p values (p<../p>..)
    InExTests <- grepl("<|>", Match)

    # evaluate errors when test statistics are reported exactly (t()=.../F(,)=...)
    if (any(InExTests)) {
      InExTests[InExTests] <- sapply(Match[InExTests],
                                     function(m) !eval(parse(text = m)))
    }

    # evaluate errors when test statistics are reported inexactly (t().../F(,)...)
    smallsmall <- testcomp == "<" & comparison == "<"
    smallgreat <- testcomp == "<" & comparison == ">"
    greatsmall <- testcomp == ">" & comparison == "<"
    greatgreat <- testcomp == ">" & comparison == ">"

    if (any(smallsmall)) {
      InExTests[smallsmall] <- round(computed[smallsmall], x$dec[smallsmall]) <=
        round(reported[smallsmall], x$dec[smallsmall])
    }
    if (any(greatgreat)) {
      InExTests[greatgreat] <- round(computed[greatgreat], x$dec[greatgreat]) >=
        round(reported[greatgreat], x$dec[greatgreat])
    }

    # these combinations of < & > are logically always correct
    InExTests[smallgreat] <- FALSE
    InExTests[greatsmall] <- FALSE

    #-----------------------------------------------
    # select exactly reported p values (p=..)
    ExTests <- comparison == "="

    # evaluate errors when test statistics are reported exactly (t()=.../F(,)=...)
    if (any(ExTests)) {
      ExTests[ExTests] <- !(round(computed[ExTests], x$dec[ExTests]) ==
                              round(reported[ExTests], x$dec[ExTests]))
    }

    # evaluate errors when test statistics are reported inexactly (t().../F(,)...)
smallequal <- x$Test.Comparison == "<" & comparison == "=" greatequal <- x$Test.Comparison == ">" & comparison == "=" if (any(smallequal)) { ExTests[smallequal] <- round(computed[smallequal], x$dec[smallequal]) >= round(reported[smallequal], x$dec[smallequal]) } if (any(greatequal)) { ExTests[greatequal] <- round(computed[greatequal], x$dec[greatequal]) <= round(reported[greatequal], x$dec[greatequal]) } #----------------------------------------------- # a result is an error if InExactError and/or ExactError are TRUE Error <- !(InExTests == FALSE & ExTests == FALSE) return(Error) } ###--------------------------------------------------------------------- DecisionErrorTest <- function(x, ...) { computed <- x$Computed comparison <- x$Reported.Comparison reported <- x$Reported.P.Value testcomp <- as.vector(x$Test.Comparison) # replace 'ns' by > alpha reported[comparison == "ns"] <- alpha comparison[comparison == "ns"] <- ">" #----------------------------------------------- equalequal <- testcomp == "=" & comparison == "=" equalsmall <- testcomp == "=" & comparison == "<" equalgreat <- testcomp == "=" & comparison == ">" smallequal <- testcomp == "<" & comparison == "=" smallsmall <- testcomp == "<" & comparison == "<" smallgreat <- testcomp == "<" & comparison == ">" greatequal <- testcomp == ">" & comparison == "=" greatsmall <- testcomp == ">" & comparison == "<" greatgreat <- testcomp == ">" & comparison == ">" AllTests <- grepl("=|<|>", comparison) if (any(AllTests)) { if (pEqualAlphaSig == TRUE) { AllTests[equalequal] <- (reported[equalequal] <= alpha & computed[equalequal] > alpha) | (reported[equalequal] > alpha & computed[equalequal] <= alpha) AllTests[equalsmall] <- reported[equalsmall] <= alpha & computed[equalsmall] > alpha AllTests[equalgreat] <- reported[equalgreat] >= alpha & computed[equalgreat] <= alpha AllTests[smallequal] <- reported[smallequal] <= alpha & computed[smallequal] >= alpha AllTests[smallsmall] <- reported[smallsmall] <= alpha & 
computed[smallsmall] >= alpha AllTests[greatequal] <- reported[greatequal] > alpha & computed[greatequal] <= alpha AllTests[greatgreat] <- reported[greatgreat] >= alpha & computed[greatgreat] <= alpha } else { AllTests[equalequal] <- (reported[equalequal] < alpha & computed[equalequal] >= alpha) | (reported[equalequal] >= alpha & computed[equalequal] < alpha) AllTests[equalsmall] <- reported[equalsmall] < alpha & computed[equalsmall] >= alpha AllTests[equalgreat] <- reported[equalgreat] >= alpha & computed[equalgreat] < alpha AllTests[smallequal] <- reported[smallequal] < alpha & computed[smallequal] >= alpha AllTests[smallsmall] <- reported[smallsmall] <= alpha & computed[smallsmall] >= alpha AllTests[greatequal] <- reported[greatequal] >= alpha & computed[greatequal] < alpha AllTests[greatgreat] <- reported[greatgreat] >= alpha & computed[greatgreat] < alpha } # these combinations of < & > are logically always correct AllTests[smallgreat] <- FALSE AllTests[greatsmall] <- FALSE } AllTests <- as.logical(AllTests) #----------------------------------------------- return(AllTests) } ###--------------------------------------------------------------------- if (nrow(Res) > 0) { # if indicated, count all tests as onesided if (OneTailedTests == TRUE) { Res$Computed <- Res$Computed / 2 } # check for errors Res$Error <- ErrorTest(Res) Res$DecisionError <- DecisionErrorTest(Res) ###--------------------------------------------------------------------- # check if there would also be a decision error if alpha=.01 or .1 DecisionErrorAlphas <- logical() alphas <- c(.01, .1) for (a in alphas) { alpha <- a DecisionErrorAlphas <- c(DecisionErrorAlphas, DecisionErrorTest(Res)) } if (any(DecisionErrorAlphas[!is.na(DecisionErrorAlphas) & !is.nan(DecisionErrorAlphas)])) { message( "\n Check the significance level. \n \n Some of the p value incongruencies are decision errors if the significance level is .1 or .01 instead of the conventional .05. 
It is recommended to check the actual significance level in the paper or text. Check if the reported p values are a decision error at a different significance level by running statcheck again with 'alpha' set to .1 and/or .01. \n " ) } ###--------------------------------------------------------------------- if (OneTailedTests == FALSE) { # check if there could be one-sided tests in the data set computed <- Res$Computed comparison <- Res$Reported.Comparison reported <- Res$Reported.P.Value raw <- Res$Raw onetail <- computed / 2 OneTail <- ifelse( Res$Error == TRUE & ( grepl("=", comparison) & round(reported, 2) == round(onetail, 2) ) | ( grepl("<", comparison) & reported == .05 & onetail < reported & computed > reported ), TRUE, FALSE ) Res$OneTail <- OneTail if (any(OneTail[!is.na(OneTail)] == TRUE & OneTailedTxt[!is.na(OneTailedTxt)] == FALSE)) { message( "\n Check for one tailed tests. \n \n Some of the p value incongruencies might in fact be one tailed tests. It is recommended to check this in the actual paper or text. Check if the p values would also be incongruent if the test is indeed one sided by running statcheck again with 'OneTailedTests' set to TRUE. To see which Sources probably contain a one tailed test, try unique(x$Source[x$OneTail]) (where x is the statcheck output). 
\n " ) } } ###--------------------------------------------------------------------- # count errors as correct if they'd be correct one-sided # and there was a mention of 'one-sided','one-tailed', or 'directional' in the text if (OneTailedTxt == TRUE) { Res1tailed <- Res Res1tailed$Computed <- Res1tailed$Computed / 2 Res1tailed$Error <- ErrorTest(Res1tailed) Res1tailed$DecisionError <- DecisionErrorTest(Res1tailed) Res$Error[!(( Res$Statistic == "F" | Res$Statistic == "Chi2" | Res$Statistic == "Q" ) & Res$df1 > 1) & Res$OneTailedInTxt == TRUE & Res1tailed$Error == FALSE] <- FALSE Res$DecisionError[!(( Res$Statistic == "F" | Res$Statistic == "Chi2" | Res$Statistic == "Q" ) & Res$df1 > 1) & Res$OneTailedInTxt == TRUE & Res1tailed$DecisionError == FALSE] <- FALSE } ###--------------------------------------------------------------------- # "correct" rounding differences # e.g. t=2.3 could be 2.25 to 2.34999999... with its range of p values correct_round <- numeric() lower <- Res$Value - (.5 / 10 ^ Res$testdec) upper <- Res$Value + (.5 / 10 ^ Res$testdec) for (i in seq_len(nrow(Res))) { if (Res[i, ]$Statistic == "F") { upP <- pf(lower[i], Res[i, ]$df1, Res[i, ]$df2, lower.tail = FALSE) lowP <- pf(upper[i], Res[i, ]$df1, Res[i, ]$df2, lower.tail = FALSE) } else if (Res[i, ]$Statistic == "t") { if (lower[i] < 0) { lowP <- pt(lower[i], Res[i, ]$df2) * 2 upP <- pt(upper[i], Res[i, ]$df2) * 2 } else{ upP <- pt(-1 * lower[i], Res[i, ]$df2) * 2 lowP <- pt(-1 * upper[i], Res[i, ]$df2) * 2 } } else if (Res[i, ]$Statistic == "Chi2" | Res[i, ]$Statistic == "Q" | Res[i, ]$Statistic == "Qw" | Res[i, ]$Statistic == "Qb") { upP <- pchisq(lower[i], Res[i, ]$df1, lower.tail = FALSE) lowP <- pchisq(upper[i], Res[i, ]$df1, lower.tail = FALSE) } else if (Res[i, ]$Statistic == "r") { if (lower[i] < 0) { lowP <- pmin(pt(r2t(lower[i], Res[i, ]$df2), Res[i, ]$df2) * 2, 1) upP <- pmin(pt(r2t(upper[i], Res[i, ]$df2), Res[i, ]$df2) * 2, 1) } else { upP <- pmin(pt(-1 * r2t(lower[i], Res[i, ]$df2), 
Res[i, ]$df2) * 2, 1) lowP <- pmin(pt(-1 * r2t(upper[i], Res[i, ]$df2), Res[i, ]$df2) * 2, 1) } } else if (Res[i, ]$Statistic == "Z" | Res[i, ]$Statistic == "z") { if (lower[i] < 0) { lowP <- pnorm(abs(lower[i]), lower.tail = FALSE) * 2 upP <- pnorm(abs(upper[i]), lower.tail = FALSE) * 2 } else { upP <- pnorm(lower[i], lower.tail = FALSE) * 2 lowP <- pnorm(upper[i], lower.tail = FALSE) * 2 } } if (OneTailedTests == TRUE) { upP <- upP / 2 lowP <- lowP / 2 } if (Res[i, "Reported.Comparison"] == "=") { correct_round[i] <- ifelse( Res[i, ]$Error == TRUE & Res$Reported.P.Value[i] >= round(lowP, Res$dec[i]) & Res$Reported.P.Value[i] <= round(upP, Res$dec[i]), TRUE, FALSE ) } if (Res[i, "Reported.Comparison"] == "<") { correct_round[i] <- ifelse(Res[i, ]$Error == TRUE & Res$Reported.P.Value[i] > lowP, TRUE, FALSE) } if (Res[i, "Reported.Comparison"] == ">") { correct_round[i] <- ifelse(Res[i, ]$Error == TRUE & Res$Reported.P.Value[i] < upP, TRUE, FALSE) } } CorrectRound <- as.logical(correct_round) ###--------------------------------------------------------------------- # p values smaller or equal to zero are errors if (pZeroError == TRUE) { ImpossibleP <- (Res$Reported.P.Value <= 0) } else { ImpossibleP <- (Res$Reported.P.Value < 0) } Res$Error[ImpossibleP] <- TRUE ###--------------------------------------------------------------------- # p values that are not an error can also not be a decision error # this happens sometimes when reported= "p=.05" and e.g. computed=.052... 
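The rounding correction above can be illustrated in isolation. The following is a minimal sketch (hypothetical reported values, not package code) of the idea that a test statistic reported to `testdec` decimals could represent any true value within half a unit of the last digit, so a reported p value is only flagged if it falls outside the p range implied by that interval:

```r
# a test statistic reported as t(28) = 2.30 (2 decimals) could represent
# any true value in [2.295, 2.305)
t_reported <- 2.30
df <- 28
testdec <- 2

lower <- t_reported - .5 / 10^testdec  # 2.295
upper <- t_reported + .5 / 10^testdec  # 2.305

# two-tailed p values at the interval bounds (larger t gives smaller p)
upP  <- pt(-1 * lower, df) * 2
lowP <- pt(-1 * upper, df) * 2

# a reported "p = .03" counts as correct rounding if it falls inside the
# rounded [lowP, upP] range
p_reported <- .03
consistent <- p_reported >= round(lowP, 2) & p_reported <= round(upP, 2)
```
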
# this should be counted as correct
    NoErrorDecisionError <- Res$Error == FALSE & Res$DecisionError == TRUE
    Res$DecisionError[NoErrorDecisionError] <- FALSE

    ###---------------------------------------------------------------------
    # APAfactor: proportion of APA results (that statcheck reads) of total
    # number of p values

    # select only the results of pRes that are from articles with at least
    # 1 statcheck result
    pRes_selection <- pRes[pRes$Source %in% Res$Source, ]

    # select only the statcheck results that are from an article with at
    # least one p value. This is relevant because it sometimes happens that
    # statcheck extracts fewer p values than statcheck results, for instance
    # when a p value appears to be greater than 1.
    Res_selection <- Res[Res$Source %in% pRes_selection$Source, ]

    APA <- by(Res_selection, Res_selection$Source, nrow) /
      by(pRes_selection, pRes_selection$Source, nrow)

    Res$APAfactor <- round(as.numeric(
      apply(Res, 1, function(x) APA[which(names(APA) == x["Source"])])
    ), 2)

    ###---------------------------------------------------------------------

    Res$Error[CorrectRound] <- FALSE
    Res$DecisionError[CorrectRound] <- FALSE

    # final data frame
    Res <- data.frame(
      Source = Res$Source,
      Statistic = Res$Statistic,
      df1 = Res$df1,
      df2 = Res$df2,
      Test.Comparison = Res$Test.Comparison,
      Value = Res$Value,
      Reported.Comparison = Res$Reported.Comparison,
      Reported.P.Value = Res$Reported.P.Value,
      Computed = Res$Computed,
      Raw = Res$Raw,
      Error = Res$Error,
      DecisionError = Res$DecisionError,
      OneTail = Res$OneTail,
      OneTailedInTxt = Res$OneTailedInTxt,
      APAfactor = Res$APAfactor
    )
    class(Res) <- c("statcheck", "data.frame")
  }

  ###---------------------------------------------------------------------

  if (AllPValues == FALSE) {
    # Return message when there are no results
    if (nrow(Res) > 0) {
      return(Res)
    } else {
      Res <- cat("statcheck did not find any results\n")
    }
  } else {
    return(pRes)
  }
}

###########################

r2t <- function(# Transform r values into t values
###
### Function to transform r values into t values by use of raw r and degrees of freedom.
  r,  ### Raw correlation value
  df  ### Degrees of freedom (N-1)
){
  r / (sqrt((1 - r ^ 2) / df))
}

## File: r-cran-statcheck-1.3.0/R/statcheckReport.R

statcheckReport <- function(statcheckOutput, outputFileName, outputDir) {
  # set working directory to output file in statcheck package library
  setwd(system.file("rmd", package = "statcheck"))

  # temporarily save statcheck output as RData in the selected working directory
  save(statcheckOutput, file = "statcheckOutput.RData")

  # run the markdown/knitr script
  statcheckReport_template <- system.file("rmd/statcheckReport_template.Rmd",
                                          package = "statcheck")
  render(statcheckReport_template)

  # save/move the file in/to the specified output directory
  curDir <- system.file("rmd", package = "statcheck")
  file.rename(
    from = paste(curDir, "statcheckReport_template.html", sep = "/"),
    to = paste(outputDir, "/", outputFileName, ".html", sep = "")
  )

  # remove .RData file from package library folder
  file.remove(paste(curDir, "statcheckOutput.RData", sep = "/"))
}

## File: r-cran-statcheck-1.3.0/R/summary.statcheck.R

summary.statcheck <- function(object, ...)
{
  x <- object

  # Source
  Source <- c(as.vector(ddply(x, "Source", function(x) unique(x$Source))[, 1]),
              "Total")

  # Number of p values extracted per article and in total
  pValues <- c(ddply(x, "Source", function(x) nrow(x))[, 2], nrow(x))

  # Number of errors per article and in total
  Errors <- c(ddply(x, "Source", function(x) sum(x$Error, na.rm = TRUE))[, 2],
              sum(x$Error, na.rm = TRUE))

  # Number of decision errors per article and in total
  DecisionErrors <- c(ddply(x, "Source",
                            function(x) sum(x$DecisionError, na.rm = TRUE))[, 2],
                      sum(x$DecisionError, na.rm = TRUE))

  # Results in dataframe
  res <- data.frame(
    Source = Source,
    pValues = pValues,
    Errors = Errors,
    DecisionErrors = DecisionErrors
  )
  class(res) <- c("statcheck", "data.frame")
  return(res)
}

## File: r-cran-statcheck-1.3.0/inst/rmd/statcheckReport_template.Rmd

---
title: 'Report: statcheck results'
output: html_document
---

```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE, eval=TRUE, message=FALSE, warning=FALSE)
```

```{r}
library(statcheck)
pack <- sessionInfo()$otherPkgs
info_statcheck <- pack[[which(names(pack)=="statcheck")]]
version <- info_statcheck$Version
date <- info_statcheck$Date
year <- strsplit(date,"-")[[1]][1]
```

*These results were automatically generated with the R package "statcheck" (Epskamp & Nuijten, `r year`), version `r version`.*

## Output statcheck

The table below reports the statcheck results of your manuscript. The table lists all Null Hypothesis Significance Tests that are reported according to the APA style, and indicates whether the reported p-value matches a recomputed p-value based on the reported test statistic and degrees of freedom.
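As an illustration of such a recomputation (a minimal sketch with hypothetical reported values, not part of the generated report): for a result reported as "t(28) = 2.20, p = .04", the p-value is recomputed from the test statistic and degrees of freedom alone.

```r
# recompute the two-tailed p value for a reported "t(28) = 2.20"
t_val <- 2.20
df <- 28
p_recomputed <- 2 * pt(-abs(t_val), df)

round(p_recomputed, 3)  # ~ .036, which rounds to .04, so "p = .04" is consistent
```
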
The first column provides green dots to indicate a consistent result, yellow dots for an inconsistent result not bearing on significance, and red dots to indicate an inconsistent result that bears on significance at the .05 level.

```{r}
setwd(system.file("rmd", package="statcheck"))
load("statcheckOutput.RData")
stat <- statcheckOutput

stat$Consistency[stat$Error==FALSE & stat$DecisionError==FALSE] <- "Consistent"
stat$Consistency[stat$Error==TRUE & stat$DecisionError==FALSE] <- "**Inconsistency**"
stat$Consistency[stat$Error==TRUE & stat$DecisionError==TRUE] <- "**Decision Inconsistency**"

stat_sparse <- stat[c("Source","Raw","Computed","Consistency")]
colnames(stat_sparse) <- c("Article","Result As Given In Text","Computed P-Value","Consistency")

##
# create color coding for the errors
red <- "$\\bullet$"
yellow <- "$\\bullet$"
green <- "$\\bullet$"

Code <- NA
Code[stat_sparse$Consistency == "**Decision Inconsistency**"] <- red
Code[stat_sparse$Consistency == "**Inconsistency**"] <- yellow
Code[stat_sparse$Consistency == "Consistent"] <- green
##

stat_sparse <- cbind(Code,stat_sparse)

knitr::kable(stat_sparse)
```

***

## What is statcheck?

statcheck (Epskamp & Nuijten, `r year`) is an R package that automatically extracts statistical results from papers and checks the internal consistency of those results. statcheck roughly works as follows:

1. Convert PDF or HTML to raw text
2. Use regular expressions to search for APA reported t-tests, F-tests, $\chi^2$-tests, Q-tests, Z-tests, and correlations.
3. Use reported test statistics and degrees of freedom to recalculate the p-value
4. Compare the reported p-value with the recalculated p-value
5. Flag inconsistent results as an Inconsistency
6. When the reported p-value is significant ($\alpha$ = .05) and the recalculated p-value is not, or vice versa, flag this result as a Decision Inconsistency (also sometimes identified as a "Gross Inconsistency")

statcheck takes into account one-sided testing as follows.
If somewhere in the paper the words "one-sided", "one-tailed", or "directional" are mentioned, *and* the reported p-value would have been consistent if it was a one-sided test, statcheck counts it as a one-sided test and does not flag it as inconsistent.

## Interpretation

The variables in the table above can be interpreted as follows.

Variable | Interpretation
---------|-----------------------------------------------------------------------------
Code | Color coding for inconsistencies. Green = Consistent, Yellow = Inconsistency, Red = Decision Inconsistency
Source | The name of the file that was checked
Raw | The full raw statistical result that was extracted
Computed | The recomputed p-value based on the reported test statistic and degrees of freedom
Consistency | Consistent = The reported p-value is consistent; Inconsistency = The reported p-value is not consistent; Decision Inconsistency = The reported p-value is not consistent and bears on significance ($\alpha$ = .05)

## Disclaimer

Please note that statcheck is an automated procedure and does not offer any explanations for detected inconsistencies (e.g., incorrect rounding, erroneous retrieval from computer output, a copy-paste error, or a typo). Also note that in the case of a flagged inconsistency, statcheck assumes that the p-value is the number that is misreported. However, it could well be the case that an inconsistent result is caused by a wrong test statistic or degrees of freedom. For more details on what statcheck can and cannot do, and a list of common reasons why statcheck either does not find statistics or flags them as inconsistent, see the manual at .

***

## References

Nuijten, M. B., Hartgerink, C. H. J., van Assen, M. A. L. M., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985-2013). *Behavior Research Methods*, *48 (4)*, 1205-1226. DOI: 10.3758/s13428-015-0664-2

Epskamp, S., & Nuijten, M. B. (`r year`).
statcheck: Extract statistics from articles and recompute p-values. R package version `r version`.

r-cran-statcheck-1.3.0/man/checkHTML.Rd

\name{checkHTML} \alias{checkHTML} \title{Extract test statistics from HTML file.} \description{Extracts statistical references from given HTML files.} \usage{checkHTML(files, ...)} \arguments{ \item{files}{Vector of strings containing file paths to HTML files to check.} \item{\dots}{Arguments sent to \code{\link{statcheck}}.} } \details{See \code{\link{statcheck}} for more details. Use \code{\link{checkHTMLdir}} to import all HTML files in a given directory at once. Note that the conversion to plain text and extraction of statistics can result in errors. Some statistical values can be missed, especially if the notation is unconventional. It is recommended to manually check some of the results.} \value{A data frame containing for each extracted statistic: \item{Source}{Name of the file of which the statistic is extracted} \item{Statistic}{Character indicating the statistic that is extracted} \item{df1}{First degree of freedom} \item{df2}{Second degree of freedom (if applicable)} \item{Value}{Reported value of the statistic} \item{Reported.Comparison}{Reported comparison, when importing from pdf this will often not be converted properly} \item{Reported.P.Value}{The reported p-value, or NA if the reported value was NS} \item{Computed}{The recomputed p-value} \item{Raw}{Raw string of the statistical reference that is extracted} \item{InExactError}{Error in inexactly reported p values as compared to the recalculated p values} \item{ExactError}{Error in exactly reported p values as compared to the recalculated p values} \item{DecisionError}{The reported result is significant whereas the recomputed result is not, or vice versa.}} \author{Sacha Epskamp &
Michele B. Nuijten } \seealso{\code{\link{statcheck}}, \code{\link{checkPDF}}, \code{\link{checkPDFdir}}, \code{\link{checkHTMLdir}}, \code{\link{checkdir}}} \examples{ # given that my HTML file is called "article.html" # and I saved it in "C:/mydocuments/articles" # checkHTML("C:/mydocuments/articles/article.html") }

r-cran-statcheck-1.3.0/man/checkHTMLdir.Rd

\name{checkHTMLdir} \alias{checkHTMLdir} \title{Extract test statistics from all HTML files in a folder.} \description{Extracts statistical references from a directory with HTML versions of articles. By default a GUI window is opened that allows you to choose the directory (using tcltk).} \usage{checkHTMLdir(dir, subdir = TRUE, extension=TRUE, ...)} \arguments{ \item{dir}{String indicating the directory to be used.} \item{subdir}{Logical indicating whether you also want to check subfolders. Defaults to TRUE} \item{extension}{Logical, indicating whether the HTML extension should be checked. Defaults to TRUE} \item{\dots}{Arguments sent to \code{\link{statcheck}}} } \details{See \code{\link{statcheck}} for more details. Use \code{\link{checkHTML}} to import individual HTML files. Note that the conversion to plain text and extraction of statistics can result in errors. Some statistical values can be missed, especially if the notation is unconventional.
It is recommended to manually check some of the results.} \value{A data frame containing for each extracted statistic: \item{Source}{Name of the file of which the statistic is extracted} \item{Statistic}{Character indicating the statistic that is extracted} \item{df1}{First degree of freedom} \item{df2}{Second degree of freedom (if applicable)} \item{Test.Comparison}{Reported comparison of the test statistic, when importing from pdf this will often not be converted properly} \item{Value}{Reported value of the statistic} \item{Reported.Comparison}{Reported comparison, when importing from pdf this might not be converted properly} \item{Reported.P.Value}{The reported p-value, or NA if the reported value was NS} \item{Computed}{The recomputed p-value} \item{Raw}{Raw string of the statistical reference that is extracted} \item{Error}{The computed p value is not congruent with the reported p value} \item{DecisionError}{The reported result is significant whereas the recomputed result is not, or vice versa.} \item{OneTail}{Logical. Is it likely that the reported p value resulted from a correction for one-sided testing?} \item{OneTailedInTxt}{Logical. Does the text contain the string "sided", "tailed", and/or "directional"?} \item{CopyPaste}{Logical. Does the exact string of the extracted raw results occur anywhere else in the article?}} \author{Sacha Epskamp & Michele B. 
Nuijten } \seealso{\code{\link{statcheck}}, \code{\link{checkPDF}}, \code{\link{checkPDFdir}}, \code{\link{checkHTML}}, \code{\link{checkdir}}} \examples{ # with this command a menu will pop up from which you can select the directory with HTML articles # checkHTMLdir() # you could also specify the directory beforehand # for instance: # DIR <- "C:/mydocuments/articles" # checkHTMLdir(DIR) }

r-cran-statcheck-1.3.0/man/checkPDF.Rd

\name{checkPDF} \alias{checkPDF} \title{Extract statistics and recompute p-values from PDF files.} \description{Extracts statistical values (such as t and F statistics) from PDF files. To this end the "pdftotext" program is used to convert PDF files to plain text files. This must be installed and PATH variables must be properly set so that this program can be used from command line.} \usage{checkPDF(files, ...)} \arguments{ \item{files}{Vector with paths to the PDF files.} \item{\dots}{Arguments sent to \code{\link{statcheck}}} } \details{See \code{\link{statcheck}} for more details. Use \code{\link{checkPDFdir}} to import every PDF file in a given directory. Currently only statistics in the form "(stat (df1, df2) = value, p = value)" are extracted. Note that this function is still in development. Some statistical values can be missed, especially if the notation is unconventional.
It is recommended to manually check some of the results.} \value{A data frame containing for each extracted statistic: \item{Source}{Name of the file of which the statistic is extracted} \item{Statistic}{Character indicating the statistic that is extracted} \item{df1}{First degree of freedom} \item{df2}{Second degree of freedom (if applicable)} \item{Value}{Reported value of the statistic} \item{Reported.Comparison}{Reported comparison, when importing from pdf this will often not be converted properly} \item{Reported.P.Value}{The reported p-value, or NA if the reported value was NS} \item{Computed}{The recomputed p-value} \item{Raw}{Raw string of the statistical reference that is extracted} \item{InExactError}{Error in inexactly reported p values as compared to the recalculated p values} \item{ExactError}{Error in exactly reported p values as compared to the recalculated p values} \item{DecisionError}{The reported result is significant whereas the recomputed result is not, or vice versa.}} \author{Sacha Epskamp & Michele B. Nuijten } \seealso{\code{\link{statcheck}}, \code{\link{checkPDFdir}}} \examples{ # given that my PDF file is called "article.pdf" # and I saved it in "C:/mydocuments/articles" # checkPDF("C:/mydocuments/articles/article.pdf") }

r-cran-statcheck-1.3.0/man/checkPDFdir.Rd

\name{checkPDFdir} \alias{checkPDFdir} \title{Extract statistics and recompute p values from a directory with PDF files.} \description{Extracts statistical references from a directory with PDF files. The "pdftotext" program (http://www.foolabs.com/xpdf/download.html) is used to convert PDF files to plain text files. This must be installed and PATH variables must be properly set so that this program can be used from command line.
By default a GUI window is opened that allows you to choose the directory (using tcltk).} \usage{checkPDFdir(dir, subdir = TRUE, ...)} \arguments{ \item{dir}{String indicating the directory to be used.} \item{subdir}{Logical indicating whether you also want to check subfolders. Defaults to TRUE} \item{\dots}{Arguments sent to \code{\link{statcheck}}.} } \details{See \code{\link{statcheck}} for more details. Use \code{\link{checkPDF}} to import individual PDF files. Currently only statistics in the form "stat (df1, df2) = value, p = value" are extracted. Because the Chi-square symbol cannot be represented in plain text, it is often lost in the conversion. Because of this, Chi-square values are extracted by finding all statistical references with one degree of freedom that do not follow the symbol "t" or "r". While this does extract most Chi-square values, it is possible that other statistics, possibly due to unconventional notation, are also extracted and reported as chi-square values. Depending on the PDF file the comparison operators can sometimes not be converted correctly, causing them not to be reported in the output. Using HTML versions of articles and the similar function \code{\link{checkHTMLdir}} is recommended for more stable results. Note that the conversion to plain text and extraction of statistics can result in errors. Some statistical values can be missed, especially if the notation is unconventional.
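The chi-square heuristic described above can be sketched in a few lines of R. The pattern below is a simplified, hypothetical illustration only; statcheck's actual regular expressions are more elaborate:

```r
# Illustrative sketch (NOT statcheck's real regex): after PDF conversion the
# chi-square symbol is often lost, so pick up references with a single
# degrees-of-freedom slot that are not preceded by the symbol "t" or "r".
extract_chi2 <- function(txt) {
  m <- gregexpr("(?<![tr])\\(\\s*\\d+\\s*\\)\\s*=\\s*\\d+\\.?\\d*", txt, perl = TRUE)
  regmatches(txt, m)[[1]]
}

txt <- "t(38) = 2.35, p < .05; (1) = 6.78, p < .01"
extract_chi2(txt)  # the t-test is skipped; the orphaned "(1) = 6.78" remains

# the recomputed p value for the extracted chi-square result
pchisq(6.78, df = 1, lower.tail = FALSE)  # about .009, consistent with p < .01
```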
It is recommended to manually check some of the results.} \value{A data frame containing for each extracted statistic: \item{Source}{Name of the file of which the statistic is extracted} \item{Statistic}{Character indicating the statistic that is extracted} \item{df1}{First degree of freedom} \item{df2}{Second degree of freedom (if applicable)} \item{Test.Comparison}{Reported comparison of the test statistic, when importing from pdf this will often not be converted properly} \item{Value}{Reported value of the statistic} \item{Reported.Comparison}{Reported comparison, when importing from pdf this might not be converted properly} \item{Reported.P.Value}{The reported p-value, or NA if the reported value was NS} \item{Computed}{The recomputed p-value} \item{Raw}{Raw string of the statistical reference that is extracted} \item{Error}{The computed p value is not congruent with the reported p value} \item{DecisionError}{The reported result is significant whereas the recomputed result is not, or vice versa.} \item{OneTail}{Logical. Is it likely that the reported p value resulted from a correction for one-sided testing?} \item{OneTailedInTxt}{Logical. Does the text contain the string "sided", "tailed", and/or "directional"?} \item{CopyPaste}{Logical. Does the exact string of the extracted raw results occur anywhere else in the article?}} \author{Sacha Epskamp & Michele B. 
Nuijten } \seealso{\code{\link{statcheck}}, \code{\link{checkPDF}}, \code{\link{checkHTMLdir}}, \code{\link{checkHTML}}, \code{\link{checkdir}}} \examples{ # with this command a menu will pop up from which you can select the directory with PDF articles # checkPDFdir() # you could also specify the directory beforehand # for instance: # DIR <- "C:/mydocuments/articles" # checkPDFdir(DIR) }

r-cran-statcheck-1.3.0/man/checkdir.Rd

\name{checkdir} \alias{checkdir} \title{Extract test statistics from all HTML and PDF files in a folder.} \description{Extracts statistical references from a directory with HTML and PDF files. The "pdftotext" program is used to convert PDF files to plain text files. This must be installed and PATH variables must be properly set so that this program can be used from command line. By default a GUI window is opened that allows you to choose the directory (using tcltk).} \usage{checkdir(dir, subdir = TRUE, ...)} \arguments{ \item{dir}{String indicating the directory to be used. If this is left empty, a window will pop up from which you can choose a directory.} \item{subdir}{Logical indicating whether you also want to check subfolders. Defaults to TRUE} \item{\dots}{Arguments sent to \code{\link{statcheck}}.} } \details{See \code{\link{statcheck}} for more details. This function is a wrapper around both \code{\link{checkPDFdir}} for PDF files and \code{\link{checkHTMLdir}} for HTML files. Depending on the PDF file the comparison operators (=, <, >) can sometimes not be converted correctly, causing them not to be reported in the output. Using HTML versions of articles is recommended for more stable results. Note that the conversion to plain text and extraction of statistics can result in errors. Some statistical values can be missed, especially if the notation is unconventional.
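The wrapper logic described above can be sketched in base R. The helper name and details below are hypothetical; the real checkdir() also handles the tcltk directory dialog and forwards its arguments to statcheck:

```r
# Hypothetical sketch of checkdir()'s dispatch: collect PDF and HTML files
# (recursively when subdir = TRUE) and route each set to the matching checker.
find_article_files <- function(dir, subdir = TRUE) {
  all_files <- list.files(dir, recursive = subdir, full.names = TRUE)
  # the real function would pass $pdf to checkPDF() and $html to checkHTML()
  list(
    pdf  = all_files[grepl("\\.pdf$",   all_files, ignore.case = TRUE)],
    html = all_files[grepl("\\.html?$", all_files, ignore.case = TRUE)]
  )
}

# demonstrate on a throwaway directory
d <- file.path(tempdir(), "articles")
dir.create(d, showWarnings = FALSE)
file.create(file.path(d, c("a.pdf", "b.html", "c.htm", "notes.txt")))
res <- find_article_files(d)
basename(res$pdf)   # only the PDF file
basename(res$html)  # the .html and .htm files
```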
It is recommended to manually check some of the results.} \value{A data frame containing for each extracted statistic: \item{Source}{Name of the file of which the statistic is extracted} \item{Statistic}{Character indicating the statistic that is extracted} \item{df1}{First degree of freedom} \item{df2}{Second degree of freedom (if applicable)} \item{Test.Comparison}{Reported comparison of the test statistic, when importing from pdf this will often not be converted properly} \item{Value}{Reported value of the statistic} \item{Reported.Comparison}{Reported comparison, when importing from pdf this might not be converted properly} \item{Reported.P.Value}{The reported p-value, or NA if the reported value was NS} \item{Computed}{The recomputed p-value} \item{Raw}{Raw string of the statistical reference that is extracted} \item{Error}{The computed p value is not congruent with the reported p value} \item{DecisionError}{The reported result is significant whereas the recomputed result is not, or vice versa.} \item{OneTail}{Logical. Is it likely that the reported p value resulted from a correction for one-sided testing?} \item{OneTailedInTxt}{Logical. Does the text contain the string "sided", "tailed", and/or "directional"?} \item{CopyPaste}{Logical. Does the exact string of the extracted raw results occur anywhere else in the article?}} \author{Sacha Epskamp & Michele B. 
Nuijten } \seealso{\code{\link{statcheck}}, \code{\link{checkPDF}}, \code{\link{checkPDFdir}}, \code{\link{checkHTMLdir}}, \code{\link{checkHTML}}} \examples{ # with this command a menu will pop up from which you can select the directory with articles # checkdir() # you could also specify the directory beforehand # for instance: # DIR <- "C:/mydocuments/articles" # checkdir(DIR) }

r-cran-statcheck-1.3.0/man/identify.statcheck.Rd

\name{identify.statcheck} \alias{identify.statcheck} \title{Identify specific points in a \code{statcheck} plot.} \description{With this function you can simply point and click on the datapoints in the plot to see the corresponding statcheck details, such as the paper from which the data came and the exact statistical results.} \usage{\method{identify}{statcheck}(x, alpha = 0.05, ...)} \arguments{ \item{x}{a \code{statcheck} object.} \item{alpha}{assumed level of significance in the scanned texts. Defaults to .05.} \item{\dots}{additional arguments to be passed on to the plot method.} } \value{This function returns both a plot and a dataframe. For the contents of the dataframe see \code{\link{statcheck}}.} \author{Sacha Epskamp & Michele B. Nuijten } \seealso{\code{\link{statcheck}}} \examples{ # given that the articles of interest are saved in "DIR" # DIR <- "C:/mydocuments/articles" # stat_result <- checkdir(DIR) # identify(stat_result) ## Further instructions: # click on one or multiple points of interest # press Esc # a dataframe with information on the selected points will appear }

r-cran-statcheck-1.3.0/man/plot.statcheck.Rd

\name{plot.statcheck} \alias{plot.statcheck} \title{Plot method for "statcheck"} \description{Function for plotting of "statcheck" objects.
Reported p values are plotted against recalculated p values, which allows the user to easily spot whether articles contain miscalculations of statistical results. } \usage{\method{plot}{statcheck}(x, alpha = 0.05, APAstyle = TRUE, group = NULL, ...)} \arguments{ \item{x}{a "statcheck" object. See \code{\link{statcheck}}.} \item{alpha}{assumed level of significance in the scanned texts. Defaults to .05. } \item{APAstyle}{if TRUE, prints plot in APA style} \item{group}{indicate grouping variable to facet plot. Only works when APAstyle==TRUE} \item{\dots}{arguments to be passed to methods, such as graphical parameters (see \code{\link{par}}).} } \details{If APAstyle = FALSE, inconsistencies between the reported and the recalculated p value are indicated with an orange dot. Recalculations of the p value that render a previously non-significant result (p >= .05) as significant (p < .05), and vice versa, are considered gross errors, and are indicated with a red dot. Exactly reported p values (i.e. p = ..., as opposed to p < ... or p > ...) are indicated with a diamond.} \author{Sacha Epskamp & Michele B. Nuijten . Many thanks to John Sakaluk who adapted the plot code to create graphs in APA style.} \seealso{\code{\link{statcheck}}}

r-cran-statcheck-1.3.0/man/statcheck-package.Rd

\name{statcheck-package} \alias{statcheck-package} \docType{package} \title{Extract statistics from articles and recompute p values} \description{Extract statistics from articles and recompute p values.} \details{ \tabular{ll}{Package: \tab statcheck\cr Type: \tab Package\cr Title: \tab Extract statistics from articles and recompute p values\cr Version: \tab 1.0.0\cr Date: \tab 2014-11-15\cr Author: \tab Sacha Epskamp & Michele B. Nuijten \cr Maintainer: \tab Michele B. Nuijten \cr Depends: \tab R (>= 2.14.2), plyr\cr License: \tab GPL-2\cr LazyLoad: \tab yes\cr ByteCompile: \tab yes\cr} } \author{Sacha Epskamp & Michele B.
Nuijten } \keyword{ package }

r-cran-statcheck-1.3.0/man/statcheck.Rd

\name{statcheck} \alias{statcheck} \title{Extract statistics and recompute p-values.} \description{This function extracts statistics from strings and returns the extracted values, reported p-values and recomputed p-values. The package relies on the program "pdftotext"; see the "Details" section for notes on its installation.} \usage{statcheck(x, stat = c("t", "F", "cor", "chisq", "Z", "Q"), OneTailedTests = FALSE, alpha = 0.05, pEqualAlphaSig = TRUE, pZeroError = TRUE, OneTailedTxt = FALSE, AllPValues = FALSE)} \arguments{ \item{x}{A vector of strings.} \item{stat}{"t" to extract t-values, "F" to extract F-values, "cor" to extract correlations, "chisq" to extract chi-square values, "Z" to extract Z-values, and "Q" to extract Q-values (within, between, or in general).} \item{OneTailedTests}{Logical. Do we assume that all reported tests are one tailed (TRUE) or two tailed (FALSE, default)?} \item{alpha}{Assumed level of significance in the scanned texts. Defaults to .05.} \item{pEqualAlphaSig}{Logical. If TRUE, statcheck counts p <= alpha as significant (default), if FALSE, statcheck counts p < alpha as significant} \item{pZeroError}{Logical. If TRUE, statcheck counts p=.000 as an error (because a p-value is never exactly zero, and should be reported as < .001), if FALSE, statcheck does not count p=.000 automatically as an error.} \item{OneTailedTxt}{Logical. If TRUE, statcheck searches the text for "one-sided", "one-tailed", and "directional" to identify the possible use of one-sided tests. If one or more of these strings is found in the text AND the result would have been correct if it was a one-sided test, the result is assumed to be indeed one-sided and is counted as correct.} \item{AllPValues}{Logical.
If TRUE, the output will consist of a dataframe with all detected p values, including those that were not part of the full results in APA format} } \details{statcheck uses regular expressions to find statistical results in APA format. When a statistical result deviates from APA format, statcheck will not find it. The APA formats that statcheck uses are: t(df) = value, p = value; F(df1, df2) = value, p = value; r(df) = value, p = value; [chi]2 (df, N = value) = value, p = value (N is optional, delta G is also included); Z = value, p = value; Q(df) = value, p = value (including Qw, Qwithin, Qb, and Qbetween). All regular expressions take into account that test statistics and p values may be exactly (=) or inexactly (< or >) reported. Different spacing has also been taken into account. This function can be used if the text of articles has already been imported in R. To import text from PDF files and automatically send the results to this function use \code{\link{checkPDFdir}} or \code{\link{checkPDF}}. To import text from HTML files use the similar functions \code{\link{checkHTMLdir}} or \code{\link{checkHTML}}. Finally, \code{\link{checkdir}} can be used to import text from both PDF and HTML files in a folder. Note that the conversion from PDF (and sometimes also HTML) to plain text and extraction of statistics can result in errors. Some statistical values can be missed, especially if the notation is unconventional. It is recommended to manually check some of the results. PDF files should automatically be converted to plain text files. However, if this does not work, it might help to manually install the program "pdftotext". You can obtain pdftotext from \code{http://www.foolabs.com/xpdf/download.html}. Download and unzip the precompiled binaries. Next, add the folder with the binaries to the PATH variables so that this program can be used from command line.
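The extract-and-recompute cycle described in these details can be illustrated with a self-contained sketch for a single t-test. The regular expression below is a toy version for illustration; statcheck's real patterns cover many more spacing and notation variants:

```r
# Toy illustration of statcheck's core check for one APA-style t-test:
# extract "t(df) = value, p = value", recompute the two-tailed p from the
# reported t and df, and compare it with the reported p after rounding.
txt <- "The effect was significant, t(28) = 2.20, p = .04."

parts <- regmatches(
  txt,
  regexec("t\\((\\d+)\\)\\s*=\\s*([0-9.]+),\\s*p\\s*([<>=])\\s*([0-9]*\\.[0-9]+)", txt)
)[[1]]

df    <- as.numeric(parts[2])   # 28
tval  <- as.numeric(parts[3])   # 2.20
comp  <- parts[4]               # "="
p_rep <- as.numeric(parts[5])   # 0.04

p_comp <- 2 * pt(-abs(tval), df)  # recomputed two-tailed p value

# for an exactly reported p (comp == "="): consistent if the reported
# p matches the recomputed p after rounding to the reported precision
consistent <- comp == "=" && round(p_comp, 2) == p_rep
consistent
```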
Also, note that a seemingly inconsistent p value can still be correct when we take into account that the test statistic might have been rounded after calculating the corresponding p value. For instance, a reported t value of 2.35 could correspond to an actual value of 2.345 to 2.354 with a range of p values that can slightly deviate from the recomputed p value. Statcheck will not count cases like this as errors.} \value{A data frame containing for each extracted statistic: \item{Source}{Name of the file of which the statistic is extracted} \item{Statistic}{Character indicating the statistic that is extracted} \item{df1}{First degree of freedom (if applicable)} \item{df2}{Second degree of freedom} \item{Test.Comparison}{Reported comparison of the test statistic, when importing from pdf this will often not be converted properly} \item{Value}{Reported value of the statistic} \item{Reported.Comparison}{Reported comparison, when importing from pdf this might not be converted properly} \item{Reported.P.Value}{The reported p-value, or NA if the reported value was NS} \item{Computed}{The recomputed p-value} \item{Raw}{Raw string of the statistical reference that is extracted} \item{Error}{The computed p value is not congruent with the reported p value} \item{DecisionError}{The reported result is significant whereas the recomputed result is not, or vice versa.} \item{OneTail}{Logical. Is it likely that the reported p value resulted from a correction for one-sided testing?} \item{OneTailedInTxt}{Logical. Does the text contain the string "sided", "tailed", and/or "directional"?} \item{APAfactor}{What proportion of all detected p-values was part of a fully APA reported result?} } \author{Sacha Epskamp & Michele B. 
Nuijten } \seealso{\code{\link{checkPDF}}, \code{\link{checkHTMLdir}}, \code{\link{checkHTML}}, \code{\link{checkdir}}} \examples{ txt <- "blablabla the effect was very significant (t(100)=1, p < 0.001)" statcheck(txt) }

r-cran-statcheck-1.3.0/man/statcheckReport.Rd

\name{statcheckReport} \alias{statcheckReport} \title{Generate HTML report for statcheck output.} \description{This function uses R Markdown to generate a nicely formatted HTML report of statcheck output.} \usage{statcheckReport(statcheckOutput, outputFileName, outputDir)} \arguments{ \item{statcheckOutput}{statcheck output of one of the following functions: statcheck(), checkPDF(), checkHTML(), checkdir(), checkPDFdir(), checkHTMLdir().} \item{outputFileName}{String specifying the file name under which you want to save the generated HTML report. The extension ".html" is automatically added, so it does not need to be specified in this argument.} \item{outputDir}{String specifying the directory in which you want to save the generated HTML report.} } \details{This function temporarily saves the inserted statcheck output as an .RData file in the "rmd" folder in the statcheck package directory. This file is then loaded by the .Rmd template that is saved in the same "rmd" folder. After the HTML report is generated, the .RData file is removed again.} \value{An HTML report, saved in the directory specified in the argument "outputDir".} \author{Sacha Epskamp & Michele B.
Nuijten } \seealso{\code{\link{statcheck}}, \code{\link{checkPDF}}, \code{\link{checkPDFdir}}, \code{\link{checkHTMLdir}}, \code{\link{checkHTML}}} \examples{\dontrun{ # first generate statcheck output, for instance by using the statcheck() function txt <- "blablabla the effect was very significant (t(100)=1, p < 0.001)" stat <- statcheck(txt) # next, use this output to generate a nice HTML report of the results statcheckReport(stat, outputFileName="statcheckHTMLReport", outputDir="C:/mydocuments/results") } # you can now find your HTML report in the folder # "C:/mydocuments/results" under the name "statcheckHTMLReport.html". }

r-cran-statcheck-1.3.0/man/summary.statcheck.Rd

\name{summary.statcheck} \alias{summary.statcheck} \title{Summary method for \code{statcheck}.} \description{Gives the summaries for a \code{statcheck} object. } \usage{\method{summary}{statcheck}(object, ...)} \arguments{ \item{object}{a \code{statcheck} object.} \item{\dots}{additional arguments affecting the summary produced.} } \value{A data frame containing for each article: \item{Source}{Name of the file of which the statistic is extracted} \item{pValues}{The number of reported p values per article} \item{Errors}{The number of errors per article} \item{DecisionErrors}{The number of errors that caused a non-significant result to be reported as significant (or vice versa) per article}} \author{Sacha Epskamp & Michele B. Nuijten } \seealso{\code{\link{statcheck}}} \examples{ Text <- "blablabla the effect was very significant (t(100)=1, p < 0.001)" Stat <- statcheck(Text) summary(Stat) }
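The per-source tallies that summary.statcheck() produces (see R/summary.statcheck.R above, which uses plyr::ddply) can be sketched in base R on mock data. Everything below is a hypothetical illustration, not package code:

```r
# Base-R sketch of the summary.statcheck() aggregation: per article, count
# the extracted p values, the inconsistencies ("Errors"), and the decision
# inconsistencies, then append a "Total" row. The data frame is mock data.
x <- data.frame(
  Source        = c("a.pdf", "a.pdf", "b.pdf"),
  Error         = c(TRUE, FALSE, TRUE),
  DecisionError = c(FALSE, FALSE, TRUE)
)

per_source <- data.frame(
  Source         = sort(unique(x$Source)),
  pValues        = as.vector(tapply(x$Error, x$Source, length)),
  Errors         = as.vector(tapply(x$Error, x$Source, sum)),
  DecisionErrors = as.vector(tapply(x$DecisionError, x$Source, sum))
)

res <- rbind(
  per_source,
  data.frame(Source = "Total", pValues = nrow(x),
             Errors = sum(x$Error), DecisionErrors = sum(x$DecisionError))
)
res
```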