DiagnosisMed/gnugpl2.txt

		    GNU GENERAL PUBLIC LICENSE
		       Version 2, June 1991

 Copyright (C) 1989, 1991 Free Software Foundation, Inc.
 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

			    Preamble

  The licenses for most software are designed to take away your freedom to share and change it.  By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users.  This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it.  (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.)  You can apply it to your programs, too.

  When we speak of free software, we are referring to freedom, not price.  Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.

  To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights.  These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.

  For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have.  You must make sure that they, too, receive or can get the source code.  And you must show them these terms so they know their rights.
We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). 
Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. 
If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. 
(This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. 
Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. 
Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. 
Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.

			    NO WARRANTY

  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

		     END OF TERMS AND CONDITIONS

DiagnosisMed/man/diagnosis.Rd

\name{diagnosis}
\alias{diagnosis}
\alias{summary.diag}
\alias{print.diag}
\alias{plot.diag}
\title{Diagnostic test accuracy evaluation}
\description{\command{diagnosis} estimates sensitivity, specificity, predictive values, likelihood ratios, area under the ROC curve and other validity measures for binary diagnostic test evaluation.
It accepts as input either columns from a dataset or vectors, a 2 x 2 table, or numbers representing true positives, false negatives, false positives and true negatives.
\command{plot} for \command{diagnosis} draws a simple nomogram, and \command{summary} for \command{diagnosis} creates an output in a table format that allows the results to be easily exported to a spreadsheet.
}
\usage{
diagnosis(a,b,c,d, CL = 0.95, print = TRUE, plot = FALSE)
\method{plot}{diag}(x, print = FALSE, ...)
\method{print}{diag}(x, ...)
\method{summary}{diag}(object, ...)
}
\arguments{
  \item{a}{A number representing True Positives (TP) from a 2 x 2 table. Also, TP could be either a 2 x 2 table (see below) or a column in a dataset representing the reference standard.}
  \item{b}{A number representing False Negatives (FN) from a 2 x 2 table. If TP is a column in a dataset, FN should also be a column in the dataset, representing the index test.}
  \item{c}{A number representing False Positives from a 2 x 2 table.}
  \item{d}{A number representing True Negatives from a 2 x 2 table.}
  \item{print}{If TRUE, diagnosis will print in the output window the statistics resulting from the 2 x 2 table. For plot, this will print in the output window a table with all pre-test probabilities and their corresponding post-test probabilities.}
  \item{plot}{There are two plot options. If plot is called in \command{diagnosis}, then a ROC curve of the test under evaluation is plotted. If plot is called from an object storing diagnosis output (see example), then a nomogram is plotted. These plots may later be edited, as any other plot, with titles, legends etc. Default is FALSE.}
  \item{x}{For the plot and print functions, x is an object assigned with diagnosis output.}
  \item{CL}{Confidence limits for confidence intervals. Must be a numeric value between 0 and 1.
Default is 0.95.}
  \item{object}{For the summary function, 'object' is an object assigned with diagnosis output.}
  \item{...}{Other options passed to \link[base]{print}, \link[graphics]{plot.default} or \link[base]{summary}.}
}
\details{
In \command{diagnosis}, the values entered must be either two variables in a data frame, a 2 x 2 table, or numbers corresponding to 2 x 2 table cells.
If vectors or columns from a dataset, the first one should be the gold standard and the second should be the index test. These two variables must be coded either as numeric - 0 for a negative and 1 for a positive test - or with the words "positive" and "negative" or "presence" and "absence".
In an older version there was a diagnosisI function, which was replaced by the diagnosis function.
The values of a 2 x 2 table can be inputted as: TP is true positive; TN is true negative; FP is false positive and FN is false negative.
Sensitivity, Specificity, Predictive values and Accuracy confidence limits rely on the binomial distribution, which, unlike the normal approximation of asymptotic theory, does not give results outside [0,1]. DOR, Likelihood ratios and Youden J index confidence limits rely on the normal approximation (Wald method for the likelihood ratios).
The AUC (area under the ROC curve) is estimated by the trapezoidal method (see below).
Also, this function has a \command{summary} method which returns a matrix (identical to the default print output) that allows the results to be easily exported to a spreadsheet or to an odt file (with OdfWeave) in a table format (see example).
If the input is a 2 x 2 table it should be formatted as:
\tabular{cll}{
 \tab TN \tab FN \cr
 \tab FP \tab TP \cr
}
\command{plot.diag} will draw a very simple nomogram, like many examples from Wikipedia \url{http://en.wikipedia.org/wiki/Nomogram}. This is not the generic nomogram shown in many evidence-based medicine texts, because this one shows only pre-test and post-test variations with a fixed positive likelihood ratio.
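The fixed-PLR relation that such a nomogram encodes can also be written out directly. A minimal sketch follows; the function name and inputs are illustrative, not part of the package:

```r
# Post-test probability from a pre-test probability and a positive
# likelihood ratio, via the Bayes theorem on the odds scale.
post.test <- function(pre.test, plr) {
  pre.odds  <- pre.test / (1 - pre.test)  # probability -> odds
  post.odds <- pre.odds * plr             # Bayes: post odds = pre odds x PLR
  post.odds / (1 + post.odds)             # odds -> probability
}
post.test(0.20, 8)  # a PLR of 8 lifts a 20% pre-test probability to 2/3
```

Reading the nomogram with a ruler reproduces exactly this computation for the positive likelihood ratio stored in the object.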
This likelihood is a statistic from an object created by the \command{diagnosis} function. Its usage is the same as applying the Bayes theorem, where the pre-test odds times the positive likelihood ratio equals the post-test odds (transforming the odds to probabilities). To use it, draw, with a ruler, a vertical line from a desired pre-test probability; to find the corresponding post-test probability, draw a horizontal line from the intersection of the curve and the vertical line toward the vertical axis.
}
\value{
A 2 x 2 table from which the validity measures are calculated.
  \item{Sample size}{The number of subjects analyzed.}
  \item{Prevalence}{The proportion classified as with the target condition by the reference standard.}
  \item{Sensitivity}{The probability of the test correctly classifying subjects with the target condition (TP/(TP+FN)).}
  \item{Specificity}{The probability of the test correctly classifying subjects without the target condition (TN/(TN+FP)).}
  \item{Predictive values}{The probabilities of being with (positive predictive value) (TP/(TP+FP)) or without (negative predictive value) the target condition given a test result (TN/(TN+FN)).}
  \item{Likelihood ratios}{The probability of a test result in people with the target condition, divided by the probability of the same test result in people without the target condition (PLR = Se/(1-Sp); NLR = (1-Se)/Sp).}
  \item{Diagnostic odds ratio}{Represents the overall discrimination of a dichotomous test, and is equivalent to the ratio of PLR and NLR.}
  \item{Error trade off}{The amount of false positives traded for false negatives at each decision threshold, here expressed as an odds; for binary results there is only one threshold.}
  \item{Error rate}{Expresses how many errors we make when we diagnose patients with an abnormal test result as diseased, and those with a normal test result as non-diseased ((FP+FN)/sample size).}
  \item{Accuracy}{Overall measure that expresses the capacity of the test to correctly classify
subjects with and without the target condition ((TP+TN)/(sample size)).}
  \item{Area under ROC curve}{Overall measure of accuracy - here the method is the trapezoidal. It gives results identical to (Se+Sp)/2.}
}
\references{
Knottnerus JA. The Evidence Base of Clinical Diagnosis; BMJ Books, 2002.

Xiao-Hua Zhou, Nancy A. Obuchowski, Donna K. McClish. Statistical Methods in Diagnostic Medicine; Wiley, 2002.

Simel D, Samsa G, Matchar D (1991). Likelihood ratios with confidence: Sample size estimation for diagnostic test studies. Journal of Clinical Epidemiology 44: 763 - 770.
}
\author{Pedro Brasil - \email{diagnosismed-list@lists.r-forge.r-project.org}}
\note{Bug reports, malfunctioning, or suggestions for further improvements or contributions can be sent, preferentially, through the DiagnosisMed email list, or the R-Forge website \url{https://r-forge.r-project.org/projects/diagnosismed/}.
}
\seealso{\code{\link{LRgraph}}, \code{\link{ROC}}, \link[epitools]{binom.conf.int}, \link[epibasix]{sensSpec}, \link[epiR]{epi.tests}, \link[Design]{nomogram}, \link[epiR]{epi.nomogram}}
\examples{
# Simulating a dataset
mydata <- as.data.frame(rbind(
  cbind(rep(c("positive"),18),rep(c("negative"),18)),
  cbind(rep(c("positive"),72),rep(c("positive"),72)),
  cbind(rep(c("negative"),25),rep(c("positive"),25)),
  cbind(rep(c("negative"),149),rep(c("negative"),149))
))
colnames(mydata) <- c('culture','serology')
# A little description of the data set to check if it is ok!
str(mydata)
# Attaching the data set and checking
attach(mydata)
# Running the diagnosis analysis
diagnosis(culture,serology)
# Simulating a table
mytable <- matrix(c(149,18,25,72), nrow = 2, ncol = 2, byrow = TRUE,
                  dimnames = list(enzyme=c('absent','present'), citology=c('absent','present')))
# Running the analysis from a 2 x 2 table
diagnosis(mytable)
# Inserting values as isolated numbers
diagnosis(72,18,25,149)
#---------------------------------
# Export results to a spreadsheet:
#---------------------------------
# Assign diagnosis to an object
mytest <- diagnosis(364,22,17,211,print=FALSE)
# Assign the summary to an object
mt.sum <- summary(mytest)
# Export to a spreadsheet using csv format - could also work as text with OdfWeave export.
# write.csv(mt.sum,'MytestResults.csv',quote = F,na='')
# Draw a nomogram from a test
plot(mytest)
rm(mydata,mytable,mytest,mt.sum)
}
\keyword{univar}
\keyword{htest}

DiagnosisMed/man/ROC.Rd

\name{ROC}
\alias{ROC}
\alias{print.ROC}
\alias{plot.ROC}
\title{Draw a ROC curve, estimate good cut-offs and compute validity measures for each cut-off}
\description{
Draws a non-parametric (empirical) ROC - receiver operating characteristic - curve and computes test sensitivity, specificity, predictive values and likelihood ratios (and respective confidence limits) for each decision threshold. Estimates good decision thresholds by a variety of methods.}
\usage{
ROC(gold, test, CL = 0.95, Cost = 1, Prevalence = 0, Plot = TRUE,
    Plot.point = "Min.ROC.Dist", p.cex = 1, Full = FALSE, Print = TRUE)
\method{plot}{ROC}(x, Plot.point = "Min.ROC.Dist", cex.sub = .85, p.cex = 1, ...)
\method{print}{ROC}(x, Full = FALSE, ...)
}
\arguments{
  \item{gold}{The reference standard. A column in a data frame indicating the classification by the reference test.
The reference standard must have two levels: it must be coded either as 0 - without the target condition - and 1 - with the target condition; or it could be coded \link[base]{as.factor} with the words "negative" - without the target condition - and "positive" - with the target condition.}
  \item{test}{The index test or test under evaluation. A column in a data frame or vector indicating the test results in a continuous scale. It should also work with discrete or ordinal scales.}
  \item{CL}{Confidence limit. The limits of the confidence interval. Must be coded as a number in a range from 0 to 1. Default value is 0.95.}
  \item{Cost}{Cost = cost(FN)/cost(FP). It will be used to estimate a good cut-off through the MCT (misclassification cost term): (1-prevalence)*(1-Sp)+Cost*prevalence*(1-Se). It is a value in a range from 0 to infinity. It could be a financial cost or a health outcome with the perception that FN are more undesirable than FP (or the other way around). Cost = 1 means FN and FP have even costs. Cost = 0.9 means FP are 10 percent more costly. Cost = 0.769 means that FP are 30 percent more costly. Cost = 0.555 means that FP are 80 percent more costly. Cost = 0.3 means that FP are 3 times more costly. Cost = 0.2 means that FP are 5 times more costly. Also, it can be more easily inserted as any ratio such as 1/2.5 or 1/4.}
  \item{Prevalence}{Prevalence of the disease in the population in which the test will be performed. It must be a value from 0 to 1. If left 0 (the default value), it will be replaced by the disease prevalence in the sample. This value will be used in the MCT and Efficiency formulas to estimate good cut-offs.}
  \item{Plot}{If FALSE, the ROC curve plot will not be displayed. Default is TRUE.
If the default graphics parameters from \link[graphics]{par} are not satisfactory, the suggestion is to assign the output to an object and set options through \command{plot}.}
  \item{Plot.point}{The method of best cut-off estimation which will be displayed on the ROC curve as a dot. Default is "Min.ROC.Dist". Possible options are:
"None" - only the AUC in the legend will appear;
"Max.Accuracy" - the cut-off which maximizes the accuracy;
"Max.DOR" - the cut-off which maximizes the diagnostic odds ratio;
"Error.rate" - the cut-off which minimizes the error rate;
"Max.Accuracy.area" - the cut-off which maximizes the accuracy area;
"Max.Sens+Spec" - the cut-off which maximizes the sum of sensitivity and specificity;
"Max.Youden" - the cut-off which maximizes the Youden index;
"Se=Sp" - the cut-off at which sensitivity is equal to specificity;
"Min.ROC.Dist" - the cut-off which minimizes the distance between the curve and the upper left corner of the graph;
"Max.Efficiency" - the cut-off which maximizes the efficiency;
"Min.MCT" - the cut-off which minimizes the misclassification cost term.
}
  \item{cex.sub}{The magnification to be used for sub-titles relative to the current setting of cex. See \link[graphics]{par}.}
  \item{p.cex}{Symbol expansion - a numerical vector - passed to points. See \link[graphics]{points}.}
  \item{Print}{If FALSE, no results (detailed below in the value section) will be displayed in the output window. Default is TRUE.}
  \item{Full}{If TRUE, a table with sensitivity, specificity, predictive values and likelihood ratios (and respective confidence limits) for each decision threshold will be displayed. Default is FALSE.}
  \item{x}{For the \command{plot} and \command{print} functions, x is an object storing \command{ROC} function output.}
  \item{...}{Other plot or print parameters from \link[graphics]{plot.default}.}
}
\details{
Test results matching the cut-off values will be considered a positive test.
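Under that dichotomization rule, the empirical ROC points and the trapezoidal AUC can be sketched in a few lines. This is an illustrative sketch, not the package's own implementation; gold and test below are small made-up vectors mirroring the function's inputs:

```r
# Empirical sensitivity/specificity at each observed cut-off and the
# trapezoidal AUC (equal to the Mann-Whitney statistic for these data).
gold <- c(0, 0, 0, 0, 1, 1, 1, 1)   # reference standard (0/1)
test <- c(1, 2, 3, 4, 3, 4, 5, 6)   # continuous index test
cuts <- sort(unique(test))
Se <- sapply(cuts, function(k) mean(test[gold == 1] >= k))  # >= cut-off is positive
Sp <- sapply(cuts, function(k) mean(test[gold == 0] <  k))
# Anchor the curve at (0,0) and (1,1) in (1 - Sp, Se) space
fpr <- c(1, 1 - Sp, 0)
tpr <- c(1, Se, 0)
AUC <- sum((fpr[-length(fpr)] - fpr[-1]) * (tpr[-1] + tpr[-length(tpr)]) / 2)
AUC  # 0.875 for this toy sample
```

For ties between the groups (here the values 3 and 4), the trapezoid between adjacent points contributes half credit, which is exactly the tie handling of the Mann-Whitney statistic.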
\command{ROC} assumes that subjects with higher values of the test are with the target condition and those with lower values are without the target condition. Tests that behave like glucose (middle values are supposed to be normal and extreme values are supposed to be abnormal) and immunofluorescence (lower values - higher dilutions - are supposed to be abnormal) will not be correctly analyzed. In the latter case, multiplying the test results by -1 or some other transformation before analysis could make it work.
The AUC (area under the ROC curve) is estimated by the trapezoidal method (also known as the Mann-Whitney statistic), and its confidence interval is estimated by the DeLong method. The AUC confidence limits should be used only to compare the AUC with the null value for the AUC, which is 0.5, and not to compare the AUCs of different tests. The validity measures such as Sensitivity, Specificity and Likelihood ratios and their confidence limits are estimated as in the \code{\link{diagnosis}} function. If the \command{ROC} output is assigned to an object (see example), test results can easily be exported to a spreadsheet, and other graphics that might be of interest can easily be drawn.

Diagnostic odds ratio: \eqn{DOR = (TP*TN)/(FN*FP)}; the same as: \eqn{DOR = PLR/NLR}

Accuracy area: \eqn{AA = (TP*TN)/((TP+FN)*(FP+TN))}

Youden index: \eqn{Y = Se+Sp-1}; the same as: \eqn{Y = Se-FPR}

Minimum ROC distance: \eqn{m ROC Dis = (Sp-1)^2+(1-Se)^2}

Efficiency: \eqn{Ef = Se*prevalence+(1-prevalence)*Sp}

Misclassification Cost Term: \eqn{MCT = (1-prevalence)*(1-Sp)+(cost(FN)/cost(FP))*prevalence*(1-Se)}
}
\value{
  \item{pop.prevalence}{The disease prevalence informed by the user.
If not informed, it will be the same as the sample prevalence.}
  \item{sample.prevalence}{The disease prevalence in the sample.}
  \item{sample.size}{The number of subjects analyzed.}
  \item{test.summary}{A table showing the quintiles, mean and standard deviation of the overall test results, of the test results of those with the target condition, and of those without the target condition.}
  \item{AUC.summary}{A table showing the AUC estimated by the trapezoidal method, with its confidence limits estimated by the DeLong method.}
  \item{test.best.cutoff}{A table showing the best cut-offs estimated by the methods described above, with their corresponding sensitivity, specificity and positive likelihood ratio (and their confidence limits).}
}
\references{
Knottnerus JA. The Evidence Base of Clinical Diagnosis; BMJ Books, 2002.

Xiao-Hua Zhou, Nancy A. Obuchowski, Donna K. McClish. Statistical Methods in Diagnostic Medicine; Wiley, 2002.

Simel D, Samsa G, Matchar D (1991). Likelihood ratios with confidence: Sample size estimation for diagnostic test studies. Journal of Clinical Epidemiology 44: 763 - 770.

Cantor SB, Sun CC, Tortolero-Luna G, Richards-Kortum R, Follen M (1999). A comparison of C/B ratios from studies using receiver operating characteristic curve analysis. Journal of Clinical Epidemiology 52(9): 885 - 892.

Greiner M (1996). Two-graph receiver operating characteristic (TG-ROC): update version supports optimisation of cut-off values that minimize overall misclassification costs. J.Immunol.Methods 191: 93 - 94.

Gengsheng Qin, Lejla Hotilovac. Comparison of non-parametric confidence intervals for the area under the ROC curve of a continuous-scale diagnostic test. Statistical Methods in Medical Research 2008; 17: 207 - 221.
}
\author{Pedro Brasil; Bernardo Rangel Tura - \email{diagnosismed-list@lists.r-forge.r-project.org}}
\note{Bug reports, malfunctioning, or suggestions for further improvements or contributions can be sent, preferentially, through the DiagnosisMed email list, or the R-Forge website \url{https://r-forge.r-project.org/projects/diagnosismed/}.
}
\seealso{\link[epitools]{binom.conf.int}, \code{\link{diagnosis}}, \code{\link{interact.ROC}}, \code{\link{TGROC}}, \link[ROCR]{performance}, \link[nonbinROC]{contROC}}
\examples{
# Loading a dataset
data(rocdata)
# Attaching the data set.
attach(rocdata)
# A little description of the data set to check if it is ok!
str(rocdata)
# Running the ROC analysis with the full table option
# and storing the ROC object into 'x', which holds the tables used to draw the graphs below.
x <- DiagnosisMed::ROC(Gold,test2,Full=TRUE)
# There is no need to prefix the function with the package name if the package is loaded first in the search path!
# Adding a title to the graph.
title(main="ROC graph")
# Some graphs that may be of interest. Validity measures at each test value.
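The indices plotted in the panels below follow the formulas given in the Details section. A small numerical sketch with made-up values (se, sp, prev and cost are hypothetical inputs, not objects returned by ROC):

```r
# Cut-off selection indices for a single hypothetical threshold.
se <- 0.9; sp <- 0.8; prev <- 0.3; cost <- 1
youden     <- se + sp - 1                                  # Max.Youden
roc.dist   <- (sp - 1)^2 + (1 - se)^2                      # Min.ROC.Dist (squared distance)
efficiency <- se * prev + (1 - prev) * sp                  # Max.Efficiency
mct        <- (1 - prev) * (1 - sp) + cost * prev * (1 - se)  # Min.MCT
c(youden = youden, roc.dist = roc.dist, efficiency = efficiency, mct = mct)
```

ROC evaluates these quantities at every observed cut-off and reports the optimum of each; the sketch only shows the arithmetic behind one threshold.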
# Setting the plot window to hold nine graphs showing some validity measures
# and some indexes used to choose good cut-offs
par(mfrow=c(3,3))
plot(x$test.diag.table$test.values,x$test.diag.table$DOR,type="l",ylab="DOR",xlab="Test values")
title(main="Test values x DOR")
plot(x$test.diag.table$test.values,x$test.diag.table$MCT,type="l",ylab="MCT",xlab="Test values")
title(main="Test values x MCT")
plot(x$test.diag.table$test.values,x$test.diag.table$Efficiency,type="l",ylab="Efficiency",xlab="Test values")
title(main="Test values x Efficiency")
plot(x$test.diag.table$test.values,x$test.diag.table$Youden,type="l",ylab="Youden index",xlab="Test values")
title(main="Test values x Youden index")
plot(x$test.diag.table$test.values,x$test.diag.table$PLR,type="l",ylim=c(0,49),ylab="Likelihood ratios",xlab="Test values")
lines(x$test.diag.table$test.values,x$test.diag.table$NLR,type="l",lty=2)
legend("right",lty=c(1,2),legend=c("PLR","NLR"),bty = 'n')
title(main="Test values x Likelihood ratios")
plot(x$test.diag.table$test.values,x$test.diag.table$PPV,type="l",ylab="Predictive values",xlab="Test values")
lines(x$test.diag.table$test.values,x$test.diag.table$NPV,type="l",lty=2)
legend("bottomright",lty=c(1,2),legend=c("PPV","NPV"),bty = 'n')
title(main="Test values x Predictive values")
plot(x$test.diag.table$test.values,x$test.diag.table$Accuracy.area,type="l",ylab="Accuracy area",xlab="Test values")
title(main="Test values x Accuracy area")
plot(x$test.diag.table$test.values,x$test.diag.table$MinRocDist,type="l",ylab="ROC distance",xlab="Test values")
title(main="Test values x ROC distance")
plot(x$test.diag.table$test.values,x$test.diag.table$Accuracy,type="l",ylab="Error rate & Accuracy",xlab="Test values",ylim=c(0,1))
lines(x$test.diag.table$test.values,x$test.diag.table$Error.rate,type="l",lty=2)
legend("bottomright",lty=c(1,2),legend=c("Accuracy","Error rate"),bty = 'n')
par(mfrow=c(1,1))
# Also, results from the ROC analysis
# could easily be exported to a spreadsheet file or to an odt file by OdfWeave.
# Exporting the full table:
# write.csv(x$test.diag.table[,-c(2:5,24:34)],'MytestFulltable.csv')
# Exporting the AUC summary:
# write.csv(x$AUC.summary,'MytestAUC.csv')
# Exporting the test summary:
# write.csv(x$test.summary,'MytestSummary.csv')
# Exporting the test best cut-offs table:
# write.csv(x$test.best.cutoff,'MytestBestcutof.csv')
rm(rocdata,x)
} \keyword{iplot} \keyword{univar} \keyword{htest} DiagnosisMed/man/tutorial.rd0000644000175100001440000000070111073666231015606 0ustar hornikusers\name{tutorial} \docType{data} \alias{tutorial} \title{Example data set for continuous test results} \description{ This data set gives the results of three serological tests - continuous scale - and the corresponding reference test classification. It also gives age as four ordinal groups. } \usage{tutorial} \format{A data.frame containing 5 variables and 170 observations.} \source{Not informed} \references{ Not informed } \keyword{datasets}DiagnosisMed/man/LRgrgaph.Rd0000644000175100001440000001224011343034560015404 0ustar hornikusers\name{LRgraph} \alias{LRgraph} \title{Comparing diagnostic tests: a simple graphic using likelihood ratios.} \description{ LRgraph graphically compares two or more diagnostic tests with binary results (each of them with the first test) through their likelihood ratios, based on the rationale that the predictive ability of a test is a more interesting characteristic than sensitivity and/or specificity. It is possible to see through the graph that tests with smaller sensitivity or specificity may have superior predictive ability, that is, the prediction ability may increase with a small sensitivity/specificity trade-off. } \usage{ LRgraph(tests, lwd = 2, lty = 1, cex = 1, leg.cex = 1.5, pt.cex = 2,...) } \arguments{ \item{tests}{An object composed of two or more tests.
This object should be created by binding two or more objects created by the diagnosis function, as in 'cbind(mytest1,mytest2)'. The user may insert as many tests as desired. See below.} \item{lwd}{Line width. See \link[graphics]{par},\link[graphics]{points},\link[graphics]{legend}} \item{lty}{Line type. See \link[graphics]{par}} \item{cex}{Symbols and text size. See \link[graphics]{par},\link[graphics]{points}} \item{leg.cex}{Legend text size; this will replace the cex option in the legend. See \link[graphics]{legend}} \item{pt.cex}{Size of the symbols in the legend. See \link[graphics]{legend}} \item{...}{Other graph parameters. See \link[graphics]{plot.default}} } \details{When a diagnostic test has both sensitivity and specificity higher than a competing test, it is easy to see that the former is superior to the latter. However, sometimes a test may have superior sensitivity and inferior specificity (or the other way around). In this case, a good decision may be toward the test that has the better prediction ability. The graph visually helps the user to see and compare these abilities. The graph is very similar to the ROC graph: the vertical and horizontal axes have the same length as in a ROC graph. However, the diagnostic tests are represented as dots instead of curves. The solid line passing through (0,0) is the positive likelihood ratio line and the solid line passing through (1,1) is the negative likelihood ratio line. The positive and negative likelihood ratios are numerically equal to the slopes of these solid lines. The solid lines split the graph into four areas (run the example). Also, there are dashed lines representing the sensitivity and specificity of the first test plotted. One may see that there are areas where a test may have superior sensitivity (or specificity) and yet its dot lies below the likelihood solid line. That is because the sensitivity/specificity trade-off is not reasonable, leaving the test with less predictive ability.
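As a rough illustration of the geometry described above (the Se/Sp values below are invented for the sketch; LRgraph derives these quantities internally from the first test), the two solid lines can be reconstructed from the first test's likelihood ratios:

```r
# Illustrative sketch only -- Se and Sp are hypothetical values.
Se <- 0.80; Sp <- 0.80
PLR <- Se / (1 - Sp)         # positive likelihood ratio = slope of the line through (0,0)
NLR <- (1 - Se) / Sp         # negative likelihood ratio = slope of the line through (1,1)
plot(0:1, 0:1, type = "n", xlab = "1 - Specificity", ylab = "Sensitivity")
abline(a = 0, b = PLR)       # LR+ line
abline(a = 1 - NLR, b = NLR) # LR- line: intercept chosen so it passes through (1,1)
points(1 - Sp, Se, pch = 19) # the first test itself
```

Any competing test plotted above the LR+ line has a higher positive likelihood ratio than the first test; the analogous reading applies to the LR- line.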
} \value{ Returns only a graph, which is divided into four areas by the black solid lines (the likelihood ratios of the first test). The interpretation of the comparisons will depend on which area the competing tests fall in. See and run the example to get an idea of how the interpretation should be done. } \references{Biggerstaff, B.J. Comparing diagnostic tests: a simple graphic using likelihood ratios. Statistics in Medicine. 2000; 19(5):649-663} \author{Pedro Brasil - \email{diagnosismed-list@lists.r-forge.r-project.org}} \note{Bug reports, malfunctioning, or suggestions for further improvements or contributions can be sent, preferentially, through the DiagnosisMed email list, or the R-Forge website \url{https://r-forge.r-project.org/projects/diagnosismed/}. } \seealso{\code{\link{diagnosis}}} \examples{
# Making tests with the diagnosis function with different performances for comparison.
# mytest5 is the one with which all others will be compared.
mytest5<-diagnosis(80,20,20,80,print=FALSE)
# mytest1 has higher sensitivity and specificity.
# mytest1 is overall superior compared to mytest5.
mytest1<-diagnosis(90,10,10,90,print=FALSE)
# mytest2 has lower sensitivity but higher specificity.
# mytest2 is better at identifying the presence of the target condition compared to mytest5.
mytest2<-diagnosis(72,28,3,97,print=FALSE)
# mytest3 has higher sensitivity but lower specificity.
# mytest3 is better at identifying the absence of the target condition compared to mytest5.
mytest3<-diagnosis(92,8,37,63,print=FALSE)
# mytest41 has lower sensitivity and specificity.
# mytest41 is overall inferior compared to mytest5.
mytest41<-diagnosis(72,28,35,65,print=FALSE)
# mytest42 has lower specificity but higher sensitivity.
# Nevertheless, mytest42 is still overall inferior compared to mytest5.
mytest42<-diagnosis(82,18,42,58,print=FALSE)
# But that becomes clear only after plotting the tests.
LRgraph(cbind(mytest5,mytest1,mytest2,mytest3,mytest41,mytest42),cex=2.5)
# The texts below are not part of the function but help to understand the areas
text(x=.5, y =.5, labels ="Area 4: Overall inferior", col="lightgray",cex=.8)
text(x=.5, y =1, labels ="Area 2: Absence", col="lightgray",cex=.8)
text(x=.07, y =.68, labels ="Area 3: Presence", col="lightgray",cex=.8)
text(x=.1, y =1, labels ="Area 1: Overall superior", col="lightgray",cex=.8)
rm(mytest1)
rm(mytest2)
rm(mytest3)
rm(mytest41)
rm(mytest42)
rm(mytest5)
} \keyword{iplot}DiagnosisMed/man/interact.ROC.Rd0000644000175100001440000000663511216333762016151 0ustar hornikusers\name{interact.ROC} \alias{interact.ROC} \title{Interactively draw a ROC curve with your data} \description{ Draws a ROC curve with the user's data, interactively: by sliding a button you can watch how changes in the cut-off correlate with changes in Sensitivity and Specificity while the ROC curve itself is built. } \usage{ interact.ROC(gold, test) } \arguments{ \item{gold}{A column in a data frame or a vector indicating the classification by the reference test. Must be coded either as 0 - without the target condition - or 1 - with the target condition.} \item{test}{A column in a data frame or a vector indicating the results of the test under study (index test). Must be \link[base]{numeric}.} } \details{ interact.ROC is a call to the roc.demo function in the TeachingDemos package. The difference is that interact.ROC allows the input data to be laid out as is usual in diagnostic studies: one column with the test results and another with the reference standard results. Inside this function the "ask" option - which controls the "next" button to see the next graph - is turned off with par(ask=FALSE). Also, other options must be set for this function to work properly. Type "options()": the options ask and device.ask.default should be set to FALSE, as they usually are by default. To turn it on again later, type par(ask=TRUE).
The test must follow the rationale that higher values of the index test belong to those with the target disease and lower values to those without the target disease. If this is not the case, the suggestion is to transform the test results, multiplying them by -1, before running interact.ROC. } \value{ interact.ROC generates two graphs in the same window: the upper graph is a ROC graph (Sensitivity on the vertical axis and 1-Specificity on the horizontal axis); the lower graph is a density plot (density on the vertical axis and the test cut-off (or threshold) on the horizontal axis). With a sliding button it is possible to interact and see how Sensitivity and Specificity change while changing the cut-off. In the upper graph the cut-off is represented by the different dots, and the purple line represents the distance to the "optimal" threshold. In the lower graph, the red line and dashes represent the density and the test results from those without the target condition, respectively, while the blue ones represent those with the target condition. If the dashes are at the bottom of the lower graph, the subjects are classified as without the target condition; if at the top, as with the target condition. The green vertical line represents the cut-off and changes with the sliding button. The cut-off itself can be seen right above the sliding button, and the respective sensitivity and specificity at the bottom of the graph window. } \references{ JA Knottnerus, The Evidence Base of Clinical Diagnosis; BMJ Books, 2002 } \author{Pedro Brasil - \email{diagnosismed-list@lists.r-forge.r-project.org}} \note{Bug reports, malfunctioning, or suggestions for further improvements or contributions can be sent, preferentially, through the DiagnosisMed email list, or the R-Forge website \url{https://r-forge.r-project.org/projects/diagnosismed/}.
} \seealso{\code{\link{diagnosis}},\code{\link{ROC}},\code{\link{TGROC}},\link[ROCR]{performance},\link[TeachingDemos]{roc.demo},\link[nonbinROC]{contROC}} \examples{
data(rocdata)
attach(rocdata)
interact.ROC(Gold,test2)
rm(rocdata)
} \keyword{dynamic} \keyword{iplot} DiagnosisMed/man/rocdata.Rd0000644000175100001440000000067111073666231015326 0ustar hornikusers\name{rocdata} \docType{data} \alias{rocdata} \title{Example data set for continuous test results} \description{ This data set gives the results of two serological tests - continuous scale - and the corresponding reference test classification - 0 for negative and 1 for positive. } \usage{rocdata} \format{A data.frame containing 3 variables and 148 observations.} \source{Not informed} \references{ Not informed } \keyword{datasets}DiagnosisMed/man/TGROC.Rd0000644000175100001440000003030311343034560014560 0ustar hornikusers\name{TGROC} \alias{TGROC} \alias{print.TGROC} \alias{plot.TGROC} \title{TG-ROC - Two-Graph Receiver Operating Characteristic} \description{ TGROC draws a graph of sensitivity and specificity over the variations of a diagnostic test scale. Also, it demonstrates which cut-offs (or decision thresholds) may trichotomize the test results into a range where the test is good at identifying those with the target condition, an inconclusive range, and a range where the test is good at identifying those without the target condition, according to the researcher's tolerance. Also, it estimates and graphically demonstrates good cut-offs by different methods. TGROC estimates non-parametric statistics and uses the AMORE package to simulate the parametric curve and values with a neural network.
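To make the trichotomization concrete, a minimal sketch (the limits and test values below are invented for illustration, not TGROC output; TGROC estimates the inconclusive limits from the data):

```r
# Hypothetical inconclusive limits and test results.
lower.limit <- 3
upper.limit <- 5
test <- c(1.2, 3.8, 6.4)
# Results at or below the lower limit read as negative, above the upper
# limit as positive, and anything in between as inconclusive.
cut(test, breaks = c(-Inf, lower.limit, upper.limit, Inf),
    labels = c("negative", "inconclusive", "positive"))
```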
} \usage{ TGROC(gold, test, Cost=1, CL=0.95, Inconclusive=0.95, Prevalence=0, t.max=NULL, t.min=NULL, precision=.0001, n.neurons=c(1,5,1), learning.rate.global=1e-2, momentum.global=0.3, error.criterium="LMS", Stao=NA, hidden.layer="sigmoid", output.layer="sigmoid", method="ADAPTgdwm", report=FALSE, show.step=5000, n.shows=1, Plot="Both", Plot.inc.range=TRUE, Plot.Cl=FALSE, Plot.cutoff="None", cex=0.5, cex.sub=0.85, Print=TRUE) \method{plot}{TGROC}(x,..., Plot="Both", Plot.inc.range=TRUE, Plot.Cl=FALSE, Plot.cutoff="None", cex=0.5, cex.sub=0.85) } \arguments{ \item{gold}{The reference standard. A column in a data frame or a vector indicating the classification by the reference test. The reference standard must have two levels: it must be coded either as 0 - without the target condition - or 1 - with the target condition; or it could be coded \link[base]{as.factor} with the words "negative" - without the target condition - and "positive" - with the target condition.} \item{test}{The index test or test under evaluation. A column in a dataset or a vector indicating the test results on a continuous scale. It may also work with a discrete ordinal scale.} \item{Cost}{Cost = cost(FN)/cost(FP). The MCT (misclassification cost term) will be used to estimate a good cut-off. It is a value in a range from 0 to infinity. It could be a financial cost or a health outcome with the perception that FN are more undesirable than FP (or the other way around). This value enters the MCT formula: MCT = (1-prevalence)*(1-Sp) + Cost*prevalence*(1-Se). Cost = 1 means FN and FP have equal cost. Cost = 0.9 means FP are 10 percent more costly. Cost = 0.769 means that FP are 30 percent more costly. Cost = 0.555 means that FP are 80 percent more costly. Cost = 0.3 means that FP are 3 times more costly. Cost = 0.2 means that FP are 5 times more costly. It can also be inserted as a ratio such as 1/2.5 or 1/4.} \item{CL}{Confidence limit. The limits of the confidence interval. Must be coded as a number in the range from 0 to 1.
Default value is 0.95.} \item{Inconclusive}{A value that ranges from 0 to 1, identifying the test range where the performance of the test is not acceptable and thus considered inconclusive. It represents the researcher's tolerance for how good the test should be. If it is set to 0.95 (the default value), test results with less than 0.95 sensitivity and specificity will be in the inconclusive range.} \item{Prevalence}{Prevalence of the disease in the population in which the test will be performed. It must be a value from 0 to 1. If left at 0 (the default value), it will be replaced by the disease prevalence in the sample. This value is used in the MCT and Efficiency formulas to estimate good cut-offs.} \item{t.max}{Test upper range limit, set as numeric. It will be used to simulate the parametric curve. If left NULL, TGROC will assume that the sample maximum value is the upper limit of the test.} \item{t.min}{Test lower range limit, set as numeric. It will be used to simulate the parametric curve. If left NULL, TGROC will assume that the sample minimum value is the lower limit of the test.} \item{precision}{The test precision is the unit of variation of the test and should be set as numeric. It will be used to simulate the parametric curve, expressing how many estimations the network will do between each test unit. It is advisable for the precision to be between 1/2 and 1/10 of the test unit. The higher the precision, the smoother the parametric curve will look. However, if too high a precision is set, the function may return an error. This may also depend on the number of observations in the dataset.} \item{n.neurons}{Numeric vector containing the number of neurons of each layer. See \link[AMORE]{newff}.} \item{learning.rate.global}{Learning rate at which every neuron is trained. See \link[AMORE]{newff}.} \item{momentum.global}{Momentum for every neuron.
See \link[AMORE]{newff}.} \item{error.criterium}{Criterion used to measure the proximity of the neural network prediction to its target. See \link[AMORE]{newff}.} \item{Stao}{Stao parameter for the TAO error criterion. See \link[AMORE]{newff}.} \item{hidden.layer}{Activation function of the hidden layer neurons. See \link[AMORE]{newff}.} \item{output.layer}{Activation function of the output layer neurons. See \link[AMORE]{newff}.} \item{method}{Preferred training method. See \link[AMORE]{newff}.} \item{report}{Logical value indicating whether the training function should keep quiet. See \link[AMORE]{train}.} \item{show.step}{Number of epochs to train non-stop until the training function is allowed to report. See \link[AMORE]{train}.} \item{n.shows}{Number of times to report (if report is TRUE). See \link[AMORE]{train}.} \item{Plot}{Possible values are: "None", "Both", "Parametric" and "Non-parametric". TGROC may plot the parametric curve, the non-parametric curve, both, or no plot at all, depending on this option. The default is to plot both curves.} \item{Plot.inc.range}{Plot inconclusive range. If TRUE, the lines representing the limits of the inconclusive range will be displayed. Default is TRUE. The parametric inconclusive range will be displayed if Plot = "Parametric" or Plot = "Both", and the non-parametric one otherwise. If Plot = "None" then Plot.inc.range is not considered.} \item{Plot.Cl}{Plot confidence limits. If TRUE, confidence bands for the sensitivity and specificity curves will be displayed. If Plot = "Parametric" or Plot = "Both" then parametric bands are displayed, and non-parametric ones otherwise. Default is FALSE. If Plot = "None" then Plot.Cl is not considered.} \item{Plot.cutoff}{A line representing the estimated best cut-off (threshold) will be displayed. If Plot = "None" then Plot.cutoff is not considered. If Plot = "Parametric" or Plot = "Both" then the parametric values are represented, and non-parametric ones otherwise. Default is "None".
Possible values are: "Se=Sp" - the cut-off at which sensitivity is equal to specificity; "Max.Efficiency" - the cut-off which maximizes efficiency; "Min.MCT" - the cut-off which minimizes the misclassification cost term.} \item{cex}{See \link[graphics]{par}. A numerical value giving the amount by which plotting text and symbols should be magnified relative to the default.} \item{cex.sub}{See \link[graphics]{par}. Controls the font size in the subtitle. If Plot = "None" then cex.sub is not considered.} \item{Print}{If FALSE, the statistics estimated by TGROC will not be displayed in the output window. Default is TRUE.} \item{x}{For the plot function, x is an object created by the TGROC function.} \item{...}{Other plot parameters from \link[graphics]{plot.default}} } \details{ There are two main advantages of TG-ROC over ROC analysis: (1) for the uninitiated, it is much easier to understand how sensitivity and specificity change with different cut-offs; (2) because of the graphical display, it is much easier to understand and estimate reasonable inconclusive test ranges. Occasionally the MCT or Efficiency cut-offs may fall outside the inconclusive range. This may happen with extreme values of Cost and population prevalence. If this is the case, perhaps the inconclusive range is not of interest or not applicable. Also, if the test is too good or the inconclusive range tolerance is set too low, then there may be no inconclusive range at all, because sensitivity and specificity may not be below this tolerance at the same time. If this is the case, setting a higher inconclusive tolerance may work. Test results matching the cut-off values will be considered positive tests. TGROC assumes that subjects with higher values of the test are those with the target condition and those with lower values are those without the target condition.
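As a rough sketch of how the non-parametric "Se=Sp" and "Min.MCT" criteria behave (toy simulated data; TGROC's own implementation additionally provides confidence limits and the parametric curve):

```r
# Toy data: 50 subjects without and 50 with the target condition.
set.seed(1)
test <- c(rnorm(50, mean = 2), rnorm(50, mean = 4))
gold <- rep(0:1, each = 50)
cuts <- sort(unique(test))
# Results matching the cut-off count as positive, hence ">=".
Se <- sapply(cuts, function(k) mean(test[gold == 1] >= k))
Sp <- sapply(cuts, function(k) mean(test[gold == 0] <  k))
prev <- mean(gold); Cost <- 1
MCT <- (1 - prev) * (1 - Sp) + Cost * prev * (1 - Se)
cuts[which.min(abs(Se - Sp))]  # "Se=Sp" cut-off
cuts[which.min(MCT)]           # "Min.MCT" cut-off
```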
Tests that behave like glucose (middle values are supposed to be normal and extreme values abnormal) and immunofluorescence (lower values - higher dilutions - are supposed to be abnormal) will not be correctly analyzed. In the latter case, multiplying the test results by -1, or some other transformation, before the analysis could make it work. The validity measures such as sensitivity, specificity and likelihood ratios and their confidence limits are estimated as in the \code{\link{diagnosis}} function. MCT and Efficiency are estimated as in the \code{\link{ROC}} function. Non-parametric confidence bands are estimated with a binomial confidence interval and parametric ones with a normal confidence interval. The parametric curve and validity measures are estimated with a neural network strategy using the AMORE package. Usually neural networks use a subset of the data to estimate weights, a subset to calibrate/validate the weights and a third subset to simulate the function. TGROC uses only the estimation and simulation steps, therefore there is no stopping rule for the neural network parametric estimation. The only way to check the fit of the neural network is to visually compare it with the non-parametric curve.
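The sign-flip transformation suggested above for tests where lower values indicate disease can be sketched as follows (the titre values are invented for illustration):

```r
# Hypothetical dilution titres where LOW values indicate disease.
titre <- c(160, 80, 40, 20, 10)
flipped <- -1 * titre  # now HIGH values indicate disease, as TGROC assumes
# TGROC(gold, flipped) would then rank subjects in the expected direction;
# cut-offs reported on the flipped scale map back with another sign flip.
```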
If the curve looks weird or not good enough, then progressive slight changes in the momentum, learning rate, number of layers and/or other parameters should work.} \value{ \item{Sample size}{Number of subjects analyzed.} \item{Sample prevalence}{Prevalence of the target condition in the sample.} \item{Population prevalence}{Informed prevalence in the population.} \item{Test summary}{A summary of central and dispersion tendencies of the test results.} \item{Non-parametric inconclusive limits}{Estimates of the inconclusive limits of the test and their corresponding validity measures.} \item{Non-parametric best cut-offs}{The cut-offs estimated by different methods and their corresponding validity measures.} \item{Parametric inconclusive limits}{Estimates of the inconclusive limits of the test and their corresponding validity measures from the parametric simulation.} \item{Parametric best cut-off}{The cut-offs estimated by different methods and their corresponding validity measures from the parametric simulation.} } \references{Greiner, M. (1996). Two-graph receiver operating characteristic (TG-ROC): update version supports optimization of cut-off values that minimize overall misclassification costs. J. Immunol. Methods 191:93-94. M. Greiner (1995). Two-graph receiver operating characteristic (TG-ROC): a Microsoft-EXCEL template for the selection of cut-off values in diagnostic tests. Journal of Immunological Methods 185(1):145-146. M. Greiner, D. Sohr, P. Gobel (1995). A modified ROC analysis for the selection of cut-off values and the definition of intermediate results of serodiagnostic tests. Journal of Immunological Methods 185(1):123-132. } \author{Pedro Brasil - \email{diagnosismed-list@lists.r-forge.r-project.org}} \note{Bug reports, malfunctioning, or suggestions for further improvements or contributions can be sent, preferentially, through the DiagnosisMed email list, or the R-Forge website \url{https://r-forge.r-project.org/projects/diagnosismed/}.
} \seealso{\code{\link{interact.ROC}},\code{\link{ROC}},\code{\link{diagnosis}},\link[ROCR]{performance},\link[epitools]{binom.conf.int},\link[nonbinROC]{contROC}} \examples{
# Loading a dataset.
data(tutorial)
# Attaching the dataset.
attach(tutorial)
# Running the analysis
TGROC(gold=Gold,test=Test_B)
rm(tutorial)
} \keyword{iplot} \keyword{univar} \keyword{htest} DiagnosisMed/DESCRIPTION0000644000175100001440000000217411345524260014352 0ustar hornikusersPackage: DiagnosisMed Version: 0.2.3 Date: 2010-03-04 Author: Pedro Brasil Maintainer: Pedro Brasil Depends: R (>= 2.7.2), epitools, TeachingDemos, tcltk, AMORE, utils Title: Diagnostic test accuracy evaluation for medical professionals Description: DiagnosisMed is a package to analyze data from diagnostic test accuracy studies evaluating health conditions. It is built to be used by health professionals. It can estimate sensitivity and specificity from categorical and continuous test results, including some evaluations of indeterminate results, compare different categorical tests, and estimate reasonable cut-offs of tests and display them in a way commonly used by health professionals. No graphical interface is available yet. Partners are most welcome.
License: GPL (>= 2) URL: http://r-forge.r-project.org/projects/diagnosismed/ Repository: CRAN Repository/R-Forge/Project: diagnosismed Repository/R-Forge/Revision: 19 Date/Publication: 2010-03-09 19:59:44 Packaged: 2010-03-05 21:34:05 UTC; rforge DiagnosisMed/data/0000755000175100001440000000000011344256710013552 5ustar hornikusersDiagnosisMed/data/rocdata.rda0000644000175100001440000000313411073666231015662 0ustar hornikusers[binary data omitted]DiagnosisMed/data/tutorial.rda0000644000175100001440000003076111073666231016116 0ustar hornikusers[binary data omitted]
DiagnosisMed/R/0000755000175100001440000000000011344256710013042 5ustar hornikusers
DiagnosisMed/R/print.ROC.r0000644000175100001440000000167411343034560015007 0ustar hornikusers
print.ROC<-function(x,Full=FALSE,...){
  if (Full==TRUE){
    View(x$test.diag.table[,-c(2:5,24:34)])}
  cat(" Sample size:",x$sample.size,"\n")
  cat(" Sample prevalence:",round(x$sample.prevalence,digits = 4),"\n")
  cat("Population prevalence:",round(x$pop.prevalence,digits = 4)," - same as sample prevalence if not informed\n")
  cat("Informed Cost - cost(FN)/cost(FP):",x$cost,"\n")
  cat("\n\n")
  cat("Non-parametric AUC (trapezoidal method) and its",x$CL,"confidence limits (DeLong method)\n")
  cat(" Area under ROC curve:",paste(round(x$AUC.summary[2],digits =
4)),"[",paste(round(x$AUC.summary[1],digits = 4)),"-",
      paste(round(x$AUC.summary[3],digits = 4)),"]\n")
  cat("\n\n")
  cat("Test summary-----------------------------------------------------\n")
  print(x$test.summary)
  cat("\n\n")
  cat("Best cut-off estimations with",x$CL,"confidence limits -----------\n")
  print(x$test.best.cutoff)
}
DiagnosisMed/R/plot.ROC.r0000644000175100001440000001570711343034560014633 0ustar hornikusers
# the plot commands
plot.ROC<-function(x,Plot.point="Min.ROC.Dist",cex.sub=.85,p.cex=1,...){
  if(Plot.point!="None" & Plot.point!="Min.ROC.Dist" & Plot.point!="Max.Accuracy" &
     Plot.point!="Max.DOR" & Plot.point!="Error.rate" & Plot.point!="Max.Accuracy.area" &
     Plot.point!="Max.Sens+Spec" & Plot.point!="Max.Youden" & Plot.point!="Se=Sp" &
     Plot.point!="Max.Efficiency" & Plot.point!="Min.MCT"){
    stop("The Plot.point option is not correctly set! Type '?ROC' to check possible options.")}
  plot(1-x$test.diag.table$Specificity,x$test.diag.table$Sensitivity,type="l",
       col=1,xlab="1-Specificity",ylab="Sensitivity",xlim=c(0,1),ylim=c(0,1))
  grid()
  segments(0,0,1,1,col="lightgray")
  if(Plot.point=="None"){
    legend("bottomright",legend=(
      paste("AUC:",formatC(x$AUC.summary[2],digits=4,format="f"))),bty="n")}
  if(Plot.point=="Max.Accuracy"){
    points(1-x$test.best.cutoff[1,5],x$test.best.cutoff[1,2],col=1,pch=19,cex=p.cex)
    title(sub="Cut-off estimated by maximizing accuracy.",cex.sub=cex.sub)
    legend("bottomright",legend=(c(
      paste("cut off:",formatC(x$test.best.cutoff[1,1],digits=4)),
      paste("Sensitivity:",formatC(x$test.best.cutoff[1,2],digits=4,format="f")),
      paste("Specificity:",formatC(x$test.best.cutoff[1,5],digits=4,format="f")),
      paste("AUC:",formatC(x$AUC.summary[2],digits=4,format="f")))),bty="n")}
  if(Plot.point=="Max.DOR"){
    points(1-x$test.best.cutoff[2,5],x$test.best.cutoff[2,2],col=1,pch=19,cex=p.cex)
    title(sub="Cut-off estimated by maximizing diagnostic odds ratio.",cex.sub=cex.sub)
    legend("bottomright",legend=(c(
      paste("cut off:",formatC(x$test.best.cutoff[2,1],digits=4)),
      paste("Sensitivity:",formatC(x$test.best.cutoff[2,2],digits=4,format="f")),
      paste("Specificity:",formatC(x$test.best.cutoff[2,5],digits=4,format="f")),
      paste("AUC:",formatC(x$AUC.summary[2],digits=4,format="f")))),bty="n")}
  if(Plot.point=="Error.rate"){
    points(1-x$test.best.cutoff[3,5],x$test.best.cutoff[3,2],col=1,pch=19,cex=p.cex)
    title(sub="Cut-off estimated by minimizing error rate.",cex.sub=cex.sub)
    legend("bottomright",legend=(c(
      paste("cut off:",formatC(x$test.best.cutoff[3,1],digits=4)),
      paste("Sensitivity:",formatC(x$test.best.cutoff[3,2],digits=4,format="f")),
      paste("Specificity:",formatC(x$test.best.cutoff[3,5],digits=4,format="f")),
      paste("AUC:",formatC(x$AUC.summary[2],digits=4,format="f")))),bty="n")}
  if(Plot.point=="Max.Accuracy.area"){
    points(1-x$test.best.cutoff[4,5],x$test.best.cutoff[4,2],col=1,pch=19,cex=p.cex)
    title(sub="Cut-off estimated by maximizing the area related to accuracy.",cex.sub=cex.sub)
    legend("bottomright",legend=(c(
      paste("cut off:",formatC(x$test.best.cutoff[4,1],digits=4)),
      paste("Sensitivity:",formatC(x$test.best.cutoff[4,2],digits=4,format="f")),
      paste("Specificity:",formatC(x$test.best.cutoff[4,5],digits=4,format="f")),
      paste("AUC:",formatC(x$AUC.summary[2],digits=4,format="f")))),bty="n")}
  if(Plot.point=="Max.Sens+Spec"){
    points(1-x$test.best.cutoff[5,5],x$test.best.cutoff[5,2],col=1,pch=19,cex=p.cex)
    title(sub="Cut-off value where the sum Se + Sp is maximized.",cex.sub=cex.sub)
    legend("bottomright",legend=(c(
      paste("cut off:",formatC(x$test.best.cutoff[5,1],digits=4)),
      paste("Sensitivity:",formatC(x$test.best.cutoff[5,2],digits=4,format="f")),
      paste("Specificity:",formatC(x$test.best.cutoff[5,5],digits=4,format="f")),
      paste("AUC:",formatC(x$AUC.summary[2],digits=4,format="f")))),bty="n")}
  if(Plot.point=="Max.Youden"){
    points(1-x$test.best.cutoff[6,5],x$test.best.cutoff[6,2],col=1,pch=19,cex=p.cex)
    title(sub="Cut-off estimated by maximizing Youden Index.",cex.sub=cex.sub)
    legend("bottomright",legend=(c(
      paste("cut off:",formatC(x$test.best.cutoff[6,1],digits=4)),
      paste("Sensitivity:",formatC(x$test.best.cutoff[6,2],digits=4,format="f")),
      paste("Specificity:",formatC(x$test.best.cutoff[6,5],digits=4,format="f")),
      paste("AUC:",formatC(x$AUC.summary[2],digits=4,format="f")))),bty="n")}
  if(Plot.point=="Se=Sp"){
    points(1-x$test.best.cutoff[7,5],x$test.best.cutoff[7,2],col=1,pch=19,cex=p.cex)
    title(sub="Cut-off value where Se is the closest to Sp.",cex.sub=cex.sub)
    legend("bottomright",legend=(c(
      paste("cut off:",formatC(x$test.best.cutoff[7,1],digits=4)),
      paste("Sensitivity:",formatC(x$test.best.cutoff[7,2],digits=4,format="f")),
      paste("Specificity:",formatC(x$test.best.cutoff[7,5],digits=4,format="f")),
      paste("AUC:",formatC(x$AUC.summary[2],digits=4,format="f")))),bty="n")}
  if(Plot.point=="Min.ROC.Dist"){
    points(1-x$test.best.cutoff[8,5],x$test.best.cutoff[8,2],col=1,pch=19,cex=p.cex)
    title(sub="Cut-off that minimizes the distance between the curve and upper left corner.",cex.sub=cex.sub)
    legend("bottomright",legend=(c(
      paste("cut off:",formatC(x$test.best.cutoff[8,1],digits=4)),
      paste("Sensitivity:",formatC(x$test.best.cutoff[8,2],digits=4,format="f")),
      paste("Specificity:",formatC(x$test.best.cutoff[8,5],digits=4,format="f")),
      paste("AUC:",formatC(x$AUC.summary[2],digits=4,format="f")))),bty="n")}
  if(Plot.point=="Max.Efficiency"){
    points(1-x$test.best.cutoff[9,5],x$test.best.cutoff[9,2],col=1,pch=19,cex=p.cex)
    title(sub=paste("Cut-off maximizing efficiency: population prevalence =",
      formatC(x$pop.prevalence,digits=2)),cex.sub=cex.sub)
    legend("bottomright",legend=(c(
      paste("cut off:",formatC(x$test.best.cutoff[9,1],digits=4)),
      paste("Sensitivity:",formatC(x$test.best.cutoff[9,2],digits=4,format="f")),
      paste("Specificity:",formatC(x$test.best.cutoff[9,5],digits=4,format="f")),
      paste("AUC:",formatC(x$AUC.summary[2],digits=4,format="f")))),bty="n")}
  if(Plot.point=="Min.MCT")
{points(1-x$test.best.cutoff[10,5],x$test.best.cutoff[10,2],col=1,pch=19,cex=p.cex)
    title(sub=paste("Cut-off minimizing MCT: population prevalence =",
      formatC(x$pop.prevalence,digits=2,format="f"),"; cost(FN)/cost(FP) =",
      formatC(x$cost,digits=2)),cex.sub=cex.sub)
    legend("bottomright",legend=(c(
      paste("cut off:",formatC(x$test.best.cutoff[10,1],digits=4)),
      paste("Sensitivity:",formatC(x$test.best.cutoff[10,2],digits=4,format="f")),
      paste("Specificity:",formatC(x$test.best.cutoff[10,5],digits=4,format="f")),
      paste("AUC:",formatC(x$AUC.summary[2],digits=4,format="f")))),bty="n")}
}
DiagnosisMed/R/plot.diag.r0000644000175100001440000000211411343034560015100 0ustar hornikusers
plot.diag<-function(x,print=FALSE,...){
  # to do - include an error rate curve
  # consider other graphic parameters
  # make the scale and ticks appear only in the superior axis
  # pre-test odds: p/(1-p)
  # post-test probability = pto/(1+pto)
  pre.test<-seq(0,1,by=.01)
  #pre.test.odd<-pre.test/(1-pre.test)
  #post.test.odd<-numeric(100)
  #post.test.odd<-pre.test.odd*x$PLR
  post.test<-numeric(100)
  #post.test<-post.test.odd/(1+post.test.odd)
  post.test<-((pre.test/(1-pre.test))*x$PLR)/(1+((pre.test/(1-pre.test))*x$PLR))
  #error.rate<-numeric(100)
  #ER<-((FN/(FN+TN))*p)+(((FP/(FP+TP))*(TN+FP))
  #error.rate<-((1-x$NPV)*pre.test)+(((1-x$PPV)*(x$n-pre.test))
  Result<-as.data.frame(cbind(pre.test,post.test))
  if(print==TRUE)
    {print(Result)}
  plot(pre.test,post.test,xlab="Pre-test probability (prevalence)",
       ylab="Post-test probability (PPV)",type="l")
  #if(error.rate==T)
  #  lines(error.rate,lty=2,col=2)
  grid()
  axis(3)
  legend("bottomright",legend=paste("Positive likelihood ratio: ",formatC(x$PLR,
    digits=4)),bty='n')
  invisible(Result)
}
DiagnosisMed/R/LRgraph.r0000644000175100001440000000200711343034560014561 0ustar hornikusers
LRgraph <- function (tests, lwd = 2, lty = 1, cex = 1, leg.cex = 1.5, pt.cex = 2, ...){
  plot(1 - tests[[6, 1]], tests[[4, 1]], xlim = c(0, 1), ylim = c(0, 1),
       xlab = "False positive rate", ylab = "True positive rate",
       col = 1, cex = cex, lwd = lwd, lty = lty)
  abline(coef = c(0, ((tests[[4, 1]])/(1 - tests[[6, 1]]))), lwd = lwd)
  abline(coef = c(1 - 1 * ((1 - tests[[4, 1]])/(1 - (1 - tests[[6, 1]]))),
                  (1 - tests[[4, 1]])/(1 - (1 - tests[[6, 1]]))), lwd = lwd)
  abline(v = 1 - tests[[6, 1]], lty = 6, col = "lightgray", lwd = lwd)
  abline(h = tests[[4, 1]], lty = 6, col = "lightgray", lwd = lwd)
  fill.col <- c(1)
  symbol <- c(1)
  for (i in 2:ncol(tests)) {
    points(1 - tests[[6, i]], tests[[4, i]], col = i, pch = i,
           cex = cex, lwd = lwd, lty = lty)
    fill.col <- c(fill.col, i)
    symbol <- c(symbol, i)
  }
  legend("bottomright", legend = colnames(tests), col = fill.col, pch = symbol,
         bty = "n", cex = leg.cex, pt.cex = pt.cex, pt.lwd = lwd)
}
DiagnosisMed/R/diagnosis.r0000644000175100001440000001346411343034560015205 0ustar hornikusers
diagnosis <- function(a,b=NULL,c=NULL,d=NULL,CL=0.95,print=TRUE,plot=FALSE){
  #require(epitools)
  if(is.numeric(a)){
    if(all(length(a)==1 & length(b)==1 & length(c)==1 & length(d)==1 & !is.matrix(a))){
      reference.name <- 'Not informed'
      index.name <- 'Not informed'
      tab<-as.table(cbind(rbind(d,c),rbind(b,a)))
      dimnames(tab)<-list(index.test=c("negative","positive"),
                          reference.standard=c("negative","positive"))
      TN<-d
      FN<-b
      FP<-c
      TP<-a
    }
    if(all(length(a) > 1 & length(b) > 1 & is.null(c) & is.null(d) & !is.matrix(a))){
      if(any(is.na(a),is.na(b))){
        stop('There are NAs either in the index test or the reference standard. Consider removing or imputing them!')}
      if(nlevels(as.factor(a))!=2 | nlevels(as.factor(b))!=2){
        stop('It seems there are more levels than 0 and 1.')
      }
      if(!all(levels(as.factor(a))==c(0,1) & levels(as.factor(b))==c(0,1))){
        stop('Either the index test or the reference test is not correctly coded. 0 and 1 were expected!')
      } else{
        reference.name <- deparse(substitute(a))
        index.name <- deparse(substitute(b))
        tab<-table(b,a,dnn=c(deparse(substitute(b)),deparse(substitute(a))))
        TN<-tab[1,1]
        FN<-tab[1,2]
        FP<-tab[2,1]
        TP<-tab[2,2]
      }
    }
    if(any(is.table(a) | is.matrix(a))){
      if(!all(dim(a)==c(2,2))){
        stop('It seems the input table is not 2x2. Check your table output.')
      } else{
        tab<-a
        reference.name <- names(dimnames(tab)[2])
        index.name <- names(dimnames(tab)[1])
        TN<-tab[1,1]
        FN<-tab[1,2]
        FP<-tab[2,1]
        TP<-tab[2,2]
      }
    }
  }
  if(all(any(is.factor(a) | is.character(a)) & any(is.factor(b) | is.character(b)) & !is.matrix(a))){
    if(any(is.na(a) | is.na(b))){
      stop('There seem to be NAs either in the reference standard or index test. Consider removing or imputing them!')
    }
    if(nlevels(as.factor(a))!=2 | nlevels(as.factor(b))!=2){
      stop('It seems there are more levels than negative/absence and positive/presence.')
    }
    if(!all(levels(as.factor(a))==c("negative","positive") & levels(as.factor(b))==c("negative","positive")) &
       !all(levels(as.factor(a))==c("absence","presence") & levels(as.factor(b))==c("absence","presence"))){
      stop('It seems categories are not correctly coded in either the reference or index test.')
    } else{
      reference.name <- deparse(substitute(a))
      index.name <- deparse(substitute(b))
      tab<-table(b,a,dnn=c(deparse(substitute(b)),deparse(substitute(a))))
      TN<-tab[1,1]
      FN<-tab[1,2]
      FP<-tab[2,1]
      TP<-tab[2,2]
    }
  }
  tabmarg<-addmargins(tab)
  Conf.limit<-CL
  # sample size
  n<-sum(tab)
  # prevalence
  p<-(TP+FN)/n
  # sensitivity and confidence limits
  Se<-TP/(TP+FN)
  Se.cl<-as.numeric(binom.wilson(TP, TP+FN, conf.level = CL)[4:5])
  # specificity and confidence limits
  Sp<-TN/(FP+TN)
  Sp.cl<-as.numeric(binom.wilson(TN, FP+TN, conf.level = CL)[4:5])
  # positive and negative likelihood ratios and confidence limits
  PLR<-Se/(1-Sp)
  # LR confidence limits inspired by epi.tests{epiR}
  PLR.inf.cl<-exp(log(PLR)-(qnorm(1-((1-CL)/2),mean=0,sd=1))*sqrt((1-Se)/(
    (TP+FN)*Sp)+(Sp)/((FP+TN)*(1-Sp))))
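diagnosis() relies on epitools::binom.wilson for the sensitivity and specificity intervals and on a log-scale interval for the positive likelihood ratio. The package itself is R; as a quick standalone cross-check of those two formulas, here is a Python sketch (the counts TP=90, FN=10, FP=20, TN=80 are made-up illustration values, not package data):

```python
import math

Z95 = 1.959964  # two-sided 95% normal quantile, qnorm(0.975)

def wilson_ci(k, n, z=Z95):
    """Wilson score interval for a binomial proportion,
    the interval computed by epitools::binom.wilson above."""
    p = k / n
    center = p + z * z / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (center - half) / denom, (center + half) / denom

def plr_ci(tp, fn, fp, tn, z=Z95):
    """Positive likelihood ratio with the log-scale CI used in diagnosis()."""
    se = tp / (tp + fn)
    sp = tn / (fp + tn)
    plr = se / (1 - sp)
    s = math.sqrt((1 - se) / ((tp + fn) * sp) + sp / ((fp + tn) * (1 - sp)))
    return plr, math.exp(math.log(plr) - z * s), math.exp(math.log(plr) + z * s)

se_lo, se_hi = wilson_ci(90, 100)             # Wilson CI for Se = 90/100
plr, plr_lo, plr_hi = plr_ci(90, 10, 20, 80)  # PLR = 0.9/0.2 = 4.5
```

With these counts Se = 0.90 and Sp = 0.80, so the point estimate of the positive likelihood ratio is 4.5 and the Wilson interval for Se lands near [0.83, 0.94].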
PLR.sup.cl<-exp(log(PLR)+(qnorm(1-((1-CL)/2),mean=0,sd=1))*sqrt((1-Se)/(
    (TP+FN)*Sp)+(Sp)/((FP+TN)*(1-Sp))))
  NLR<-(1-Se)/Sp
  NLR.inf.cl<-exp(log(NLR)-(qnorm(1-((1-CL)/2),mean=0,sd=1))*sqrt((Se)/((TP+
    FN)*(1-Se))+(1-Sp)/((FP+TN)*(Sp))))
  NLR.sup.cl<-exp(log(NLR)+(qnorm(1-((1-CL)/2),mean=0,sd=1))*sqrt((Se)/((TP+
    FN)*(1-Se))+(1-Sp)/((FP+TN)*(Sp))))
  # accuracy and confidence limits
  accu<-(TP+TN)/n
  accu.cl<-as.numeric(binom.wilson(TP+TN, n, conf.level = CL)[4:5])
  # positive and negative predictive values and confidence limits
  PPV<-TP/(TP+FP)
  PPV.cl<-as.numeric(binom.wilson(TP, TP+FP, conf.level = CL)[4:5])
  NPV<-TN/(TN+FN)
  NPV.cl<-as.numeric(binom.wilson(TN, TN+FN, conf.level = CL)[4:5])
  # diagnostic odds ratio and confidence limits
  OR<-oddsratio(tab,conf.level = CL)
  DOR<-OR$measure[2,1]
  #DOR<-(TP*TN)/(FP*FN)
  DOR.inf.cl<-OR$measure[2,2]
  DOR.sup.cl<-OR$measure[2,3]
  rm(OR)
  # error rate and error trade off
  #ER<-((FN/(FN+TN))*p)+(((FP/(FP+TP))*(TN+FP))
  ER<-(FN+FP)/n
  ER.cl<-as.numeric(binom.wilson(FN+FP, n, conf.level = CL)[4:5])
  ET<-(FN/FP)
  # pre-test and post-test odds (to do)
  # area under ROC curve
  AUC<-(Se+Sp)/2
  if(plot==TRUE){
    plot(1-Sp,Se,xlim=c(0,1),ylim=c(0,1))
    segments(0,0,1-Sp,Se,col="red")
    segments(1-Sp,Se,1,1,col="red")
    grid()
  }
  #if(plot==FALSE)
  #  {ROC<-roc.from.table(tab, graph = FALSE)}
  # gives same results as AUC<-(Se+Sp)/2
  Youden<-Se+Sp-1
  # two-sided upper-tail normal quantile for the Youden confidence limits
  Youden.inf.cl<-Youden-qnorm(1-((1-CL)/2))*sqrt(((Se * (1 - Se))/(TP+FN) + ((Sp * (1 - Sp))/(TN+FP))))
  Youden.sup.cl<-Youden+qnorm(1-((1-CL)/2))*sqrt(((Se * (1 - Se))/(TP+FN) + ((Sp * (1 - Sp))/(FP+TN))))
  # rm(ROC)
  rm(tab)
  # results evaluations
  reteval <- list(tabmarg=tabmarg,n=n,p=p,Se=Se,Se.cl=Se.cl,Sp=Sp,Sp.cl=Sp.cl,PLR=PLR,
    PLR.inf.cl=PLR.inf.cl,PLR.sup.cl=PLR.sup.cl,NLR=NLR,NLR.inf.cl=NLR.inf.cl,
    NLR.sup.cl=NLR.sup.cl,accu=accu,accu.cl=accu.cl,PPV=PPV,PPV.cl=PPV.cl,NPV=NPV,NPV.cl=NPV.cl,
    DOR=DOR,DOR.inf.cl=DOR.inf.cl,DOR.sup.cl=DOR.sup.cl,ET=ET,ER=ER,ER.cl=ER.cl,
    Youden=Youden,Youden.inf.cl=Youden.inf.cl,Youden.sup.cl=Youden.sup.cl,AUC=AUC,
Conf.limit=Conf.limit,reference.name=reference.name,index.name=index.name)
  class(reteval) <- "diag"
  if(print==TRUE)
    {print(reteval)}
  invisible(reteval)
}
DiagnosisMed/R/summary.diag.R0000644000175100001440000000605011343034560015562 0ustar hornikusers
summary.diag <- function(object,...){
  diag.tab <- matrix(
    c(object$n,NA,paste(formatC(object$p*100,digits=2,format="f")),NA,
      formatC(object$Se*100,digits=2,format="f"),
      paste('[',formatC(object$Se.cl[1]*100,digits=2,format="f"),'-',formatC(object$Se.cl[2]*100,digits=2,format="f"),']'),
      formatC(object$Sp*100,digits=2,format="f"),
      paste('[',formatC(object$Sp.cl[1]*100,digits=2,format="f"),'-',formatC(object$Sp.cl[2]*100,digits=2,format="f"),']'),
      formatC(object$PPV*100,digits=2,format="f"),
      paste('[',formatC(object$PPV.cl[1]*100,digits=2,format="f"),'-',formatC(object$PPV.cl[2]*100,digits=2,format="f"),']'),
      formatC(object$NPV*100,digits=2,format="f"),
      paste('[',formatC(object$NPV.cl[1]*100,digits=2,format="f"),'-',formatC(object$NPV.cl[2]*100,digits=2,format="f"),']'),
      formatC(object$PLR,digits=2,format="f"),
      paste('[',formatC(object$PLR.inf.cl,digits=2,format="f"),'-',formatC(object$PLR.sup.cl,digits=2,format="f"),']'),
      formatC(object$NLR,digits=2,format="f"),
      paste('[',formatC(object$NLR.inf.cl,digits=2,format="f"),'-',formatC(object$NLR.sup.cl,digits=2,format="f"),']'),
      formatC(object$DOR,digits=2,format="f"),
      paste('[',formatC(object$DOR.inf.cl,digits=2,format="f"),'-',formatC(object$DOR.sup.cl,digits=2,format="f"),']'),
      paste(round(object$ET,digits=2),':1',sep=''),NA,
      formatC(object$ER*100,digits=2,format="f"),
      paste('[',formatC(object$ER.cl[1]*100,digits=2,format="f"),'-',formatC(object$ER.cl[2]*100,digits=2,format="f"),']'),
      formatC(object$accu*100,digits=2,format="f"),
      paste('[',formatC(object$accu.cl[1]*100,digits=2,format="f"),'-',formatC(object$accu.cl[2]*100,digits=2,format="f"),']'),
      formatC(object$Youden,digits=4,format="f"),
paste('[',formatC(object$Youden.inf.cl,digits=4,format="f"),'-',formatC(object$Youden.sup.cl,digits=4,format="f"),']'),
      round(object$AUC,digits=4),NA
    ),nrow = 14, ncol=2, byrow=TRUE,
    dimnames = list(c('Sample size:','Prevalence(%):','Sensitivity(%):','Specificity(%):',
                      'Positive predictive value(%):','Negative predictive value(%):',
                      'Positive likelihood ratio:','Negative likelihood ratio:',
                      'Diagnostic Odds Ratio:','Error trade off (FN : FP):','Error rate(%):',
                      'Accuracy(%):','Youden index:','Area under ROC curve:'),
                    c('Estimate', paste(object$Conf.limit,'Confidence limits')))
  )
  diag.tab <- format(diag.tab,justify = "centre",na.encode = FALSE,trim = TRUE)
  colnames(diag.tab) <- format(colnames(diag.tab),justify = "centre",trim = TRUE)
  cat("--------------------------------------------------------------------------\n")
  print(diag.tab,quote=FALSE,na.print='')
  cat("--------------------------------------------------------------------------\n")
  invisible(diag.tab)
}
DiagnosisMed/R/plot.TGROC.r0000644000175100001440000001515011216333762015063 0ustar hornikusers
plot.TGROC<-function(x,...,
                     Plot="Both",
                     Plot.inc.range=TRUE,
                     Plot.Cl=FALSE,
                     Plot.cutoff="None",
                     cex=0.5,
                     cex.sub=0.85){
  if(Plot!="Both" & Plot!="Non-parametric" & Plot!="Parametric" & Plot!="None"){
    stop("Plot must be set either to 'None','Both','Non-parametric' or 'Parametric'!")
  }
  if(Plot.cutoff!="Min.MCT" & Plot.cutoff!="Se=Sp" & Plot.cutoff!="Max.Efficiency" & Plot.cutoff!="None"){
    stop("Plot.cutoff must be set either to 'None','Max.Efficiency','Min.MCT' or 'Se=Sp'!")
  }
  if(Plot!="None"){
    if(Plot=="Parametric"|Plot=="Both"){
      plot(x$parametric$test.values,x$parametric$Sensitivity,ylim=c(0,1),type="l",col=2,
           xlab="Test scale", ylab="Sensitivity & Specificity",cex=cex,lty=1)
      lines(x$parametric$test.values,x$parametric$Specificity,col=4,type="l",lty=2,cex=cex)
      if(Plot=="Both"){
lines(x$non.parametric$test.values,x$non.parametric$Specificity,col=4,type="o",lty=2,cex=cex) } leg.txt<-c("Se", "Sp") fill.col<-c(2,4) line.type<-c(1,2) subtitle<-"" } if(Plot=="Non-parametric"){ plot(x$non.parametric$test.values,x$non.parametric$Sensitivity, ylim=c(0,1),type="o",col=2, xlab="test scale",ylab="Sensitivity & Specificity", lty=1,cex=cex) lines(x$non.parametric$test.values,x$non.parametric$Specificity,col=4,type="o",lty=2,cex=cex) leg.txt<-c("Se", "Sp") fill.col<-c(2,4) line.type<-c(1,2) subtitle<-"" } if(Plot.inc.range==TRUE){ abline(h=x$inc,col="lightgray",lty=4) if(Plot=="Parametric"|Plot=="Both"){ abline(v=(x$parametric.inconclusive[1,1]),col="lightgray",lty=4) abline(v=(x$parametric.inconclusive[2,1]),col="lightgray",lty=4) subtitle<-paste("Parametric inconclusive limits at",formatC(x$inc),"level:",formatC(x$parametric.inconclusive[1,1]), "-",formatC(x$parametric.inconclusive[2,1]),".") } if(Plot=="Non-parametric"){ abline(v=(x$non.parametric.inconclusive[1,1]),col="lightgray",lty=4) abline(v=(x$non.parametric.inconclusive[2,1]),col="lightgray",lty=4) subtitle<-paste("Non-parametric inconclusive limits at",formatC(x$inc),"level:",formatC(x$non.parametric.inconclusive[1,1]), "-",formatC(x$non.parametric.inconclusive[2,1]),".") } leg.txt<-c(leg.txt,c("Inc limits")) fill.col<-c(fill.col,c("lightgray")) line.type<-c(line.type,4) } if(Plot.Cl==TRUE){ if(Plot=="Both"|Plot=="Parametric"){ lines(x$parametric$test.values,x$parametric$Se.inf.cl,lty=5,col=2) lines(x$parametric$test.values,x$parametric$Se.sup.cl,lty=5,col=2) lines(x$parametric$test.values,x$parametric$Sp.inf.cl,lty=3,col=4) lines(x$parametric$test.values,x$parametric$Sp.sup.cl,lty=3,col=4) } if(Plot=="Non-parametric"){ lines(x$non.parametric$test.values, x$non.parametric$Se.inf.cl,lty=5,col=2) lines(x$non.parametric$test.values, x$non.parametric$Se.sup.cl,lty=5,col=2) lines(x$non.parametric$test.values, x$non.parametric$Sp.inf.cl,lty=3,col=4) lines(x$non.parametric$test.values, 
x$non.parametric$Sp.sup.cl,lty=3,col=4) } leg.txt<-c(leg.txt,c("Se conf. band","Sp conf. band")) fill.col<-c(fill.col,c(2,4)) line.type<-c(line.type,5,3) } if(Plot.cutoff=="Se=Sp"){ if(Plot=="Both"|Plot=="Parametric"){ abline(v=(x$par.test.best.cutoff[1,1]),col="lightgray",lty=6) leg.txt<-c(leg.txt,c("Best cut-off")) fill.col<-c(fill.col,c("lightgray")) line.type<-c(line.type,6) subtitle<-paste(subtitle,paste("Cut-off estimated by parametric Se=Sp:",formatC(x$par.test.best.cutoff[1,1]))) } if(Plot=="Non-parametric"){ abline(v=(x$np.test.best.cutoff[1,1]),col="lightgray",lty=6) leg.txt<-c(leg.txt,c("Best cut-off")) fill.col<-c(fill.col,c("lightgray")) line.type<-c(line.type,6) subtitle<-paste(subtitle,paste("Cut-off estimated by Se=Sp:",formatC(x$np.test.best.cutoff[1,1]))) } } if(Plot.cutoff=="Max.Efficiency"){ if(Plot=="Both"|Plot=="Parametric"){ abline(v=(x$par.test.best.cutoff[2,1]),col="lightgray",lty=6) leg.txt<-c(leg.txt,c("Best cut-off")) fill.col<-c(fill.col,c("lightgray")) line.type<-c(line.type,6) subtitle<-paste(subtitle,paste("Cut-off estimated by parametric Max. Efficiency:",formatC(x$par.test.best.cutoff[2,1]),".")) #"Pop. prevalence:",formatC(pop.prevalence)))) Does not fit in the graph } if(Plot=="Non-parametric"){ abline(v=(x$np.test.best.cutoff[2,1]),col="lightgray",lty=6) leg.txt<-c(leg.txt,c("Best cut-off")) fill.col<-c(fill.col,c("lightgray")) line.type<-c(line.type,6) subtitle<-paste(subtitle,paste("Cut-off estimated by Max. Efficiency:",formatC(x$np.test.best.cutoff[2,1]),".")) #"Pop. prevalence:",formatC(pop.prevalence)))) Does not fit in the graph } } if(Plot.cutoff=="Min.MCT"){ if(Plot=="Both"|Plot=="Parametric"){ abline(v=(x$par.test.best.cutoff[3,1]),col="lightgray",lty=6) leg.txt<-c(leg.txt,c("Best cut-off")) fill.col<-c(fill.col,c("lightgray")) line.type<-c(line.type,6) subtitle<-paste(subtitle,paste("Cut-off estimated by minimizing parametric MCT:",formatC(x$np.test.best.cutoff[3,1]),".")) #,"Pop. 
prevalence:",formatC(pop.prevalence),"Cost FN/FP:",formatC(cost))) Does not fit in the graph } if(Plot=="Non-parametric"){ abline(v=(x$np.test.best.cutoff[3,1]),col="lightgray",lty=6) leg.txt<-c(leg.txt,c("Best cut-off")) fill.col<-c(fill.col,c("lightgray")) line.type<-c(line.type,6) subtitle<-paste(subtitle,paste("Cut-off estimated by Minimizing MCT:",formatC(x$np.test.best.cutoff[3,1],"."))) #,"Pop. prevalence:",formatC(pop.prevalence),"Cost FN/FP:",formatC(cost))) Does not fit in the graph } } legend("right",legend=leg.txt,col=fill.col,lty=line.type, bty="n") title(sub=subtitle,cex.sub=cex.sub) } } DiagnosisMed/R/zzz.r0000644000175100001440000000050411343034560014055 0ustar hornikusers# first and last .First.lib <- function(lib, pkg) { #require("epitools","TeachingDemos","tcltk",quietly=TRUE,warn.conflicts=FALSE) # if (.Platform$OS.type=="windows") see <- packageDescription(pkg,fields="Version") cat("'DiagnosisMed' library",see," loaded\n",sep=" ") } .Last.lib <- function(libpath) { # nothing so far } DiagnosisMed/R/print.diag.r0000644000175100001440000000505111343034560015261 0ustar hornikusersprint.diag <- function(x,...){ cat("Reference standard:",x$reference.name,"\n") cat("Index test :",x$index.name,"\n") cat("---------------------------------------------------------------\n") print(x$tabmarg) cat("The test has the following parameters [",x$Conf.limit*100,"% confidence interval]\n",sep="") cat("---------------------------------------------------------------\n") cat("Sample size: ",x$n,"\n") cat("Prevalence considered(%): ",formatC(x$p*100,digits=2,format="f"),"\n") cat("Sensitivity(%): ",formatC(x$Se*100,digits=2,format="f")," [",formatC(x$Se.cl[1]*100,digits=2,format="f")," - ",formatC(x$Se.cl[2]*100,digits=2,format="f"),"]\n") cat("Specificity(%): ",formatC(x$Sp*100,digits=2,format="f")," [",formatC(x$Sp.cl[1]*100,digits=2,format="f")," - ",formatC(x$Sp.cl[2]*100,digits=2,format="f"),"]\n") cat("Positive predictive value(%): 
",formatC(x$PPV*100,digits=2,format="f")," [",formatC(x$PPV.cl[1]*100,digits=2,format="f")," - ",formatC(x$PPV.cl[2]*100,digits=2,format="f"),"]\n") cat("Negative predictive value:(%):",formatC(x$NPV*100,digits=2,format="f")," [",formatC(x$NPV.cl[1]*100,digits=2,format="f")," - ",formatC(x$NPV.cl[2]*100,digits=2,format="f"),"]\n") cat("Positive likelihood ratio: ",formatC(x$PLR,digits=2,format="f")," [",formatC(x$PLR.inf.cl,digits=2,format="f")," - ",formatC(x$PLR.sup.cl,digits=2,format="f"),"]\n") cat("Negative likelihood ratio: ",formatC(x$NLR,digits=2,format="f")," [",formatC(x$NLR.inf.cl,digits=2,format="f")," - ",formatC(x$NLR.sup.cl,digits=2,format="f"),"]\n") cat("Diagnostic odds ratio: ",formatC(x$DOR,digits=2,format="f")," [",formatC(x$DOR.inf.cl,digits=2,format="f")," - ",formatC(x$DOR.sup.cl,digits=2,format="f"),"]\n") cat("Error trade off (FN : FP) ",round(x$ET,digits=2)," : 1 \n",sep='') cat("Error rate(%): ",formatC(x$ER*100,digits=2,format="f")," [",formatC(x$ER.cl[1]*100,digits=2,format="f")," - ",formatC(x$ER.cl[2]*100,digits=2,format="f"),"]\n") cat("Accuracy(%): ",formatC(x$accu*100,digits=2,format="f")," [",formatC(x$accu.cl[1]*100,digits=2,format="f")," - ",formatC(x$accu.cl[2]*100,digits=2,format="f"),"]\n") cat("Youden index: ",formatC(x$Youden,digits=4,format="f")," [",formatC(x$Youden.inf.cl,digits=4,format="f")," - ",formatC(x$Youden.sup.cl,digits=4,format="f"),"]\n") cat("Area under ROC curve: ",round(x$AUC,digits=4),"\n") cat("---------------------------------------------------------------\n") } DiagnosisMed/R/interact.ROC.r0000644000175100001440000000045611073666231015467 0ustar hornikusersinteract.ROC<-function(gold,test){ # require(TeachingDemos) # require(tcltk) i.ROC<-cbind(test,gold) without<-subset(i.ROC, subset=gold==0, select=test, drop = FALSE) with<-subset(i.ROC, subset=gold==1, select=test, drop = FALSE) par(ask=FALSE) roc.demo(x = without, y = with) } 
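plot.diag() earlier in this source turns a pre-test probability into a post-test probability through the positive likelihood ratio: post-test odds = pre-test odds × PLR, then the odds are converted back to a probability. The package is R; a standalone Python check of that arithmetic (the pre-test and PLR values below are arbitrary):

```python
def post_test_probability(pre, plr):
    """Post-test probability from a pre-test probability and a likelihood
    ratio, mirroring the formula used in plot.diag():
    post = (pre/(1-pre) * PLR) / (1 + pre/(1-pre) * PLR)."""
    pre_odds = pre / (1 - pre)     # pre-test odds
    post_odds = pre_odds * plr     # Bayes' rule on the odds scale
    return post_odds / (1 + post_odds)

# with PLR = 1 the test is uninformative: post-test equals pre-test
uninformative = post_test_probability(0.3, 1.0)
updated = post_test_probability(0.2, 4.5)
```

A pre-test probability of 0.2 with PLR = 4.5 gives pre-test odds 0.25 and post-test odds 1.125, i.e. a post-test probability of about 0.53.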
DiagnosisMed/R/print.TGROC.r0000644000175100001440000000225411205164441015234 0ustar hornikusers
print.TGROC<-function(x,...){
  cat(" Sample size:",x$sample.size,"\n")
  cat(" Sample prevalence:",round(x$sample.prevalence,digits = 4),"\n")
  cat(" Population prevalence:",round(x$pop.prevalence,digits = 4)," - same as sample prevalence if not informed\n")
  cat(" Informed cost - FP/FN:",round(x$cost,digits = 4),"\n")
  cat("Informed inconclusive level:",round(x$inc,digits = 4),"\n")
  cat("\n\n")
  cat("Test summary.-------------------------------------------------------------------\n")
  print(x$test.summary)
  cat("\n\n")
  cat("Non-parametric inconclusive cut-off limits with",x$inc,"inconclusive tolerance.---\n")
  print(x$non.parametric.inconclusive)
  cat("\n\n")
  cat("Non-parametric best cut-off estimations with",x$conf.limit,"confidence limits.------------\n")
  print(x$np.test.best.cutoff)
  cat("\n\n")
  cat("Parametric inconclusive cut-off limits with",x$inc,"inconclusive tolerance.--------\n")
  print(x$parametric.inconclusive)
  cat("\n\n")
  cat("Parametric best cut-off estimations with",x$conf.limit,"confidence limits.----------------\n")
  print(x$par.test.best.cutoff)
  cat("\n\n")
}
DiagnosisMed/R/TGROC.r0000644000175100001440000003023111216333762014103 0ustar hornikusers
TGROC<-function(gold,
                test,
                Cost=1,
                CL=0.95,
                Inconclusive=0.95,
                Prevalence=0,
                t.max=NULL,
                t.min=NULL,
                precision=.0001,
                n.neurons=c(1,5,1),
                learning.rate.global=1e-2,
                momentum.global=0.3,
                error.criterium="LMS",
                Stao=NA,
                hidden.layer="sigmoid",
                output.layer="sigmoid",
                method="ADAPTgdwm",
                report=FALSE,
                show.step=5000,
                n.shows=1,
                Plot="Both",
                Plot.inc.range=TRUE,
                Plot.Cl=FALSE,
                Plot.cutoff="None",
                cex=0.5,
                cex.sub=0.85,
                Print=TRUE){
  #require(epitools)
  #TP sum(test.table[i:nrow(test.table),2])
  #FP sum(test.table[i:nrow(test.table),1])
  #TN sum(test.table[1:i-1,1])
  #FN sum(test.table[1:i-1,2])
  test.table<-table(test,gold)
  if (dim(test.table)[2] != 2){
    stop("It seems that your gold standard has more than 2 categories!")
  }
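TGROC() builds one 2x2 table per observed cut-off by cumulating the gold-standard columns at or above the threshold (the #TP/#FP/#TN/#FN comments above). The package is R; a standalone Python sketch of the same bookkeeping on toy data, where a result counts as positive when test >= cut-off:

```python
def counts_at_cutoffs(values, gold):
    """For each distinct test value v (ascending), count TP/FN/FP/TN when a
    result is positive if test >= v -- the cumulated-column logic in TGROC().
    `gold` is 1 for diseased, 0 for non-diseased; the data below are toy values."""
    rows = []
    for v in sorted(set(values)):
        tp = sum(1 for x, g in zip(values, gold) if x >= v and g == 1)
        fn = sum(1 for x, g in zip(values, gold) if x < v and g == 1)
        fp = sum(1 for x, g in zip(values, gold) if x >= v and g == 0)
        tn = sum(1 for x, g in zip(values, gold) if x < v and g == 0)
        rows.append((v, tp, fn, fp, tn))
    return rows

table = counts_at_cutoffs([1, 2, 3, 4], [0, 0, 1, 1])
```

At the lowest cut-off everyone is test-positive (all diseased are TP, all non-diseased are FP); raising the cut-off trades FP for FN, which is exactly the trade-off the per-cutoff Se/Sp columns describe.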
if(is.null(precision)||!is.numeric(precision)){ stop("Precision must be set to a numeric value!") } sample.prevalence<-(sum(test.table[,2]))/(sum(test.table)) if (Prevalence==0){ pop.prevalence<-sample.prevalence } if (Prevalence>0){ (pop.prevalence<-Prevalence) } names(sample.prevalence)<-c("Disease prevalence in the sample") names(pop.prevalence)<-c("Informed disease prevalence in the population") sample.size<-sum(test.table) names(sample.size)<-c("Sample size") test.summary<-round(c(summary(test),sd(test)),digits=5) names(test.summary)<-c("Min.","1st Qu.","Median","Mean","3rd Qu.","Max.","SD") cost<-Cost names(cost)<-c("Informed costs(FN)/costs(FP)") conf.limit<-CL inc<-Inconclusive names(inc)<-"Inconclusive tolerance level" D<-sum(test.table[,2]) ND<-sum(test.table[,1]) # Taking the rownames of the test.table to be results first column test.values<-(as.numeric(rownames(unclass(test.table)))) non.parametric<-as.data.frame(test.values) # Making a table with Se Sp PLR NLR PPV NPV and its confidence limits for each cut-off for (i in 1:nrow(non.parametric)) { non.parametric$TP[i] <- sum(test.table[i:nrow(test.table),2]) non.parametric$FN[i] <- sum(test.table[1:i-1,2]) non.parametric$FP[i] <- sum(test.table[i:nrow(test.table),1]) non.parametric$TN[i] <- sum(test.table[1:i-1,1]) } non.parametric$Sensitivity <- round(non.parametric$TP/D,digits=4) non.parametric$Se.inf.cl <- round(binom.wilson(non.parametric$TP,D,conf.level=CL)[4]$lower,digits=4) non.parametric$Se.sup.cl <- round(binom.wilson(non.parametric$TP,D,conf.level=CL)[5]$upper,digits=4) non.parametric$Specificity <- round(non.parametric$TN/ND,digits=4) non.parametric$Sp.inf.cl <- round(binom.wilson(non.parametric$TN,ND,conf.level=CL)[4]$lower,digits=4) non.parametric$Sp.sup.cl <- round(binom.wilson(non.parametric$TN,ND,conf.level=CL)[5]$upper,digits=4) non.parametric$PLR<-round(non.parametric$Sensitivity/(1-non.parametric$Specificity),digits=2) 
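Further down, TGROC() scores every cut-off by three criteria: the value where Se is closest to Sp, maximum efficiency Se*p + (1-p)*Sp, and minimum misclassification cost term MCT = (1-p)(1-Sp) + r*p*(1-Se) with r = cost(FN)/cost(FP). A hedged Python sketch of that selection over a toy (cutoff, Se, Sp) table (values invented for illustration):

```python
def best_cutoffs(table, prevalence, r=1.0):
    """table: list of (cutoff, Se, Sp) rows. Returns the cut-off chosen by
    each of the three TGROC criteria: Se closest to Sp, maximum efficiency,
    and minimum misclassification cost term (r = cost(FN)/cost(FP))."""
    se_eq_sp = min(table, key=lambda t: abs(t[1] - t[2]))[0]
    max_eff = max(table, key=lambda t: t[1] * prevalence + (1 - prevalence) * t[2])[0]
    min_mct = min(table, key=lambda t: (1 - prevalence) * (1 - t[2])
                  + r * prevalence * (1 - t[1]))[0]
    return se_eq_sp, max_eff, min_mct

toy = [(1, 1.0, 0.0), (2, 0.9, 0.5), (3, 0.6, 0.7), (4, 0.2, 1.0)]
picks = best_cutoffs(toy, prevalence=0.5)
```

With equal costs and 50% prevalence, efficiency and MCT pick the same cut-off; they diverge once prevalence or the FN/FP cost ratio moves away from those values.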
  # Log-scale confidence limits for PLR (Simel et al.); the first variance
  # term is (1-Se)/(D*Se) -- the original code had Sp in that denominator,
  # which disagrees with the (correct) NLR formula used elsewhere in the package
  non.parametric$PLR.inf.cl <- round(exp(log(non.parametric$PLR) -
    qnorm(1 - (1 - CL)/2) *
    sqrt((1 - non.parametric$Sensitivity) / (D * non.parametric$Sensitivity) +
         non.parametric$Specificity / (ND * (1 - non.parametric$Specificity)))), digits = 2)
  non.parametric$PLR.sup.cl <- round(exp(log(non.parametric$PLR) +
    qnorm(1 - (1 - CL)/2) *
    sqrt((1 - non.parametric$Sensitivity) / (D * non.parametric$Sensitivity) +
         non.parametric$Specificity / (ND * (1 - non.parametric$Specificity)))), digits = 2)
  # Se = Sp cut-off
  non.parametric$Se.equals.Sp <- abs(non.parametric$Specificity - non.parametric$Sensitivity)
  # Efficiency = Se*prevalence + (1-prevalence)*Sp
  non.parametric$Efficiency <- non.parametric$Sensitivity * pop.prevalence +
    (1 - pop.prevalence) * non.parametric$Specificity
  # Misclassification cost: MCT = (1-prevalence)*(1-Sp) + r*prevalence*(1-Se),
  # with r = cost(FN)/cost(FP)
  non.parametric$MCT <- (1 - pop.prevalence) * (1 - non.parametric$Specificity) +
    Cost * pop.prevalence * (1 - non.parametric$Sensitivity)
  # A subset() call does not work here (something wrong with the row/column
  # selection), so the best cut-offs are picked with which.min/which.max:
  np.test.best.cutoff <- as.data.frame(rbind(
    non.parametric[which.min(non.parametric$Se.equals.Sp), c(1, 6:14)],
    non.parametric[which.max(non.parametric$Efficiency), c(1, 6:14)],
    non.parametric[which.min(non.parametric$MCT), c(1, 6:14)]))
  rownames(np.test.best.cutoff) <- c("Se=Sp", "Max. Efficiency", "Min. MCT")
  non.parametric.inconclusive <- as.data.frame(rbind(
    non.parametric[which.min(abs(Inconclusive - non.parametric$Sensitivity)), c(1, 6:14)],
    non.parametric[which.min(abs(Inconclusive - non.parametric$Specificity)), c(1, 6:14)]))
  rownames(non.parametric.inconclusive) <- c("Lower inconclusive", "Upper inconclusive")
  if (is.null(t.max)){
    t.max <- max(non.parametric$test.values)
  }
  if (is.null(t.min)){
    t.min <- min(non.parametric$test.values)
  }
  # Smooth Se and Sp over the test scale with a feed-forward neural network (AMORE)
  net <- newff(n.neurons = n.neurons, learning.rate.global = learning.rate.global,
               momentum.global = momentum.global, error.criterium = error.criterium,
               Stao = Stao, hidden.layer = hidden.layer, output.layer = output.layer,
               method = method)
  net.Se <- train(net, P = non.parametric$test.values, T = non.parametric$Sensitivity,
                  error.criterium = error.criterium, report = report,
                  show.step = show.step, n.shows = n.shows)
  net.Sp <- train(net, P = non.parametric$test.values, T = non.parametric$Specificity,
                  error.criterium = error.criterium, report = report,
                  show.step = show.step, n.shows = n.shows)
  test.values <- seq(t.min, t.max, precision)
  parametric <- as.data.frame(test.values)
  parametric$Sensitivity <- as.numeric(sim(net.Se$net, test.values))
  parametric$Se.inf.cl <- parametric$Sensitivity - qnorm(1 - (1 - conf.limit)/2) *
    sqrt(parametric$Sensitivity * (1 - parametric$Sensitivity) / (sample.size * sample.prevalence))
  parametric$Se.sup.cl <- parametric$Sensitivity + qnorm(1 - (1 - conf.limit)/2) *
    sqrt(parametric$Sensitivity * (1 - parametric$Sensitivity) / (sample.size * sample.prevalence))
  parametric$Specificity <- as.numeric(sim(net.Sp$net, test.values))
  parametric$Sp.inf.cl <- parametric$Specificity - qnorm(1 - (1 - conf.limit)/2) *
    sqrt(parametric$Specificity * (1 - parametric$Specificity) / (sample.size * (1 - sample.prevalence)))
  parametric$Sp.sup.cl <- parametric$Specificity + qnorm(1 - (1 - conf.limit)/2) *
    sqrt(parametric$Specificity * (1 - parametric$Specificity) / (sample.size * (1 - sample.prevalence)))
  parametric$PLR <- parametric$Sensitivity / (1 - parametric$Specificity)
  # Same Se-in-denominator fix as for the non-parametric PLR limits above
  parametric$PLR.inf.cl <- exp(log(parametric$PLR) - qnorm(1 - (1 - conf.limit)/2) *
    sqrt((1 - parametric$Sensitivity) / (sample.size * sample.prevalence * parametric$Sensitivity) +
         parametric$Specificity / (sample.size * (1 - sample.prevalence) * (1 - parametric$Specificity))))
  parametric$PLR.sup.cl <- exp(log(parametric$PLR) + qnorm(1 - (1 - conf.limit)/2) *
    sqrt((1 - parametric$Sensitivity) / (sample.size * sample.prevalence * parametric$Sensitivity) +
         parametric$Specificity / (sample.size * (1 - sample.prevalence) * (1 - parametric$Specificity))))
  # NLR and its confidence limits for the parametric table are kept commented
  # out in the original sources and are not computed here.
  parametric$Se.equals.Sp <- abs(parametric$Sensitivity - parametric$Specificity)
  parametric$Efficiency <- parametric$Sensitivity * pop.prevalence +
    (1 - pop.prevalence) * parametric$Specificity
  parametric$MCT <- (1 - pop.prevalence) * (1 - parametric$Specificity) +
    cost * pop.prevalence * (1 - parametric$Sensitivity)
  parametric.inconclusive <- as.data.frame(rbind(
    parametric[which.min(abs(inc - parametric$Sensitivity)), 1:10],
    parametric[which.min(abs(inc - parametric$Specificity)), 1:10]))
  rownames(parametric.inconclusive) <- c("Lower inconclusive", "Upper inconclusive")
  par.test.best.cutoff <- as.data.frame(rbind(
    parametric[which.min(parametric$Se.equals.Sp), 1:10],
    parametric[which.max(parametric$Efficiency), 1:10],
    parametric[which.min(parametric$MCT), 1:10]))
  rownames(par.test.best.cutoff) <- c("Se=Sp", "Max. Efficiency", "Min. MCT")
  if (non.parametric.inconclusive[1, 1] > non.parametric.inconclusive[2, 1]){
    warning("Non-parametric lower inconclusive limit is higher than upper inconclusive limit.")
  }
  if (parametric.inconclusive[1, 1] > parametric.inconclusive[2, 1]){
    warning("Parametric lower inconclusive limit is higher than upper inconclusive limit.")
  }
  if (np.test.best.cutoff[1, 1] > non.parametric.inconclusive[2, 1] |
      np.test.best.cutoff[2, 1] > non.parametric.inconclusive[2, 1] |
      np.test.best.cutoff[3, 1] > non.parametric.inconclusive[2, 1]){
    warning("At least one of the non-parametric best cut-offs is higher than the upper inconclusive limit.")
  }
  # NOTE: from here to the start of ROC() below, the original text was damaged
  # in extraction (everything between a "<" comparison and the next ">" was
  # swallowed).  The lower-limit checks are reconstructed from the parallel
  # upper-limit checks above.
  if (np.test.best.cutoff[1, 1] < non.parametric.inconclusive[1, 1] |
      np.test.best.cutoff[2, 1] < non.parametric.inconclusive[1, 1] |
      np.test.best.cutoff[3, 1] < non.parametric.inconclusive[1, 1]){
    warning("At least one of the non-parametric best cut-offs is lower than the lower inconclusive limit.")
  }
  if (par.test.best.cutoff[1, 1] > parametric.inconclusive[2, 1] |
      par.test.best.cutoff[2, 1] > parametric.inconclusive[2, 1] |
      par.test.best.cutoff[3, 1] > parametric.inconclusive[2, 1]){
    warning("At least one of the parametric best cut-offs is higher than the upper inconclusive limit.")
  }
  if (par.test.best.cutoff[1, 1] < parametric.inconclusive[1, 1] |
      par.test.best.cutoff[2, 1] < parametric.inconclusive[1, 1] |
      par.test.best.cutoff[3, 1] < parametric.inconclusive[1, 1]){
    warning("At least one of the parametric best cut-offs is lower than the lower inconclusive limit.")
  }
  # NOTE: the original tail of TGROC (including the plotting code driven by
  # Plot, Plot.inc.range, Plot.Cl, Plot.cutoff, cex and cex.sub) was lost in
  # extraction.  The return value below is reconstructed from the components
  # that print.TGROC() accesses.
  reteval <- list(sample.size = sample.size,
                  sample.prevalence = sample.prevalence,
                  pop.prevalence = pop.prevalence,
                  cost = cost,
                  conf.limit = conf.limit,
                  inc = inc,
                  test.summary = test.summary,
                  non.parametric.inconclusive = non.parametric.inconclusive,
                  np.test.best.cutoff = np.test.best.cutoff,
                  parametric.inconclusive = parametric.inconclusive,
                  par.test.best.cutoff = par.test.best.cutoff)
  class(reteval) <- "TGROC"
  if (Print == TRUE){
    print(reteval)
  }
  invisible(reteval)
}

# DiagnosisMed/R/ROC.r
# NOTE: the original file header and ROC() signature were lost in extraction;
# the argument list below is reconstructed from how the arguments are used in
# the surviving body.
ROC <- function(gold, test, CL = 0.95, Cost = 1, Prevalence = 0,
                Plot = TRUE, Plot.point = "Min.ROC.Dist", p.cex = 1,
                Full = FALSE, Print = TRUE){
  test.table <- table(test, gold)
  sample.size <- sum(test.table)
  sample.prevalence <- sum(test.table[, 2]) / sum(test.table)
  cost <- Cost
  if (Prevalence == 0){
    pop.prevalence <- sample.prevalence
  }
  if (Prevalence > 0){
    pop.prevalence <- Prevalence
  }
  if (is.numeric(gold) == TRUE){
    X <- sort(test[gold == 0])
    Y <- sort(test[gold == 1])
    # Mann-Whitney (rank-based) estimate of the area under the ROC curve
    AUC <- ((as.double(length(test[gold == 0]))) * (as.double(length(test[gold == 1]))) +
            ((as.double(length(test[gold == 0]))) *
             ((as.double(length(test[gold == 0]))) + 1))/2 -
            sum(rank(test, ties.method = "average")[gold == 0])) /
           ((as.double(length(test[gold == 0]))) * (as.double(length(test[gold == 1]))))
    AUC[AUC < 0.5] <- 1 - AUC
  }
  if (is.factor(gold) == TRUE){
    X <- sort(test[gold == "negative"])
    Y <- sort(test[gold == "positive"])
    AUC <- ((as.double(length(test[gold == "negative"]))) * (as.double(length(test[gold == "positive"]))) +
            ((as.double(length(test[gold == "negative"]))) *
             ((as.double(length(test[gold == "negative"]))) + 1))/2 -
            sum(rank(test, ties.method = "average")[gold == "negative"])) /
           ((as.double(length(test[gold == "negative"]))) * (as.double(length(test[gold == "positive"]))))
    AUC[AUC < 0.5] <- 1 - AUC
  }
  m <- as.double(length(X))
  n <- as.double(length(Y))
  test.summary <- round(c(summary(test), sd(test)), digits = 5)
  test.summary <- rbind(test.summary, round(c(summary(X), sd(X)), digits = 5))
  test.summary <- rbind(test.summary, round(c(summary(Y), sd(Y)), digits = 5))
  colnames(test.summary) <- c("Min.", "1st Qu.", "Median", "Mean", "3rd Qu.", "Max.", "SD")
  rownames(test.summary) <- c("Overall summary", "Without disease", "With disease")
  # Placement components for the variance of the AUC
  D10X <- function(Xi) {(1/n) * sum(Y >= Xi[1])}
  D01Y <- function(Yi) {(1/m) * sum(Yi[1] >= X)}
  VAR.AUC <- sum((tapply(X, X, "D10X") - AUC)^2) / (m * (m - 1)) +
             sum((tapply(Y, Y, "D01Y") - AUC)^2) / (n * (n - 1))
  SD.AUC <- sqrt(VAR.AUC)
  alpha <- 1 - CL
  AUC.summary <- c(AUC - qnorm(1 - alpha/2) * SD.AUC, AUC, AUC + qnorm(1 - alpha/2) * SD.AUC)
  D <- sum(test.table[, 2])
  ND <- sum(test.table[, 1])
  # The rownames of test.table are the observed test values; they become the
  # first column of the results table
  test.values <- as.numeric(rownames(unclass(test.table)))
  test.diag.table <- as.data.frame(test.values)
  # Build a table with Se, Sp, PLR, NLR, PPV, NPV and their confidence limits
  # for each cut-off
  for (i in 1:nrow(test.diag.table)){
    test.diag.table$TP[i] <- sum(test.table[i:nrow(test.table), 2])
    test.diag.table$FN[i] <- sum(test.table[1:i - 1, 2])
    test.diag.table$FP[i] <- sum(test.table[i:nrow(test.table), 1])
    test.diag.table$TN[i] <- sum(test.table[1:i - 1, 1])
  }
  test.diag.table$Sensitivity <- round(test.diag.table$TP / D, digits = 4)
  test.diag.table$Se.inf.cl <- round(binom.wilson(test.diag.table$TP, D, conf.level = CL)[4]$lower, digits = 4)
  test.diag.table$Se.sup.cl <- round(binom.wilson(test.diag.table$TP, D, conf.level = CL)[5]$upper, digits = 4)
  test.diag.table$Specificity <- round(test.diag.table$TN / ND, digits = 4)
  test.diag.table$Sp.inf.cl <- round(binom.wilson(test.diag.table$TN, ND, conf.level = CL)[4]$lower, digits = 4)
  test.diag.table$Sp.sup.cl <- round(binom.wilson(test.diag.table$TN, ND, conf.level = CL)[5]$upper, digits = 4)
  test.diag.table$PPV <- round(test.diag.table$TP / (test.diag.table$TP + test.diag.table$FP), digits = 4)
  # Bug fix: both PPV confidence limits must use the TP + FP denominator
  # (the original used TP + TP for the lower and TP + FN for the upper limit)
  test.diag.table$PPV.inf.cl <- round(binom.wilson(test.diag.table$TP,
    (test.diag.table$TP + test.diag.table$FP), conf.level = CL)[4]$lower, digits = 4)
  test.diag.table$PPV.sup.cl <- round(binom.wilson(test.diag.table$TP,
    (test.diag.table$TP + test.diag.table$FP), conf.level = CL)[5]$upper, digits = 4)
  test.diag.table$NPV <- round(test.diag.table$TN / (test.diag.table$TN + test.diag.table$FN), digits = 4)
  test.diag.table$NPV.inf.cl <- round(binom.wilson(test.diag.table$TN,
    (test.diag.table$TN + test.diag.table$FN), conf.level = CL)[4]$lower, digits = 4)
  test.diag.table$NPV.sup.cl <- round(binom.wilson(test.diag.table$TN,
    (test.diag.table$TN + test.diag.table$FN), conf.level = CL)[5]$upper, digits = 4)
  test.diag.table$PLR <- round(test.diag.table$Sensitivity / (1 - test.diag.table$Specificity), digits = 2)
  # Log-scale PLR limit; first variance term fixed to (1-Se)/(D*Se) to match
  # the (correct) NLR formula below
  test.diag.table$PLR.inf.cl <- round(exp(log(test.diag.table$PLR) - qnorm(1 - (1 - CL)/2) *
    sqrt((1 - test.diag.table$Sensitivity) / (D * test.diag.table$Sensitivity) +
         test.diag.table$Specificity / (ND * (1 - test.diag.table$Specificity)))), digits = 2)
  test.diag.table$PLR.sup.cl <- round(exp(log(test.diag.table$PLR) + qnorm(1 - (1 - CL)/2) *
    sqrt((1 - test.diag.table$Sensitivity) / (D * test.diag.table$Sensitivity) +
         test.diag.table$Specificity / (ND * (1 - test.diag.table$Specificity)))), digits = 2)
  test.diag.table$NLR <- round((1 - test.diag.table$Sensitivity) / test.diag.table$Specificity, digits = 2)
  test.diag.table$NLR.inf.cl <- round(exp(log(test.diag.table$NLR) - qnorm(1 - (1 - CL)/2) *
    sqrt(test.diag.table$Sensitivity / (D * (1 - test.diag.table$Sensitivity)) +
         (1 - test.diag.table$Specificity) / (ND * test.diag.table$Specificity))), digits = 2)
  test.diag.table$NLR.sup.cl <- round(exp(log(test.diag.table$NLR) + qnorm(1 - (1 - CL)/2) *
    sqrt(test.diag.table$Sensitivity / (D * (1 - test.diag.table$Sensitivity)) +
         (1 - test.diag.table$Specificity) / (ND * test.diag.table$Specificity))), digits = 2)
  test.diag.table$Accuracy <- (test.diag.table$TN + test.diag.table$TP) / sample.size
  test.diag.table$DOR <- (test.diag.table$TN * test.diag.table$TP) /
                         (test.diag.table$FP * test.diag.table$FN)
  test.diag.table$DOR <- ifelse(test.diag.table$DOR == Inf, NA, test.diag.table$DOR)
  test.diag.table$Error.rate <- (test.diag.table$FP + test.diag.table$FN) / sample.size
  test.diag.table$Accuracy.area <- (test.diag.table$TP * test.diag.table$TN) / (D * ND)
  test.diag.table$Max.Se.Sp <- test.diag.table$Sensitivity + test.diag.table$Specificity
  test.diag.table$Youden <- test.diag.table$Sensitivity + test.diag.table$Specificity - 1
  test.diag.table$Se.equals.Sp <- abs(test.diag.table$Specificity - test.diag.table$Sensitivity)
  test.diag.table$MinRocDist <- (test.diag.table$Specificity - 1)^2 + (1 - test.diag.table$Sensitivity)^2
  test.diag.table$Efficiency <- test.diag.table$Sensitivity * pop.prevalence +
    (1 - pop.prevalence) * test.diag.table$Specificity
  test.diag.table$MCT <- (1 - pop.prevalence) * (1 - test.diag.table$Specificity) +
    cost * pop.prevalence * (1 - test.diag.table$Sensitivity)
  # Build a table with the test result for each best cut-off and attach the
  # validity measures
  test.best.cutoff <- as.data.frame(rbind(
    test.diag.table[which.max(test.diag.table$Accuracy), c(1, 6:11, 18:20)],
    test.diag.table[which.max(test.diag.table$DOR), c(1, 6:11, 18:20)],
    test.diag.table[which.min(test.diag.table$Error.rate), c(1, 6:11, 18:20)],
    test.diag.table[which.max(test.diag.table$Accuracy.area), c(1, 6:11, 18:20)],
    test.diag.table[which.max(test.diag.table$Max.Se.Sp), c(1, 6:11, 18:20)],
    test.diag.table[which.max(test.diag.table$Youden), c(1, 6:11, 18:20)],
    test.diag.table[which.min(test.diag.table$Se.equals.Sp), c(1, 6:11, 18:20)],
    test.diag.table[which.min(test.diag.table$MinRocDist), c(1, 6:11, 18:20)],
    test.diag.table[which.max(test.diag.table$Efficiency), c(1, 6:11, 18:20)],
    test.diag.table[which.min(test.diag.table$MCT), c(1, 6:11, 18:20)]))
  rownames(test.best.cutoff) <- c("Max. Accuracy", "Max. DOR", "Min. Error rate",
    "Max. Accuracy area", "Max. Sens+Spec", "Max. Youden", "Se=Sp",
    "Min. ROC distance", "Max. Efficiency", "Min. MCT")
  reteval <- list(pop.prevalence = pop.prevalence,
                  sample.size = sample.size,
                  sample.prevalence = sample.prevalence,
                  test.summary = test.summary,
                  AUC.summary = AUC.summary,
                  test.table = test.table,
                  test.best.cutoff = test.best.cutoff,
                  test.diag.table = test.diag.table,
                  CL = CL,
                  cost = cost)
  class(reteval) <- "ROC"
  if (Print == TRUE){
    if (Full == TRUE){
      print(reteval, Full = TRUE)
    }
    else {
      print(reteval)
    }
  }
  # the plot commands
  if (Plot == TRUE){
    plot(reteval, Plot.point = Plot.point, p.cex = p.cex)
  }
  invisible(reteval)
}
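# ------------------------------------------------------------------------------
# Usage sketch (not part of the package sources).  A minimal, hypothetical
# example of calling ROC() on simulated data; the seed, means and object names
# below are illustrative assumptions, not taken from the package.  It assumes
# the epitools package (binom.wilson) is installed; TGROC() additionally needs
# AMORE (newff, train, sim).

```r
# Illustrative only: simulated data, assumed dependencies installed.
library(epitools)                               # binom.wilson(), used by ROC()

set.seed(42)
gold <- rep(c(0, 1), each = 100)                # 0 = non-diseased, 1 = diseased
test <- c(rnorm(100, mean = 10, sd = 2),        # non-diseased test results
          rnorm(100, mean = 14, sd = 2))        # diseased test results

roc.out <- ROC(gold, test, CL = 0.95, Plot = FALSE)
roc.out$AUC.summary       # AUC with its lower/upper confidence limits
roc.out$test.best.cutoff  # best cut-offs under the ten criteria above
```

# The same simulated gold/test pair could be passed to TGROC(), which smooths
# Se and Sp with an AMORE neural network and reports inconclusive ranges.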