Functions from these packages will be used throughout this document:
```r
library(conflicted)  # check for conflicting function definitions
# library(printr)    # inserts help-file output into markdown output
library(rmarkdown)   # convert R Markdown documents into a variety of formats
library(pander)      # format tables for markdown
library(ggplot2)     # graphics
library(ggfortify)   # help with graphics
library(dplyr)       # manipulate data
library(tibble)      # `tibble`s extend `data.frame`s
library(magrittr)    # `%>%` and other additional piping tools
library(haven)       # import Stata files
library(knitr)       # format R output for markdown
library(tidyr)       # tools to help create tidy data
library(plotly)      # interactive graphics
library(dobson)      # datasets from Dobson and Barnett 2018
library(parameters)  # format model output tables for markdown
library(latex2exp)   # use LaTeX in R code (for figures and tables)
library(fs)          # filesystem path manipulations
library(survival)    # survival analysis
library(survminer)   # survival analysis graphics
library(KMsurv)      # datasets from Klein and Moeschberger
library(webshot2)    # convert interactive content to static for pdf
library(forcats)     # functions for categorical variables ("factors")
library(stringr)     # functions for dealing with strings
library(lubridate)   # functions for dealing with dates and times
```
Here are some R settings I use in this document:
```r
rm(list = ls()) # delete any data that's already loaded into R

conflicts_prefer(dplyr::filter) # use the `filter()` function from dplyr by default

ggplot2::theme_set(
  ggplot2::theme_bw() +
    # ggplot2::labs(col = "") +
    ggplot2::theme(
      legend.position = "bottom",
      text = ggplot2::element_text(size = 12, family = "serif")
    )
)

knitr::opts_chunk$set(message = FALSE)
options('digits' = 6)

panderOptions("big.mark", ",")
pander::panderOptions("table.emphasize.rownames", FALSE)
pander::panderOptions("table.split.table", Inf)

legend_text_size = 9
run_graphs = TRUE
```
Classification is a core problem in statistics and machine learning: we seek to assign individuals or observations to one of several discrete categories based on available data. In medicine and epidemiology, classification problems arise constantly—for example, determining whether a patient has a disease based on test results, biomarkers, or clinical signs.
Definition 1 (Classification) A classification problem is a statistical problem in which we seek to assign observations to one of two or more discrete categories (classes) based on observed features or predictors. In the binary case, we assign each observation to one of two classes, often labeled as “positive” or “negative”, “diseased” or “healthy”, etc.
A central challenge in medical classification is interpreting test results correctly. A test may appear highly accurate in isolation, yet its predictive value for an individual patient depends heavily on the prevalence of the condition in the population being tested. Understanding this interplay requires tools from probability theory—in particular, Bayes’ theorem and the law of total probability.
In the sections below, we define the key performance measures of a diagnostic test and work through a concrete example using COVID-19 testing.
1 Diagnostic test characteristics
When evaluating a diagnostic test, we consider several key performance measures:
Definition 2 (Sensitivity) The probability that the test is positive given that the person has the disease, denoted \(\text{P}{\left(\text{positive} \mid \text{disease}\right)}\).
Definition 3 (Specificity) The probability that the test is negative given that the person does not have the disease, denoted \(\text{P}{\left(\text{negative} \mid \text{no disease}\right)}\).
Definition 4 (Positive Predictive Value (PPV)) The probability that a person has the disease given that their test is positive, denoted \(\text{P}{\left(\text{disease} \mid \text{positive}\right)}\).
Definition 5 (Negative Predictive Value (NPV)) The probability that a person does not have the disease given that their test is negative, denoted \(\text{P}{\left(\text{no disease} \mid \text{negative}\right)}\).
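The four definitions above can be computed directly from a 2×2 table of test results against true disease status. Here is a minimal sketch in R, using illustrative counts (not data from any real study):

```r
# Hypothetical 2x2 table (counts are illustrative):
#                 disease   no disease
# test positive        95            45
# test negative         5           855
TP <- 95; FN <- 5; FP <- 45; TN <- 855

sensitivity <- TP / (TP + FN) # P(positive | disease)
specificity <- TN / (TN + FP) # P(negative | no disease)
ppv         <- TP / (TP + FP) # P(disease | positive)
npv         <- TN / (TN + FN) # P(no disease | negative)

c(sensitivity = sensitivity, specificity = specificity,
  ppv = ppv, npv = npv)
```

Note that sensitivity and specificity condition on the (column) disease-status margins, while PPV and NPV condition on the (row) test-result margins; the four quantities answer different questions even though they come from the same table.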
2 Example: COVID-19 testing
Suppose we have a COVID-19 test with the following characteristics:
99% sensitive: If a person has COVID-19, the test will be positive 99% of the time
99% specific: If a person does not have COVID-19, the test will be negative 99% of the time
Suppose further that 7% of the tested population actually has COVID-19 (the prevalence). Then, applying the law of total probability and Bayes’ theorem, even with a highly accurate test (99% sensitive and 99% specific), only about 88% of people who test positive actually have COVID-19. Because the disease prevalence is relatively low, false positives make up a meaningful fraction of all positive tests.
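The calculation above can be verified directly in R, using the law of total probability to get the marginal probability of a positive test and then applying Bayes’ theorem:

```r
prevalence  <- 0.07
sensitivity <- 0.99
specificity <- 0.99

# law of total probability:
# P(positive) = P(pos | disease) P(disease) + P(pos | no disease) P(no disease)
p_positive <- sensitivity * prevalence +
  (1 - specificity) * (1 - prevalence)

# Bayes' theorem: P(disease | positive)
ppv <- sensitivity * prevalence / p_positive
ppv
#> [1] 0.881679
```

So roughly 12% of positive results are false positives, despite the test being 99% accurate in both directions.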
3 Alternative formulation
We can rearrange Bayes’ theorem to express the positive predictive value in terms of the sensitivity, specificity, and disease prevalence:

\[
\text{P}{\left(\text{disease} \mid \text{positive}\right)}
= \frac{\text{sensitivity} \cdot \text{prevalence}}
       {\text{sensitivity} \cdot \text{prevalence} + (1 - \text{specificity}) \cdot (1 - \text{prevalence})}
= \frac{1}{1 + \dfrac{1 - \text{specificity}}{\text{sensitivity}} \cdot \dfrac{1 - \text{prevalence}}{\text{prevalence}}}
\]
This final form emphasizes the ratio of the false positive rate to the sensitivity, weighted by the ratio of non-diseased to diseased individuals in the population. It shows that even with a very high sensitivity and specificity, the positive predictive value depends strongly on disease prevalence.
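To illustrate the dependence on prevalence, here is a small sketch in R: `ppv_fun()` is a hypothetical helper (not part of any package) that computes the positive predictive value from prevalence, sensitivity, and specificity, evaluated at several prevalence values with the test characteristics from the COVID-19 example:

```r
# PPV as a function of prevalence, for fixed sensitivity and specificity
# (hypothetical helper, defined here for illustration)
ppv_fun <- function(prevalence, sensitivity = 0.99, specificity = 0.99) {
  1 / (1 + ((1 - specificity) / sensitivity) *
         ((1 - prevalence) / prevalence))
}

prevalences <- c(0.001, 0.01, 0.07, 0.25, 0.50)
round(ppv_fun(prevalences), 3)
#> [1] 0.090 0.500 0.882 0.971 0.990
```

At a prevalence of 0.1%, fewer than one in ten positive results is a true positive, while at 50% prevalence the same test achieves 99% PPV; the test characteristics have not changed, only the population being tested.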