Test Specificity


Test Specificity, Specificity, False Positive Rate

  • Definition
  1. Screening Test correctly negative in absence of disease
  2. A test with high Specificity has few false positives
  3. Independent of disease Prevalence in the community
  4. Specific Tests allow user to rule in or confirm a condition (mnemonic "SPin")
  • Calculation
  1. Test Specificity
    1. True negative tests per unaffected patients tested
    2. Expressed as a percentage
    3. Test Specificity = P(negative test | no disease)
      1. Where P (A | B) = Probability of A given B
  2. False Positive Rate
    1. Test positive despite absence of condition
    2. False Positive Rate = 1 - Test Specificity
  3. Example
    1. Patients without Crohn's Disease tested: 255
    2. Patients without Crohn's Disease who have a negative test: 230
    3. Specificity = 230/255 or 90%
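The calculation above can be sketched in a few lines of Python (the numbers come from the Crohn's Disease example in the text):

```python
def specificity(true_negatives: int, unaffected_tested: int) -> float:
    """Specificity = true negative tests / all unaffected patients tested."""
    return true_negatives / unaffected_tested

def false_positive_rate(spec: float) -> float:
    """False Positive Rate = 1 - Specificity."""
    return 1 - spec

# Crohn's Disease example: 255 unaffected patients tested, 230 test negative
spec = specificity(230, 255)
print(f"Specificity: {spec:.0%}")                               # 90%
print(f"False Positive Rate: {false_positive_rate(spec):.0%}")  # 10%
```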
  • Precaution
  1. Test Specificity can be misleading
  2. Example
    1. Condition A is actually present in 150 (5%) of the 3000 patients tested
    2. Therefore 2850 patients do not have condition A
    3. Test Specificity of 90% would result in a 10% False Positive Rate (of 2850) or 285 patients
    4. In this case a 90% Test Specificity would result in a false positive result in 285 patients, when only 150 actually had the condition
  3. Conclusion
    1. The lower the Prevalence of disease in the cohort tested, the higher the Test Specificity must be to keep false positives from outnumbering true positives
    2. Positive Predictive Value may be a more valuable measure as it takes the condition Prevalence into account
    3. Risk stratifying a group prior to testing can concentrate patients more likely to be positive without missing a significant number of true cases
      1. Example: Limit D-Dimer testing to the intermediate likelihood of Pulmonary Embolism group (based on Wells Score)
      2. This increases the Prevalence in the tested group and reduces the number of patients with false positive results
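The precaution and conclusion above can be made concrete with a short sketch. It reproduces the 285 expected false positives from the example and then shows how Positive Predictive Value rises with Prevalence; the 90% sensitivity used in the PPV formula is an assumed value not given in the text:

```python
def ppv(prevalence: float, sensitivity: float, spec: float) -> float:
    """Positive Predictive Value = P(disease | positive test).
    True positives / (true positives + false positives), per unit population."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - spec)
    return true_pos / (true_pos + false_pos)

# Example from the text: 3000 patients, 5% prevalence, 90% specificity
n, prevalence, spec = 3000, 0.05, 0.90
sensitivity = 0.90  # assumed for illustration; not stated in the text
false_positives = round(n * (1 - prevalence) * (1 - spec))
print(f"Expected false positives: {false_positives}")  # 285

print(f"PPV at  5% prevalence: {ppv(0.05, sensitivity, spec):.0%}")
# Risk stratification (e.g. Wells Score before D-Dimer) raises the
# prevalence within the tested group, which raises the PPV
print(f"PPV at 30% prevalence: {ppv(0.30, sensitivity, spec):.0%}")
```

At 5% prevalence, fewer than a third of positive results are true positives, even with 90% specificity, which is the point of the precaution above.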