Two quantitative approaches for estimating content validity

West J Nurs Res. 2003 Aug;25(5):508-18. doi: 10.1177/0193945903252998.

Abstract

Instrument content validity is often established through qualitative expert reviews, yet quantitative analysis of reviewer agreement is also advocated in the literature. Two quantitative approaches to content validity estimation were compared and contrasted using a newly developed instrument called the Osteoporosis Risk Assessment Tool (ORAT). Data obtained from a panel of eight expert judges were analyzed. A Content Validity Index (CVI) initially determined that only one item lacked interrater proportion agreement about its relevance to the instrument as a whole (CVI = 0.57). Concern that higher proportion agreement ratings might be due to random chance stimulated further analysis using a multirater kappa coefficient of agreement. An additional seven items had low kappas, ranging from 0.29 to 0.48, indicating poor agreement among the experts. The findings supported the elimination or revision of eight items. Pros and cons of using both proportion agreement and kappa coefficient analysis are examined.
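The two approaches compared in the abstract can be sketched computationally. This is an illustrative sketch only, not the authors' analysis: it assumes the common convention of an item-level CVI computed as the proportion of experts rating an item 3 or 4 on a 4-point relevance scale, and uses Fleiss' kappa as one standard multirater, chance-corrected agreement coefficient (the abstract does not specify which multirater kappa was used).

```python
def item_cvi(ratings):
    """Item-level CVI: proportion of experts rating the item relevant.

    Assumes a 4-point relevance scale where 3 and 4 count as 'relevant'
    (a common convention; the paper's exact scale is not given here).
    """
    return sum(r >= 3 for r in ratings) / len(ratings)


def fleiss_kappa(matrix):
    """Fleiss' multirater kappa, one standard chance-corrected statistic.

    matrix: one row per item; each row holds the count of raters who
    assigned that item to each category. Every item must have the same
    total number of raters.
    """
    n_items = len(matrix)
    m = sum(matrix[0])  # raters per item
    # Observed agreement for each item, averaged across items.
    p_i = [(sum(c * c for c in row) - m) / (m * (m - 1)) for row in matrix]
    p_bar = sum(p_i) / n_items
    # Chance agreement from the marginal category proportions.
    n_cats = len(matrix[0])
    totals = [sum(row[j] for row in matrix) for j in range(n_cats)]
    p_j = [t / (n_items * m) for t in totals]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)


# Hypothetical data for eight judges: seven rate the item relevant (>= 3).
print(item_cvi([4, 4, 3, 3, 4, 3, 2, 4]))   # 0.875

# Two items, eight judges each, perfect within-item agreement -> kappa = 1.
print(fleiss_kappa([[8, 0], [0, 8]]))
```

The kappa coefficient illustrates the abstract's point: a high raw proportion agreement can shrink substantially once expected chance agreement is subtracted, which is why items with acceptable CVI values can still show low kappas.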

Publication types

  • Comparative Study
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Data Interpretation, Statistical
  • Humans
  • Mass Screening / methods*
  • Mass Screening / standards
  • Nursing Assessment / standards*
  • Nursing Evaluation Research / methods*
  • Nursing Evaluation Research / standards
  • Observer Variation
  • Osteoporosis / diagnosis
  • Osteoporosis / etiology
  • Psychometrics
  • Reproducibility of Results*
  • Risk Assessment / standards*
  • Risk Factors