Are multiple objective measures of student performance necessary?

David J. Minion, Michael B. Donnelly, Rhonda C. Quick, Andrew Pulito, Richard Schwartz

Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

Background: This study examines the effect of using multiple modalities to evaluate medical students.

Methods: Thirty-four students were evaluated with a complex model combining the National Board of Medical Examiners (NBME) shelf examination, an Objective Structured Clinical Examination (OSCE), Computer Patient Simulation (CPS), and faculty and peer evaluations. Results were compared with a traditional model based on the NBME examination and faculty evaluation alone.

Results: The reliabilities (coefficient α) of the complex and traditional models were 0.72 and 0.47, respectively. Item correlations suggested that the NBME examination was the most discriminating measure (r = 0.75), followed by the OSCE (r = 0.52), peer evaluation (r = 0.43), CPS (r = 0.39), and faculty evaluation (r = 0.32). The rank-order correlation (Spearman's ρ) between scores calculated with the two models was 0.87.

Conclusions: Although the complex model had improved reliability, both models ranked students similarly. Neither model, however, fully captures the information provided by each of the individual evaluation methods.
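
The coefficient α and Spearman's ρ reported above are standard statistics. The minimal sketch below (not the authors' analysis; the 34 × 5 score matrix, column order, and data are hypothetical, invented purely for illustration) shows how each would be computed from per-student scores:

    import numpy as np
    from scipy.stats import spearmanr

    def cronbach_alpha(scores):
        """Coefficient alpha for an (n_students, n_items) score matrix."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                         # number of evaluation items
        item_vars = scores.var(axis=0, ddof=1)      # per-item variance
        total_var = scores.sum(axis=1).var(ddof=1)  # variance of composite score
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Hypothetical data: 34 students x 5 measures
    # (columns assumed to be NBME, OSCE, CPS, peer, faculty)
    rng = np.random.default_rng(0)
    scores = rng.normal(loc=70.0, scale=10.0, size=(34, 5))

    alpha = cronbach_alpha(scores)

    # Rank-order agreement between a complex model (all five measures)
    # and a traditional model (NBME and faculty only: columns 0 and 4)
    complex_total = scores.mean(axis=1)
    traditional_total = scores[:, [0, 4]].mean(axis=1)
    rho, _ = spearmanr(complex_total, traditional_total)

    print(f"coefficient alpha = {alpha:.2f}, Spearman's rho = {rho:.2f}")

With random data the printed values will not match the study's 0.72 and 0.87; the sketch only illustrates how the two statistics relate the individual measures to the composite grading models.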

Original language: English
Pages (from-to): 663-665
Number of pages: 3
Journal: American Journal of Surgery
Volume: 183
Issue number: 6
State: Published - 2002

Keywords

  • Computer simulation
  • Evaluation
  • Grading
  • National Board of Medical Examiners
  • Objective Structured Clinical Examination
  • Undergraduate medical education

ASJC Scopus subject areas

  • Surgery
