TY - JOUR
T1 - Crowdsourcing for assessment items to support adaptive learning
AU - Tackett, Sean
AU - Raymond, Mark
AU - Desai, Rishi
AU - Haist, Steven A.
AU - Morales, Amy
AU - Gaglani, Shiv
AU - Clyman, Stephen G.
N1 - Publisher Copyright:
© 2018 Informa UK Limited, trading as Taylor & Francis Group.
PY - 2018/8/3
Y1 - 2018/8/3
N2 - Purpose: Adaptive learning requires frequent and valid assessments for learners to track progress against their goals. This study determined whether multiple-choice questions (MCQs) “crowdsourced” from medical learners could meet the standards of many large-scale testing programs. Methods: Users of a medical education app (Osmosis.org, Baltimore, MD) volunteered to submit case-based MCQs. Eleven volunteers were selected to submit MCQs targeted to second-year medical students. Two hundred MCQs were subjected to duplicate review by a panel of internal medicine faculty who rated each item for relevance, content accuracy, and quality of response option explanations. A sample of 121 items was pretested on clinical subject exams completed by a national sample of U.S. medical students. Results: Seventy-eight percent of the 200 MCQs met faculty reviewer standards based on relevance, accuracy, and quality of explanations. Of the 121 pretested MCQs, 50% met acceptable statistical criteria. The most common reasons for exclusion were that the item was too easy or had a low discrimination index. Conclusions: Crowdsourcing can efficiently yield high-quality assessment items that meet rigorous judgmental and statistical criteria. Similar models may be adopted by students and educators to augment item pools that support adaptive learning.
AB - Purpose: Adaptive learning requires frequent and valid assessments for learners to track progress against their goals. This study determined whether multiple-choice questions (MCQs) “crowdsourced” from medical learners could meet the standards of many large-scale testing programs. Methods: Users of a medical education app (Osmosis.org, Baltimore, MD) volunteered to submit case-based MCQs. Eleven volunteers were selected to submit MCQs targeted to second-year medical students. Two hundred MCQs were subjected to duplicate review by a panel of internal medicine faculty who rated each item for relevance, content accuracy, and quality of response option explanations. A sample of 121 items was pretested on clinical subject exams completed by a national sample of U.S. medical students. Results: Seventy-eight percent of the 200 MCQs met faculty reviewer standards based on relevance, accuracy, and quality of explanations. Of the 121 pretested MCQs, 50% met acceptable statistical criteria. The most common reasons for exclusion were that the item was too easy or had a low discrimination index. Conclusions: Crowdsourcing can efficiently yield high-quality assessment items that meet rigorous judgmental and statistical criteria. Similar models may be adopted by students and educators to augment item pools that support adaptive learning.
UR - http://www.scopus.com/inward/record.url?scp=85051976260&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85051976260&partnerID=8YFLogxK
U2 - 10.1080/0142159X.2018.1490704
DO - 10.1080/0142159X.2018.1490704
M3 - Article
C2 - 30096987
AN - SCOPUS:85051976260
SN - 0142-159X
VL - 40
SP - 838
EP - 841
JO - Medical Teacher
JF - Medical Teacher
IS - 8
ER -