Applying the Many-Facet Rasch Measurement Model to Explore Reviewer Ratings of Conference Proposals

Kelly D. Bradley, Michael R. Peabody, Richard K. Mensah

Research output: Contribution to journal › Article › peer-review


Abstract

For academic conferences, submitted proposals are often judged on identified criteria using a rating scale by reviewers who share interest and expertise in the area under consideration. Given the multiple and varied reviewers, an analysis of psychometric properties such as rater severity and consistency is important. However, many of the problems that plague the conference proposal selection process are the same issues that plague survey research: rater bias/severity, misuse of the rating scale, and the use of raw scores as measures. We propose the use of the many-facet Rasch measurement model (MFRM) to address these shortcomings and improve the quality of the conference proposal selection process. A set of American Educational Research Association (AERA) Special Interest Group (SIG) proposals is used as an example. The results identify proposals that were accepted based on the mean of summed raw scores, but when the MFRM is applied to adjust for judge severity, the rank order of the proposals is substantially altered.
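For context, in its most common formulation (the abstract does not spell out the parameterization, so the article's exact specification may differ), the many-facet Rasch model for a proposal n rated by judge j on criterion i in rating category k can be written as:

```latex
\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
```

Here $B_n$ is the proposal's quality measure, $D_i$ the difficulty of criterion $i$, $C_j$ the severity of judge $j$, and $F_k$ the threshold of category $k$ relative to category $k-1$. Because $C_j$ enters the model directly, a proposal's estimated measure $B_n$ is adjusted for having been rated by unusually harsh or lenient judges, which is what can reorder proposals relative to a raw-score mean.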

Original language: English
Pages (from-to): 283-292
Number of pages: 10
Journal: Journal of Applied Measurement
Volume: 17
Issue number: 3
State: Published - Jan 1 2016

ASJC Scopus subject areas

  • General Medicine
