Medical students' ratings of faculty teaching in a multi-instructor setting: An examination of monotonic response patterns

Terry D. Stratton, Donald B. Witzke, Robert J. Jacob, Marlene J. Sauer, Amy Murphy-Spencer

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

The recognition that the psychometric properties of a measure may vary considerably is especially relevant in a multi-instructor context, where an implicit assumption is that student ratings are equally reliable and valid for all faculty ratees. As a possible indicator of nonattending (i.e., invalid) responses, the authors examined the effects of monotonic response patterns on the reliabilities of students' ratings of faculty teaching, including how an alternative presentation format may reduce the prevalence of this behavior. Second-year medical and dental students (n = 130) enrolled in a required basic science course during the 1998-99 academic year were randomly assigned to one of two groups, each of which evaluated the teaching of 6 different faculty across 6 distinct dimensions (i.e., overall quality, organization, preparation, stimulation, respectfulness, and helpfulness). Using a 'split ballot' design, the authors distributed two versions of a conceptually equivalent faculty evaluation form at random to students in each group. Form A contained the 'traditional' items-within-faculty format, while Form B listed faculty-within-item. The number of monotonic forms (i.e., the identical rating of all 6 items) varied measurably across faculty ratees, as did the respective effects on scale reliabilities. Alpha was especially inflated where a sizeable proportion of monotonic patterns was concentrated on response categories that were either very high (> +1.28 z) or very low (< -1.28 z) relative to the group mean. Lastly, the prevalence of monotonic response patterns was significantly (p ≤ 0.01) lower when the faculty-within-item format (Form B) was used. These findings suggest that monotonic response patterns differentially impact the reliabilities and, hence, the validity of students' ratings of individual faculty in a multi-instructor context.
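For readers who want to experiment with the ideas described in the abstract, the minimal Python sketch below (not drawn from the paper; the data, function names, and rating scale are hypothetical) illustrates the two core computations: flagging "monotonic" forms in which a student gives the identical rating to all 6 items, and comparing Cronbach's alpha with and without those forms. It also classifies each monotonic form by whether its single rating falls more than 1.28 z-score deviations above or below the group mean, mirroring the cutoff reported in the abstract.

```python
import numpy as np


def cronbach_alpha(ratings):
    """Cronbach's alpha for an (n_students x n_items) rating matrix."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)
    total_var = ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


def is_monotonic_form(row):
    """A 'monotonic' form assigns the identical rating to every item."""
    return len(set(row)) == 1


def classify_monotonic_extremity(ratings, z_cut=1.28):
    """Label monotonic forms as 'high', 'low', or 'mid' depending on whether
    the student's mean rating lies more than z_cut standard deviations
    above or below the group mean; non-monotonic forms get None."""
    ratings = np.asarray(ratings, dtype=float)
    means = ratings.mean(axis=1)
    z = (means - means.mean()) / means.std(ddof=1)
    flags = []
    for row, zi in zip(ratings, z):
        if is_monotonic_form(row):
            flags.append('high' if zi > z_cut else 'low' if zi < -z_cut else 'mid')
        else:
            flags.append(None)
    return flags


# Hypothetical 6-item ratings (1-5 scale) for a single faculty ratee:
# 20 varied responders plus 5 straight-line '5' forms.
rng = np.random.default_rng(0)
varied = rng.integers(2, 5, size=(20, 6))
monotonic_high = np.full((5, 6), 5)
data = np.vstack([varied, monotonic_high])

print("alpha, all forms:        ", round(cronbach_alpha(data), 3))
print("alpha, monotonic removed:", round(cronbach_alpha(varied), 3))
print("extremity flags:", classify_monotonic_extremity(data))
```

Running the sketch shows the effect the abstract describes: because the straight-line forms sit well above the group mean, they add correlated variance across all items and push alpha upward relative to the alpha computed from the varied responses alone.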

Original language: English
Pages (from-to): 99-116
Number of pages: 18
Journal: Advances in Health Sciences Education
Volume: 7
Issue number: 2
DOIs
State: Published - 2002

Keywords

  • faculty evaluation
  • monotonic response patterns
  • multi-instructor course
  • nonattending behaviors
  • response effects
  • validity

ASJC Scopus subject areas

  • Education

