Abstract
The goal of this research is to make progress toward using supervised machine learning for automated content analysis involving complex interpretations of text. In Step 1, two humans coded a sub-sample of online forum posts for relational uncertainty. In Step 2, we evaluated reliability: we trained three different classifiers to learn from those subjective human interpretations. Reliability was established when two different metrics of inter-coder reliability could not distinguish whether a human or a machine had coded the text in a separate hold-out set. Finally, in Step 3 we assessed validity. To do so, we administered a survey in which participants described their own relational uncertainty/certainty in text and completed a questionnaire. After classifying the text, we found that the machine’s classifications of the participants’ text correlated positively with the participants’ own self-reported relational uncertainty and relational satisfaction. We discuss our results in relation to computational communication science, content analysis, and interpersonal communication.
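As a rough illustration of the three-step pipeline the abstract describes, the sketch below trains a single text classifier on one coder's labels, compares machine-human agreement to human-human agreement on a hold-out set, and then correlates the machine's codes with self-reported scores. It assumes scikit-learn and SciPy; the toy posts, labels, survey responses, the classifier choice (the study trained three classifiers), and the reliability metrics shown (Cohen's kappa and percent agreement) are illustrative stand-ins, not the authors' materials or exact procedure.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Step 1 (toy stand-in): forum posts hand-coded by two humans for
# relational uncertainty (1 = uncertainty expressed, 0 = not).
posts = [
    "I have no idea where we stand anymore.",
    "We talked it through and I feel sure about us.",
    "Does she even want to be with me?",
    "Our plans for the future feel completely settled.",
    "I keep second-guessing what his silence means.",
    "I trust her and know exactly how she feels.",
] * 10  # repeated only so the toy train/test split has enough rows
coder_a = np.array([1, 0, 1, 0, 1, 0] * 10)
coder_b = np.array([1, 0, 1, 0, 1, 0] * 10)

# Step 2: train a classifier on one coder's labels, then check
# machine-human agreement against human-human agreement on a
# separate hold-out set.
X_tr, X_te, a_tr, a_te, b_tr, b_te = train_test_split(
    posts, coder_a, coder_b, test_size=0.3, random_state=0)
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(X_tr, a_tr)
machine = clf.predict(X_te)

# Two reliability metrics; if machine-human agreement is on par with
# human-human agreement, the machine is treated as a reliable coder.
print("human-human kappa:  ", cohen_kappa_score(a_te, b_te))
print("human-machine kappa:", cohen_kappa_score(a_te, machine))
print("percent agreement:  ", (machine == a_te).mean())

# Step 3 (validity): classify participants' open-ended survey text
# and correlate the machine's codes with their self-reports
# (hypothetical scores on a 7-point scale).
survey_text = ["I have no idea where we stand.",
               "I know exactly where we stand."] * 15
self_report = np.array([6.2, 1.4] * 15)
r, p = pearsonr(clf.predict(survey_text), self_report)
print(f"machine code x self-report: r = {r:.2f}, p = {p:.3f}")
```

The key design point mirrored here is that the machine is evaluated with the same inter-coder reliability metrics normally applied between human coders, rather than with conventional classifier accuracy alone.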
Original language | English |
---|---|
Pages (from-to) | 287-304 |
Number of pages | 18 |
Journal | Communication Methods and Measures |
Volume | 13 |
Issue number | 4 |
State | Published - Oct 2 2019 |
Bibliographical note
Funding Information: This work was supported by the University of Kentucky [Research and Creative Activities Program].
Publisher Copyright:
© 2019 Taylor & Francis Group, LLC.
ASJC Scopus subject areas
- Communication