Abstract
Human analysts working with results from automated traceability tools often make incorrect decisions that lead to lower-quality final trace matrices. Because humans must vet tool results for mission- and safety-critical systems, the hope of developing expedient and accurate tracing procedures lies in understanding how analysts work with trace matrices. This paper describes a study of when and why humans make correct and incorrect decisions during tracing tasks, based on logs of analyst actions. In addition to the traditional accuracy measures of recall and precision, we introduce and study new measures that focus on analyst work quality: potential recall, sensitivity, and effort distribution. We use these measures to visualize analyst progress toward the final trace matrix, identify factors that may influence performance, and determine how actual tracing strategies, derived from the analyst logs, affect results.
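For context on the measures the abstract names, recall and precision of a vetted trace matrix are computed against an answer set of true links. The sketch below uses hypothetical link data, and its reading of "potential recall" (the best recall the analyst could reach given only the tool's candidate links) is an illustrative assumption, not necessarily the paper's exact definition:

```python
# Trace links are modeled as (source artifact, target artifact) pairs.
true_links = {("R1", "D1"), ("R2", "D3"), ("R3", "D2")}    # answer set
candidates = {("R1", "D1"), ("R2", "D3"), ("R1", "D2")}    # tool output shown to analyst
final_matrix = {("R1", "D1"), ("R1", "D2")}                # analyst-vetted result

# Standard accuracy measures of the final matrix.
precision = len(final_matrix & true_links) / len(final_matrix)
recall = len(final_matrix & true_links) / len(true_links)

# Hedged reading of "potential recall": the ceiling on recall imposed by
# the candidate list itself, since the analyst cannot approve a true link
# the tool never suggested.
potential_recall = len(candidates & true_links) / len(true_links)

print(precision, recall, potential_recall)
```

Here the analyst's recall (1/3) falls short of the potential recall (2/3), illustrating vetting decisions that discarded a correct candidate link.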
Original language | English
Title of host publication | 2012 20th IEEE International Requirements Engineering Conference, RE 2012 - Proceedings
Pages | 31-40
Number of pages | 10
State | Published - 2012
Event | 2012 20th IEEE International Requirements Engineering Conference, RE 2012 - Chicago, IL, United States; Sep 24 2012 → Sep 28 2012
Publication series
Name | 2012 20th IEEE International Requirements Engineering Conference, RE 2012 - Proceedings
Conference
Conference | 2012 20th IEEE International Requirements Engineering Conference, RE 2012
Country/Territory | United States
City | Chicago, IL
Period | 9/24/12 → 9/28/12
Bibliographical note
Funding Information: Supported by Research Grants GM 13914 and HL 28481 from the National Institutes of Health.
Keywords
- Human Factors
- Performance Measures
- Process Improvement
- Traceability
- Tracing Strategies
ASJC Scopus subject areas
- Software