Abstract
To comparatively evaluate automated traceability solutions, we need standardized benchmarks. However, there is currently no consensus on how a benchmark should be constructed and used to evaluate competing techniques. In this paper we discuss recurring problems in evaluating traceability techniques, identify essential properties that evaluation methods should possess, and provide guidelines for benchmarking software traceability techniques. We illustrate the properties and guidelines with an empirical evaluation of three software traceability techniques on nine data sets.
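Benchmarked traceability techniques are conventionally scored by comparing the candidate trace links they recover against a benchmark's answer set, using metrics such as precision and recall. The sketch below illustrates that scoring step; all function names and data are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: scoring a traceability technique's candidate
# links against a benchmark answer set with precision and recall.
# The link pairs below are illustrative only.

def precision_recall(candidate_links, answer_set):
    """Compare recovered trace links to the benchmark's true links."""
    candidate = set(candidate_links)
    truth = set(answer_set)
    true_positives = len(candidate & truth)
    precision = true_positives / len(candidate) if candidate else 0.0
    recall = true_positives / len(truth) if truth else 0.0
    return precision, recall

# Example: links are (requirement, code artifact) pairs.
recovered = [("R1", "ClassA"), ("R2", "ClassB"), ("R3", "ClassC")]
truth = [("R1", "ClassA"), ("R2", "ClassB"), ("R4", "ClassD")]
p, r = precision_recall(recovered, truth)
# Two of three recovered links are correct, and two of three true
# links are found, so precision = recall = 2/3 here.
```

A benchmark run would repeat this per data set, which is one reason the paper argues for standardized answer sets and reporting.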
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings - 2015 IEEE/ACM 8th International Symposium on Software and Systems Traceability, SST 2015 |
| Pages | 61-67 |
| Number of pages | 7 |
| ISBN (Electronic) | 9780769555935 |
| DOIs | |
| State | Published - Aug 5 2015 |
| Event | 8th IEEE/ACM International Symposium on Software and Systems Traceability, SST 2015 - Florence, Italy. Duration: May 17 2015 → … |
Publication series
| Name | Proceedings - 2015 IEEE/ACM 8th International Symposium on Software and Systems Traceability, SST 2015 |
| --- | --- |
Conference
| Conference | 8th IEEE/ACM International Symposium on Software and Systems Traceability, SST 2015 |
| --- | --- |
| Country/Territory | Italy |
| City | Florence |
| Period | 5/17/15 → … |
Bibliographical note
Publisher Copyright: © 2015 IEEE.
Keywords
- Traceability
- benchmarks
- evaluation metrics
- measurement
ASJC Scopus subject areas
- Software