TY - GEN
T1 - Toward actionable, broadly accessible contests in software engineering
AU - Cleland-Huang, Jane
AU - Shin, Yonghee
AU - Keenan, Ed
AU - Czauderna, Adam
AU - Leach, Greg
AU - Moritz, Evan
AU - Gethers, Malcom
AU - Poshyvanyk, Denys
AU - Hayes, Jane Huffman
AU - Li, Wenbin
N1 - Copyright:
Copyright 2012 Elsevier B.V., All rights reserved.
PY - 2012
Y1 - 2012
AB - Software Engineering challenges and contests are becoming increasingly popular for focusing researchers' efforts on particular problems. Such contests tend to follow either an exploratory model, in which the contest holders provide data and ask the contestants to discover "interesting things" they can do with it, or a task-oriented model, in which contestants must perform a specific task on a provided dataset. Only occasionally do contests provide more rigorous evaluation mechanisms that precisely specify the task to be performed and the metrics that will be used to evaluate the results. In this paper, we propose actionable and crowd-sourced contests: actionable because the contest describes a precise task, datasets, and evaluation metrics, and also provides a downloadable operating environment for the contest; and crowd-sourced because providing these features makes the contest accessible to Information Technology hobbyists and students who are attracted by the challenge. Our proposed approach is illustrated using research challenges from the software traceability area as well as an experimental workbench named TraceLab.
KW - Contest
KW - Empirical Software Engineering
KW - TraceLab
KW - Traceability
UR - http://www.scopus.com/inward/record.url?scp=84864231278&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84864231278&partnerID=8YFLogxK
U2 - 10.1109/ICSE.2012.6227087
DO - 10.1109/ICSE.2012.6227087
M3 - Conference contribution
AN - SCOPUS:84864231278
SN - 9781467310673
T3 - Proceedings - International Conference on Software Engineering
SP - 1329
EP - 1332
BT - Proceedings - 34th International Conference on Software Engineering, ICSE 2012
T2 - 34th International Conference on Software Engineering, ICSE 2012
Y2 - 2 June 2012 through 9 June 2012
ER -