TY - GEN
T1 - UTS: An unbalanced tree search benchmark
T2 - 19th International Workshop on Languages and Compilers for Parallel Computing, LCPC 2006
AU - Olivier, Stephen
AU - Huan, Jun
AU - Liu, Jinze
AU - Prins, Jan
AU - Dinan, James
AU - Sadayappan, P.
AU - Tseng, Chau-Wen
PY - 2007
Y1 - 2007
N2 - This paper presents an unbalanced tree search (UTS) benchmark designed to evaluate the performance and ease of programming for parallel applications requiring dynamic load balancing. We describe algorithms for building a variety of unbalanced search trees to simulate different forms of load imbalance. We created versions of UTS in two parallel languages, OpenMP and Unified Parallel C (UPC), using work stealing as the mechanism for reducing load imbalance. We benchmarked the performance of UTS on various parallel architectures, including shared-memory systems and PC clusters. We found it simple to implement UTS in both UPC and OpenMP, due to UPC's shared-memory abstractions. Results show that both UPC and OpenMP can support efficient dynamic load balancing on shared-memory architectures. However, UPC cannot alleviate the underlying communication costs of distributed-memory systems. Since dynamic load balancing requires intensive communication, performance portability remains difficult for applications such as UTS and performance degrades on PC clusters. By varying key work stealing parameters, we expose important tradeoffs between the granularity of load balance, the degree of parallelism, and communication costs.
UR - http://www.scopus.com/inward/record.url?scp=38149069665&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=38149069665&partnerID=8YFLogxK
U2 - 10.1007/978-3-540-72521-3_18
DO - 10.1007/978-3-540-72521-3_18
M3 - Conference contribution
AN - SCOPUS:38149069665
SN - 3540725202
SN - 9783540725206
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 235
EP - 250
BT - Languages and Compilers for Parallel Computing - 19th International Workshop, LCPC 2006, Revised Papers
Y2 - 2 November 2006 through 4 November 2006
ER -