TY - GEN
T1 - Programming paradigm driven heap analysis
AU - Marron, Mark
AU - Lhoták, Ondřej
AU - Banerjee, Anindya
PY - 2012
Y1 - 2012
N2 - The computational cost and precision of a shape-style heap analysis depend heavily on the way method calls are handled. This paper introduces a new approach to analyzing method calls that leverages the fundamental object-oriented programming concepts of encapsulation and invariants. The analysis consists of a novel partial context-sensitivity heuristic and a new take on cutpoints that, in practice, provide large improvements in interprocedural analysis performance while having minimal impact on the precision of the results. The interprocedural analysis has been implemented for .Net bytecode and an existing abstract heap model. Using this implementation we evaluate both the runtime cost and the precision of the results on a number of well-known benchmarks and real-world programs. Our experimental evaluations show that, despite the use of partial context-sensitivity heuristics, the static analysis is able to precisely approximate the ideal analysis results. Further, the results show that the interprocedural analysis heuristics and the approach to cutpoints used in this work are critical in enabling the analysis of large real-world programs (over 30K bytecodes in under 65 seconds and using less than 130 MB of memory) that could not be analyzed with previous approaches.
UR - http://www.scopus.com/inward/record.url?scp=84859146019&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84859146019&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-28652-0_3
DO - 10.1007/978-3-642-28652-0_3
M3 - Conference contribution
AN - SCOPUS:84859146019
SN - 9783642286513
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 41
EP - 60
BT - Compiler Construction - 21st International Conference, CC 2012, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2012, Proceedings
T2 - 21st International Conference on Compiler Construction, CC 2012, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2012
Y2 - 24 March 2012 through 1 April 2012
ER -