TY - JOUR
T1 - MGARD+: Optimizing Multilevel Methods for Error-Bounded Scientific Data Reduction
T2 - IEEE Transactions on Computers
AU - Liang, Xin
AU - Whitney, Ben
AU - Chen, Jieyang
AU - Wan, Lipeng
AU - Liu, Qing
AU - Tao, Dingwen
AU - Kress, James
AU - Pugmire, David
AU - Wolf, Matthew
AU - Podhorszki, Norbert
AU - Klasky, Scott
N1 - Publisher Copyright:
© 1968-2012 IEEE.
PY - 2022/7/1
Y1 - 2022/7/1
N2 - Nowadays, data reduction is becoming increasingly important in dealing with the large amounts of scientific data. Existing multilevel compression algorithms offer a promising way to manage scientific data at scale, but may suffer from relatively low performance and reduction quality. In this paper, we propose MGARD+, a multilevel data reduction and refactoring framework drawing on previous multilevel methods, to achieve high-performance data decomposition and high-quality error-bounded lossy compression. Our contributions are four-fold: 1) We propose to leverage a level-wise coefficient quantization method, which uses different error tolerances to quantize the multilevel coefficients. 2) We propose an adaptive decomposition method which treats the multilevel decomposition as a preconditioner and terminates the decomposition process at an appropriate level. 3) We leverage a set of algorithmic optimization strategies to significantly improve the performance of multilevel decomposition/recomposition. 4) We evaluate our proposed method using four real-world scientific datasets and compare with several state-of-the-art lossy compressors. Experiments demonstrate that our optimizations improve the decomposition/recomposition performance of the existing multilevel method by up to 70×, and the proposed compression method can improve compression ratio by up to 2× compared with other state-of-the-art error-bounded lossy compressors under the same level of data distortion.
KW - High-performance computing
KW - error control
KW - lossy compression
KW - multilevel decomposition
KW - scientific data
UR - http://www.scopus.com/inward/record.url?scp=85110855056&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85110855056&partnerID=8YFLogxK
U2 - 10.1109/TC.2021.3092201
DO - 10.1109/TC.2021.3092201
M3 - Article
AN - SCOPUS:85110855056
SN - 0018-9340
VL - 71
SP - 1522
EP - 1536
JO - IEEE Transactions on Computers
JF - IEEE Transactions on Computers
IS - 7
ER -