A BANDIT-LEARNING APPROACH TO MULTIFIDELITY APPROXIMATION

Yiming Xu, Vahid Keshavarzzadeh, Robert M. Kirby, Akil Narayan

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Multifidelity approximation is an important technique in scientific computation and simulation. In this paper, we introduce a bandit-learning approach for leveraging data of varying fidelities to achieve precise estimates of the parameters of interest. Under a linear model assumption, we formulate multifidelity approximation as a modified stochastic bandit and analyze the loss for a class of policies that uniformly explore each model before exploiting. Utilizing the estimated conditional mean-squared error, we propose a consistent algorithm, adaptive explore-then-commit (AETC), and establish a corresponding trajectorywise optimality result. These results are then extended to the case of vector-valued responses, where we demonstrate that the algorithm remains efficient without requiring the estimation of high-dimensional parameters. The main advantage of our approach is that we require neither a hierarchical model structure nor a priori knowledge of statistical information (e.g., correlations) about or between models. Instead, the AETC algorithm requires only knowledge of which model is a trusted high-fidelity model, along with (relative) computational cost estimates of querying each model. Numerical experiments are provided at the end to support our theoretical findings.
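The abstract describes a policy that uniformly explores each model before committing. As context, the following is a minimal sketch of a *generic* explore-then-commit bandit policy, not the paper's AETC estimator (which adaptively chooses the exploration length from estimated conditional mean-squared errors); the names `pull`, `n_explore`, and `horizon` are illustrative.

```python
import numpy as np

def explore_then_commit(pull, n_arms, n_explore, horizon):
    """Generic explore-then-commit policy: sample each arm n_explore
    times, then commit to the empirically best arm for the remaining
    budget. `pull(a)` returns a reward for arm a."""
    rewards = [[] for _ in range(n_arms)]
    # Exploration phase: uniform rounds over all arms.
    for _ in range(n_explore):
        for a in range(n_arms):
            rewards[a].append(pull(a))
    # Exploitation phase: commit to the best empirical mean.
    best = max(range(n_arms), key=lambda a: float(np.mean(rewards[a])))
    total = sum(sum(r) for r in rewards)
    for _ in range(horizon - n_explore * n_arms):
        total += pull(best)
    return best, total
```

AETC replaces the fixed `n_explore` with a data-driven stopping rule and trades off exploration cost against the estimated loss of the committed regression model, per the abstract.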

Original language: English
Pages (from-to): A150-A175
Journal: SIAM Journal on Scientific Computing
Volume: 44
Issue number: 1
DOI: https://doi.org/10.1137/21M1408312
State: Published - 2022

Bibliographical note

Publisher Copyright:
© 2022 Society for Industrial and Applied Mathematics

Funding

∗Submitted to the journal’s Methods and Algorithms for Scientific Computing section March 29, 2021; accepted for publication (in revised form) September 22, 2021; published electronically January 18, 2022. https://doi.org/10.1137/21M1408312

Funding: The work of the first and fourth authors was partially supported by NSF DMS-1848508. The work of the second and fourth authors was partially supported by AFOSR under award FA9550-20-1-0338. The work of the second and third authors was partially supported by ARL under cooperative agreement W911NF-12-2-0023.

†Department of Mathematics, and Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT 84112 USA ([email protected], [email protected]).
‡Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT 84112 USA ([email protected]).
§School of Computing, and Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT 84112 USA ([email protected]).

Funders and funder numbers:

• National Science Foundation: DMS-1848508
• Army Research Laboratory: W911NF-12-2-0023
• Air Force Office of Scientific Research, United States Air Force: FA9550-20-1-0338
• Also listed (no award number given): U.S. Department of Energy; Chinese Academy of Sciences; Guangzhou Municipal Science and Technology Project; Oak Ridge National Laboratory; Extreme Science and Engineering Discovery Environment; National Energy Research Scientific Computing Center; National Natural Science Foundation of China

Keywords

• Monte Carlo
• bandit learning
• consistency
• linear regression
• multifidelity

ASJC Scopus subject areas

• Computational Mathematics
• Applied Mathematics
