TY - GEN
T1 - Liberating the Dimension for Function Approximation and Integration
AU - Wasilkowski, G. W.
PY - 2012
Y1 - 2012
AB - We discuss recent results on the complexity and tractability of problems dealing with ∞-variate functions. Such problems, especially path integrals, arise in many areas including mathematical finance, quantum physics and chemistry, and stochastic differential equations. It is possible to replace the ∞-variate problem by one that has only d variables, since the difference between the two problems diminishes as d approaches infinity. Therefore, one could use algorithms obtained in the study of Information-Based Complexity, where problems with arbitrarily large but fixed d have been analyzed. However, to obtain optimal results, the choice of a specific value of d should be part of an efficient algorithm. This is why the approach discussed in the present paper is called liberating the dimension. Such a choice should depend on the cost of sampling d-variate functions and on the error demand ε. Indeed, as recently observed for a specific class of problems, optimal algorithms belong to a family of changing dimension algorithms, which approximate ∞-variate functions by a combination of special functions, each depending on a different set of variables. Moreover, each such set contains no more than d(ε) = O(ln(1/ε)/ln(ln(1/ε))) variables. This is why the new algorithms have total cost polynomial in 1/ε even if the cost of sampling a d-variate function is exponential in d.
UR - http://www.scopus.com/inward/record.url?scp=84893525615&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84893525615&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-27440-4_9
DO - 10.1007/978-3-642-27440-4_9
M3 - Conference contribution
AN - SCOPUS:84893525615
SN - 9783642274398
T3 - Springer Proceedings in Mathematics and Statistics
SP - 211
EP - 231
BT - Monte Carlo and Quasi-Monte Carlo Methods 2010
T2 - 9th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, MCQMC 2010
Y2 - 15 August 2010 through 20 August 2010
ER -