We present and analyze a novel sparse polynomial technique for the simultaneous approximation of parameterized partial differential equations (PDEs) with deterministic and stochastic inputs. Our approach treats the numerical solution as a jointly sparse reconstruction problem, through a reformulation of the standard basis pursuit denoising problem in which the set of jointly sparse vectors is infinite. To achieve global reconstruction of sparse solutions to parameterized elliptic PDEs over both physical and parametric domains, we combine the standard measurement scheme developed for compressed sensing in the context of bounded orthonormal systems with a novel mixed-norm-based ℓ1 regularization method that exploits both energy and sparsity. In addition, we prove that, with minimal sample complexity, error estimates comparable to the best s-term and quasi-optimal approximations are achievable, while requiring only a priori bounds on the polynomial truncation error with respect to the energy norm. Finally, we perform extensive numerical experiments on several high-dimensional parameterized elliptic PDE models to demonstrate the superior recovery properties of the proposed approach.
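To make the abstract's description concrete, the following display sketches the type of mixed-norm program it refers to; the notation (index set Λ, energy space V, orthonormal polynomial basis Ψ_ν, sample points y_i, noise level η) is ours and is only an assumed reading of the formulation, the precise statement being given in the article:

\min_{(c_\nu)_{\nu \in \Lambda} \subset V} \; \sum_{\nu \in \Lambda} \| c_\nu \|_V
\quad \text{subject to} \quad
\Bigg( \frac{1}{m} \sum_{i=1}^{m} \Big\| u(y_i) - \sum_{\nu \in \Lambda} c_\nu \Psi_\nu(y_i) \Big\|_V^2 \Bigg)^{1/2} \le \eta.

Here the inner V-norm measures the energy of each Hilbert-valued coefficient, while the outer ℓ1-type sum over ν promotes joint sparsity; after spatial discretization, each c_ν becomes a row of a coefficient matrix and the program reduces to an ℓ2,1-regularized multiple-measurement-vector recovery problem. As a further illustration only (not the authors' solver), a minimal proximal-gradient sketch for the discretized, penalized analogue of such a program could read as follows; the function names and the unconstrained penalty form are assumptions made for the example.

import numpy as np

def group_soft_threshold(C, tau):
    # Prox of tau * sum_j ||C[j, :]||_2: shrink each row's l2 norm,
    # which zeros entire rows and thus promotes joint sparsity.
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-30), 0.0)
    return scale * C

def joint_sparse_ista(Phi, U, lam, n_iter=500):
    # Proximal gradient (ISTA) for
    #   min_C 0.5 * ||Phi @ C - U||_F^2 + lam * sum_j ||C[j, :]||_2,
    # where each column of U holds the sampled solution at one spatial
    # degree of freedom and each row of C is one polynomial coefficient.
    N, K = Phi.shape[1], U.shape[1]
    C = np.zeros((N, K))
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ C - U)
        C = group_soft_threshold(C - step * grad, step * lam)
    return C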
Keywords: compressed sensing, sparse recovery, polynomial expansions, convex regularization, Hilbert-valued signals, parameterized PDEs, basis pursuit denoising, joint sparsity, best approximation, high-dimensional, quasi-optimal, bounded orthonormal systems
Dexter, Nick; Tran, Hoang; Webster, Clayton. A mixed ℓ1 regularization approach for sparse simultaneous approximation of parameterized PDEs. ESAIM: Mathematical Modelling and Numerical Analysis, Volume 53 (2019) no. 6, pp. 2025-2045. doi: 10.1051/m2an/2019048. http://www.numdam.org/articles/10.1051/m2an/2019048/