Optimal survey schemes for stochastic gradient descent with applications to M-estimation
ESAIM: Probability and Statistics, Volume 23 (2019), pp. 310-337.

Iterative stochastic approximation methods are widely used to solve M-estimation problems, in particular in the context of predictive learning. In situations that will undoubtedly become more and more common in the Big Data era, the available datasets are so massive that computing statistics over the full sample is hardly feasible, if not infeasible. A natural and popular approach to gradient descent in this context consists in replacing the “full data” statistics with counterparts computed from randomly drawn subsamples of manageable size. The main purpose of this paper is to investigate the impact of survey sampling with unequal inclusion probabilities on stochastic gradient descent-based M-estimation methods. Precisely, we prove that, in the presence of some a priori information, one may significantly improve statistical accuracy, in terms of asymptotic variance, by choosing appropriate first-order inclusion probabilities. These results are established by asymptotic theorems and supported by illustrative numerical experiments.
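To make the mechanism concrete, here is a minimal Python sketch of the kind of procedure the abstract describes: SGD in which each gradient step is computed on a Poisson survey sample drawn with unequal first-order inclusion probabilities and reweighted in the Horvitz–Thompson fashion, so that the subsample gradient is an unbiased estimate of the "full data" gradient. This is an illustration only, not the authors' code; the synthetic data, the size proxy used to build the inclusion probabilities, and all names are assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares M-estimation problem (hypothetical data):
# minimize (1/N) * sum_i 0.5 * (x_i . theta - y_i)^2 over theta.
N, d = 10_000, 5
X = rng.standard_normal((N, d))
theta_star = rng.standard_normal(d)
y = X @ theta_star + 0.1 * rng.standard_normal(N)

# Unequal first-order inclusion probabilities pi_i, built here from a crude
# "size" proxy for each unit's gradient magnitude (the a priori information),
# scaled so that the expected Poisson sample size is n.
n = 100
size = np.linalg.norm(X, axis=1) * (np.abs(y) + 1e-3)
pi = np.clip(n * size / size.sum(), 1e-6, 1.0)

theta = np.zeros(d)
for t in range(1, 2001):
    # Poisson sampling: unit i enters the sample independently with prob. pi_i.
    S = rng.random(N) < pi
    if not S.any():
        continue
    # Per-unit gradients of the squared loss on the sampled units.
    resid = X[S] @ theta - y[S]
    grads = X[S] * resid[:, None]
    # Horvitz-Thompson estimator of the full-sample mean gradient:
    # (1/N) * sum_{i in S} grad_i / pi_i, unbiased by construction.
    g_ht = (grads / pi[S][:, None]).sum(axis=0) / N
    theta -= (1.0 / t) * g_ht  # decreasing step size gamma_t = 1/t

print("||theta_hat - theta*|| =", np.linalg.norm(theta - theta_star))

Choosing the pi_i proportional to (a proxy for) the per-unit gradient magnitude, rather than uniformly, is what reduces the variance of the Horvitz-Thompson gradient estimate; the paper's asymptotic results quantify the resulting gain in limit variance.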

DOI: 10.1051/ps/2018021
Classification: 62D05
Keywords: Asymptotic analysis, central limit theorem, Horvitz–Thompson estimator, M-estimation, Poisson sampling, stochastic gradient descent, survey scheme
Clémençon, Stephan; Bertail, Patrice; Chautru, Emilie; Papa, Guillaume

@article{PS_2019__23__310_0,
     author = {Cl\'emen\c{c}on, Stephan and Bertail, Patrice and Chautru, Emilie and Papa, Guillaume},
     title = {Optimal survey schemes for stochastic gradient descent with applications to {M-estimation}},
     journal = {ESAIM: Probability and Statistics},
     pages = {310--337},
     publisher = {EDP-Sciences},
     volume = {23},
     year = {2019},
     doi = {10.1051/ps/2018021},
     mrnumber = {3963530},
     zbl = {1420.62036},
     language = {en},
     url = {http://www.numdam.org/articles/10.1051/ps/2018021/}
}
Clémençon, Stephan; Bertail, Patrice; Chautru, Emilie; Papa, Guillaume. Optimal survey schemes for stochastic gradient descent with applications to M-estimation. ESAIM: Probability and Statistics, Volume 23 (2019), pp. 310-337. doi: 10.1051/ps/2018021. http://www.numdam.org/articles/10.1051/ps/2018021/

