This paper is a written version of the Conférence Lucien Le Cam delivered at the XXXVIIèmes Journées de Statistique in Pau, 2005. It presents an overview of some recent results on the methods of aggregation of estimators. Given a collection of estimators, aggregation procedures consist in constructing their convex or linear combination with optimally chosen random weights. We mainly focus on the link between aggregation and stochastic optimization which leads us to the construction of some new highly efficient recursive aggregation procedures.
Keywords: aggregation, stochastic optimization, mirror descent, adaptive estimation, optimal rate of aggregation
Tsybakov, Alexandre B. Agrégation d'estimateurs et optimisation stochastique. Journal de la Société française de statistique & Revue de statistique appliquée, Vol. 149 (2008) no. 1, pp. 3-26. http://www.numdam.org/item/JSFS_2008__149_1_3_0/
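By way of illustration of the aggregation idea summarised in the abstract, the sketch below combines a collection of fixed estimators into a convex combination whose weights are exponential in the estimators' cumulative squared losses. This is a generic exponential-weighting scheme, not the paper's own recursive mirror-descent procedure; the function name, the squared loss, and the temperature `eta=1.0` are illustrative assumptions.

```python
import numpy as np

def aggregate_exp_weights(estimators, X, y, eta=1.0):
    """Convex combination of fixed estimators with weights exponential
    in their cumulative squared losses over the sample (X, y)."""
    cum_loss = np.zeros(len(estimators))
    for x_t, y_t in zip(X, y):                      # one pass over the data
        preds = np.array([f(x_t) for f in estimators])
        cum_loss += (preds - y_t) ** 2              # accumulate losses
    w = np.exp(-eta * (cum_loss - cum_loss.min()))  # shift for numerical stability
    return w / w.sum()                              # a point in the simplex

# Toy usage: three constant predictors, data concentrated near 0.5,
# so the weight mass should concentrate on the estimator f(x) = 0.5.
rng = np.random.default_rng(0)
X = rng.uniform(size=200)
y = 0.5 + 0.01 * rng.normal(size=200)
estimators = [lambda x: 0.0, lambda x: 0.5, lambda x: 1.0]
w = aggregate_exp_weights(estimators, X, y)
```

The weights are data-driven but the combination stays convex (non-negative, summing to one), which is the defining feature of convex aggregation as opposed to unrestricted linear aggregation.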