Theory of classification: a survey of some recent advances
ESAIM: Probability and Statistics, Volume 9 (2005), pp. 323-375.

The last few years have witnessed important new developments in the theory and practice of pattern classification. We intend to survey some of the main new ideas that have led to these recent results.

DOI: 10.1051/ps:2005018
Classification: 62G08, 60E15, 68Q32
Keywords: pattern recognition, statistical learning theory, concentration inequalities, empirical processes, model selection
@article{PS_2005__9__323_0,
     author = {Boucheron, St\'ephane and Bousquet, Olivier and Lugosi, G\'abor},
     title = {Theory of classification: a survey of some recent advances},
     journal = {ESAIM: Probability and Statistics},
     pages = {323--375},
     publisher = {EDP-Sciences},
     volume = {9},
     year = {2005},
     doi = {10.1051/ps:2005018},
     mrnumber = {2182250},
     zbl = {1136.62355},
     language = {en},
     url = {https://www.numdam.org/articles/10.1051/ps:2005018/}
}

[1] R. Ahlswede, P. Gács and J. Körner, Bounds on conditional probabilities with applications in multi-user communication. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 34 (1976) 157-177. (correction in 39 (1977) 353-354). | Zbl

[2] M.A. Aizerman, E.M. Braverman and L.I. Rozonoer, The method of potential functions for the problem of restoring the characteristic of a function converter from randomly observed points. Automat. Remote Control 25 (1964) 1546-1556. | Zbl

[3] M.A. Aizerman, E.M. Braverman and L.I. Rozonoer, The probability problem of pattern recognition learning and the method of potential functions. Automat. Remote Control 25 (1964) 1307-1323. | Zbl

[4] M.A. Aizerman, E.M. Braverman and L.I. Rozonoer, Theoretical foundations of the potential function method in pattern recognition learning. Automat. Remote Control 25 (1964) 917-936. | Zbl

[5] M.A. Aizerman, E.M. Braverman and L.I. Rozonoer, Method of potential functions in the theory of learning machines. Nauka, Moscow (1970).

[6] H. Akaike, A new look at the statistical model identification. IEEE Trans. Automat. Control 19 (1974) 716-723. | MR | Zbl

[7] S. Alesker, A remark on the Szarek-Talagrand theorem. Combin. Probab. Comput. 6 (1997) 139-144. | MR | Zbl

[8] N. Alon, S. Ben-David, N. Cesa-Bianchi and D. Haussler, Scale-sensitive dimensions, uniform convergence, and learnability. J. ACM 44 (1997) 615-631. | MR | Zbl

[9] M. Anthony and P.L. Bartlett, Neural Network Learning: Theoretical Foundations. Cambridge University Press, Cambridge (1999). | MR | Zbl

[10] M. Anthony and N. Biggs, Computational Learning Theory. Cambridge Tracts in Theoretical Computer Science (30). Cambridge University Press, Cambridge (1992). | MR | Zbl

[11] M. Anthony and J. Shawe-Taylor, A result of Vapnik with applications. Discrete Appl. Math. 47 (1993) 207-217. | Zbl

[12] A Antos, L. Devroye and L. Györfi, Lower bounds for Bayes error estimation. IEEE Trans. Pattern Anal. Machine Intelligence 21 (1999) 643-645.

[13] A. Antos, B. Kégl, T. Linder and G. Lugosi, Data-dependent margin-based generalization bounds for classification. J. Machine Learning Res. 3 (2002) 73-98. | Zbl

[14] A. Antos and G. Lugosi, Strong minimax lower bounds for learning. Machine Learning 30 (1998) 31-56. | Zbl

[15] P. Assouad, Densité et dimension. Annales de l'Institut Fourier 33 (1983) 233-282. | Numdam | Zbl

[16] J.-Y. Audibert and O. Bousquet, Pac-Bayesian generic chaining, in Advances in Neural Information Processing Systems 16, L. Saul, S. Thrun and B. Schölkopf Eds., Cambridge, Mass., MIT Press (2004).

[17] J.-Y. Audibert, PAC-Bayesian Statistical Learning Theory. Ph.D. Thesis, Université Paris 6, Pierre et Marie Curie (2004).

[18] K. Azuma, Weighted sums of certain dependent random variables. Tohoku Math. J. 68 (1967) 357-367. | Zbl

[19] Y. Baraud, Model selection for regression on a fixed design. Probability Theory and Related Fields 117 (2000) 467-493. | Zbl

[20] A.R. Barron, L. Birgé and P. Massart, Risks bounds for model selection via penalization. Probab. Theory Related Fields 113 (1999) 301-415. | Zbl

[21] A.R. Barron, Logically smooth density estimation. Technical Report TR 56, Department of Statistics, Stanford University (1985).

[22] A.R. Barron, Complexity regularization with application to artificial neural networks, in Nonparametric Functional Estimation and Related Topics, G. Roussas Ed. NATO ASI Series, Kluwer Academic Publishers, Dordrecht (1991) 561-576. | Zbl

[23] A.R. Barron and T.M. Cover, Minimum complexity density estimation. IEEE Trans. Inform. Theory 37 (1991) 1034-1054. | Zbl

[24] P. Bartlett, S. Boucheron and G. Lugosi, Model selection and error estimation. Machine Learning 48 (2001) 85-113. | Zbl

[25] P. Bartlett, O. Bousquet and S. Mendelson, Localized Rademacher complexities. Ann. Statist. 33 (2005) 1497-1537. | Zbl

[26] P.L. Bartlett and S. Ben-David, Hardness results for neural network approximation problems. Theoret. Comput. Sci. 284 (2002) 53-66. | Zbl

[27] P.L. Bartlett, M.I. Jordan and J.D. Mcauliffe, Convexity, classification, and risk bounds. J. Amer. Statis. Assoc., to appear (2005). | MR | Zbl

[28] P.L. Bartlett and W. Maass, Vapnik-Chervonenkis dimension of neural nets, in Handbook Brain Theory Neural Networks, M.A. Arbib Ed. MIT Press, second edition. (2003) 1188-1192.

[29] P.L. Bartlett and S. Mendelson, Rademacher and gaussian complexities: risk bounds and structural results. J. Machine Learning Res. 3 (2002) 463-482. | Zbl

[30] P. L. Bartlett, S. Mendelson and P. Philips, Local Complexities for Empirical Risk Minimization, in Proc. of the 17th Annual Conference on Learning Theory (COLT), Springer (2004). | MR | Zbl

[31] O. Bashkirov, E.M. Braverman and I.E. Muchnik, Potential function algorithms for pattern recognition learning machines. Automat. Remote Control 25 (1964) 692-695. | Zbl

[32] S. Ben-David, N. Eiron and H.-U. Simon, Limitations of learning via embeddings in Euclidean half spaces. J. Machine Learning Res. 3 (2002) 441-461. | Zbl

[33] G. Bennett, Probability inequalities for the sum of independent random variables. J. Amer. Statis. Assoc. 57 (1962) 33-45. | Zbl

[34] S.N. Bernstein, The Theory of Probabilities. Gostehizdat Publishing House, Moscow (1946).

[35] L. Birgé, An alternative point of view on Lepski's method, in State of the art in probability and statistics (Leiden, 1999), Inst. Math. Statist., Beachwood, OH, IMS Lecture Notes Monogr. Ser. 36 (2001) 113-133.

[36] L. Birgé and P. Massart, Rates of convergence for minimum contrast estimators. Probab. Theory Related Fields 97 (1993) 113-150. | Zbl

[37] L. Birgé and P. Massart, From model selection to adaptive estimation, in Festschrift for Lucien Le Cam: Research papers in Probability and Statistics, E. Torgersen D. Pollard and G. Yang Eds., Springer, New York (1997) 55-87. | Zbl

[38] L. Birgé and P. Massart, Minimum contrast estimators on sieves: exponential bounds and rates of convergence. Bernoulli 4 (1998) 329-375. | Zbl

[39] G. Blanchard, O. Bousquet and P. Massart, Statistical performance of support vector machines. Ann. Statist., to appear (2006). | MR | Zbl

[40] G. Blanchard, G. Lugosi and N. Vayatis, On the rates of convergence of regularized boosting classifiers. J. Machine Learning Res. 4 (2003) 861-894. | Zbl

[41] A. Blumer, A. Ehrenfeucht, D. Haussler and M.K. Warmuth, Learnability and the Vapnik-Chervonenkis dimension. J. ACM 36 (1989) 929-965. | Zbl

[42] S. Bobkov and M. Ledoux, Poincaré's inequalities and Talagrands's concentration phenomenon for the exponential distribution. Probab. Theory Related Fields 107 (1997) 383-400. | Zbl

[43] B. Boser, I. Guyon and V.N. Vapnik, A training algorithm for optimal margin classifiers, in Proc. of the Fifth Annual ACM Workshop on Computational Learning Theory (COLT). Association for Computing Machinery, New York, NY (1992) 144-152.

[44] S. Boucheron, O. Bousquet, G. Lugosi and P. Massart, Moment inequalities for functions of independent random variables. Ann. Probab. 33 (2005) 514-560. | Zbl

[45] S. Boucheron, G. Lugosi and P. Massart, A sharp concentration inequality with applications. Random Structures Algorithms 16 (2000) 277-292. | Zbl

[46] S. Boucheron, G. Lugosi and P. Massart, Concentration inequalities using the entropy method. Ann. Probab. 31 (2003) 1583-1614. | Zbl

[47] O. Bousquet, A Bennett concentration inequality and its application to suprema of empirical processes. C. R. Acad. Sci. Paris 334 (2002) 495-500. | Zbl

[48] O. Bousquet, Concentration inequalities for sub-additive functions using the entropy method, in Stochastic Inequalities and Applications, C. Houdré E. Giné and D. Nualart Eds., Birkhauser (2003). | MR | Zbl

[49] O. Bousquet and A. Elisseeff, Stability and generalization. J. Machine Learning Res. 2 (2002) 499-526. | Zbl

[50] O. Bousquet, V. Koltchinskii and D. Panchenko, Some local measures of complexity of convex hulls and generalization bounds, in Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT), Springer (2002) 59-73. | Zbl

[51] L. Breiman, Arcing classifiers. Ann. Statist. 26 (1998) 801-849. | Zbl

[52] L. Breiman, Some infinite theory for predictor ensembles. Ann. Statist. 32 (2004) 1-11. | Zbl

[53] L. Breiman, J.H. Friedman, R.A. Olshen and C.J. Stone, Classification and Regression Trees. Wadsworth International, Belmont, CA (1984). | MR | Zbl

[54] P. Bühlmann and B. Yu, Boosting with the l2-loss: Regression and classification. J. Amer. Statis. Assoc. 98 (2004) 324-339. | Zbl

[55] A. Cannon, J.M. Ettinger, D. Hush and C. Scovel, Machine learning with data dependent hypothesis classes. J. Machine Learning Res. 2 (2002) 335-358. | Zbl

[56] G. Castellan, Density estimation via exponential model selection. IEEE Trans. Inform. Theory 49 (2003) 2052-2060.

[57] O. Catoni, Randomized estimators and empirical complexity for pattern recognition and least square regression. Preprint PMA-677.

[58] O. Catoni, Statistical learning theory and stochastic optimization. École d'été de Probabilités de Saint-Flour XXXI. Springer-Verlag. Lect. Notes Math. 1851 (2004). | Zbl

[59] O. Catoni, Localized empirical complexity bounds and randomized estimators (2003). Preprint.

[60] N. Cesa-Bianchi and D. Haussler, A graph-theoretic generalization of the Sauer-Shelah lemma. Discrete Appl. Math. 86 (1998) 27-35. | Zbl

[61] M. Collins, R.E. Schapire and Y. Singer, Logistic regression, AdaBoost and Bregman distances. Machine Learning 48 (2002) 253-285. | Zbl

[62] C. Cortes and V.N. Vapnik, Support vector networks. Machine Learning 20 (1995) 1-25. | Zbl

[63] T.M. Cover, Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Trans. Electronic Comput. 14 (1965) 326-334. | Zbl

[64] P. Craven and G. Wahba, Smoothing noisy data with spline functions: estimating the correct degree of smoothing by the method of generalized cross-validation. Numer. Math. 31 (1979) 377-403. | Zbl

[65] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and other kernel-based learning methods. Cambridge University Press, Cambridge, UK (2000). | Zbl

[66] I. Csiszár, Large-scale typicality of Markov sample paths and consistency of MDL order estimators. IEEE Trans. Inform. Theory 48 (2002) 1616-1628. | Zbl

[67] I. Csiszár and P. Shields, The consistency of the BIC Markov order estimator. Ann. Statist. 28 (2000) 1601-1619. | Zbl

[68] F. Cucker and S. Smale, On the mathematical foundations of learning. Bull. Amer. Math. Soc. (2002) 1-50. | Zbl

[69] A. Dembo, Information inequalities and concentration of measure. Ann. Probab. 25 (1997) 927-939. | Zbl

[70] P.A. Devijver and J. Kittler, Pattern Recognition: A Statistical Approach. Prentice-Hall, Englewood Cliffs, NJ (1982). | MR | Zbl

[71] L. Devroye, Automatic pattern recognition: A study of the probability of error. IEEE Trans. Pattern Anal. Machine Intelligence 10 (1988) 530-543. | Zbl

[72] L. Devroye, L. Györfi and G. Lugosi, A Probabilistic Theory of Pattern Recognition. Springer-Verlag, New York (1996). | MR | Zbl

[73] L. Devroye and G. Lugosi, Lower bounds in pattern recognition and learning. Pattern Recognition 28 (1995) 1011-1018.

[74] L. Devroye and T. Wagner, Distribution-free inequalities for the deleted and holdout error estimates. IEEE Trans. Inform. Theory 25(2) (1979) 202-207. | Zbl

[75] L. Devroye and T. Wagner, Distribution-free performance bounds for potential function rules. IEEE Trans. Inform. Theory 25(5) (1979) 601-604. | Zbl

[76] D.L. Donoho and I.M. Johnstone, Ideal spatial adaptation by wavelet shrinkage. Biometrika 81(3) (1994) 425-455. | Zbl

[77] R.O. Duda and P.E. Hart, Pattern Classification and Scene Analysis. John Wiley, New York (1973). | Zbl

[78] R.O. Duda, P.E. Hart and D.G. Stork, Pattern Classification. John Wiley and Sons (2000). | MR | Zbl

[79] R.M. Dudley, Central limit theorems for empirical measures. Ann. Probab. 6 (1978) 899-929. | Zbl

[80] R.M. Dudley, Balls in Rk do not cut all subsets of k+2 points. Advances Math. 31 (3) (1979) 306-308. | Zbl

[81] R.M. Dudley, Empirical processes, in École de Probabilité de St. Flour 1982. Lect. Notes Math. 1097 (1984). | Zbl

[82] R.M. Dudley, Universal Donsker classes and metric entropy. Ann. Probab. 15 (1987) 1306-1326. | Zbl

[83] R.M. Dudley, Uniform Central Limit Theorems. Cambridge University Press, Cambridge (1999). | MR | Zbl

[84] R.M. Dudley, E. Giné and J. Zinn, Uniform and universal Glivenko-Cantelli classes. J. Theoret. Probab. 4 (1991) 485-510. | Zbl

[85] B. Efron, Bootstrap methods: another look at the jackknife. Ann. Statist. 7 (1979) 1-26. | Zbl

[86] B. Efron, The jackknife, the bootstrap, and other resampling plans. SIAM, Philadelphia (1982). | MR | Zbl

[87] B. Efron and R.J. Tibshirani, An Introduction to the Bootstrap. Chapman and Hall, New York (1994). | MR | Zbl

[88] A. Ehrenfeucht, D. Haussler, M. Kearns and L. Valiant, A general lower bound on the number of examples needed for learning. Inform. Comput. 82 (1989) 247-261. | Zbl

[89] T. Evgeniou, M. Pontil and T. Poggio, Regularization networks and support vector machines, in Advances in Large Margin Classifiers, A.J. Smola, P.L. Bartlett B. Schölkopf and D. Schuurmans, Eds., Cambridge, MA, MIT Press. (2000) 171-203.

[90] P. Frankl, On the trace of finite sets. J. Combin. Theory, Ser. A 34 (1983) 41-45. | Zbl

[91] Y. Freund, Boosting a weak learning algorithm by majority. Inform. Comput. 121 (1995) 256-285. | Zbl

[92] Y. Freund, Self bounding learning algorithms, in Proceedings of the 11th Annual Conference on Computational Learning Theory (1998) 127-135.

[93] Y. Freund, Y. Mansour and R.E. Schapire, Generalization bounds for averaged classifiers (how to be a Bayesian without believing). Ann. Statist. (2004). | MR | Zbl

[94] Y. Freund and R. Schapire, A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 55 (1997) 119-139. | Zbl

[95] J. Friedman, T. Hastie and R. Tibshirani, Additive logistic regression: a statistical view of boosting. Ann. Statist. 28 (2000) 337-374. | Zbl

[96] M. Fromont, Some problems related to model selection: adaptive tests and bootstrap calibration of penalties. Thèse de doctorat, Université Paris-Sud (December 2003).

[97] K. Fukunaga, Introduction to Statistical Pattern Recognition. Academic Press, New York (1972). | MR | Zbl

[98] E. Giné, Empirical processes and applications: an overview. Bernoulli 2 (1996) 1-28. | Zbl

[99] E. Giné and J. Zinn, Some limit theorems for empirical processes. Ann. Probab. 12 (1984) 929-989. | Zbl

[100] E. Giné, Lectures on some aspects of the bootstrap, in Lectures on probability theory and statistics (Saint-Flour, 1996). Lect. Notes Math. 1665 (1997) 37-151. | Zbl

[101] P. Goldberg and M. Jerrum, Bounding the Vapnik-Chervonenkis dimension of concept classes parametrized by real numbers. Machine Learning 18 (1995) 131-148. | Zbl

[102] U. Grenander, Abstract inference. John Wiley & Sons Inc., New York (1981). | MR | Zbl

[103] P. Hall, Large sample optimality of least squares cross-validation in density estimation. Ann. Statist. 11 (1983) 1156-1174. | Zbl

[104] T. Hastie, R. Tibshirani and J. Friedman, The Elements of Statistical Learning. Springer Series in Statistics. Springer-Verlag, New York (2001). | MR | Zbl

[105] D. Haussler, Decision theoretic generalizations of the pac model for neural nets and other learning applications. Inform. Comput. 100 (1992) 78-150. | Zbl

[106] D. Haussler, Sphere packing numbers for subsets of the boolean n-cube with bounded Vapnik-Chervonenkis dimension. J. Combin. Theory, Ser. A 69 (1995) 217-232. | Zbl

[107] D. Haussler, N. Littlestone and M. Warmuth, Predicting {0,1} functions from randomly drawn points, in Proc. of the 29th IEEE Symposium on the Foundations of Computer Science, IEEE Computer Society Press, Los Alamitos, CA (1988) 100-109.

[108] R. Herbrich and R.C. Williamson, Algorithmic luckiness. J. Machine Learning Res. 3 (2003) 175-212. | Zbl

[109] W. Hoeffding, Probability inequalities for sums of bounded random variables. J. Amer. Statist. Assoc. 58 (1963) 13-30. | Zbl

[110] P. Huber, The behavior of the maximum likelihood estimates under non-standard conditions, in Proc. Fifth Berkeley Symposium on Probability and Mathematical Statistics, Univ. California Press (1967) 221-233. | Zbl

[111] W. Jiang, Process consistency for adaboost. Ann. Statist. 32 (2004) 13-29. | Zbl

[112] D.S. Johnson and F.P. Preparata, The densest hemisphere problem. Theoret. Comput. Sci. 6 (1978) 93-107. | Zbl

[113] I. Johnstone, Function estimation and gaussian sequence models. Technical Report. Department of Statistics, Stanford University (2002).

[114] M. Karpinski and A. Macintyre, Polynomial bounds for vc dimension of sigmoidal and general pfaffian neural networks. J. Comput. Syst. Sci. 54 (1997). | MR | Zbl

[115] M. Kearns, Y. Mansour, A.Y. Ng and D. Ron, An experimental and theoretical comparison of model selection methods, in Proc. of the Eighth Annual ACM Workshop on Computational Learning Theory, Association for Computing Machinery, New York (1995) 21-30.

[116] M.J. Kearns and D. Ron, Algorithmic stability and sanity-check bounds for leave-one-out cross-validation. Neural Comput. 11(6) (1999) 1427-1453.

[117] M.J. Kearns and U.V. Vazirani, An Introduction to Computational Learning Theory. MIT Press, Cambridge, Massachusetts (1994). | MR

[118] A.G. Khovanskii, Fewnomials. Translations of Mathematical Monographs 88, American Mathematical Society (1991). | MR | Zbl

[119] J.C. Kieffer, Strongly consistent code-based identification and order estimation for constrained finite-state model classes. IEEE Trans. Inform. Theory 39 (1993) 893-902. | Zbl

[120] G.S. Kimeldorf and G. Wahba, A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. Ann. Math. Statist. 41 (1970) 495-502. | Zbl

[121] P. Koiran and E.D. Sontag, Neural networks with quadratic vc dimension. J. Comput. Syst. Sci. 54 (1997). | MR | Zbl

[122] A.N. Kolmogorov, On the representation of continuous functions of several variables by superposition of continuous functions of one variable and addition. Dokl. Akad. Nauk SSSR 114 (1957) 953-956. | Zbl

[123] A.N. Kolmogorov and V.M. Tikhomirov, ε-entropy and ε-capacity of sets in functional spaces. Amer. Math. Soc. Transl., Ser. 2 17 (1961) 277-364. | Zbl

[124] V. Koltchinskii, Rademacher penalties and structural risk minimization. IEEE Trans. Inform. Theory 47 (2001) 1902-1914. | Zbl

[125] V. Koltchinskii, Local Rademacher complexities and oracle inequalities in risk minimization. Manuscript (September 2003). | Zbl

[126] V. Koltchinskii and D. Panchenko, Rademacher processes and bounding the risk of function learning, in High Dimensional Probability II, E. Giné, D.M. Mason and J.A. Wellner, Eds. (2000) 443-459. | Zbl

[127] V. Koltchinskii and D. Panchenko, Empirical margin distributions and bounding the generalization error of combined classifiers. Ann. Statist. 30 (2002). | MR | Zbl

[128] S. Kulkarni, G. Lugosi and S. Venkatesh, Learning pattern classification - a survey. IEEE Trans. Inform. Theory 44 (1998) 2178-2206. Information Theory: 1948-1998. Commemorative special issue. | Zbl

[129] S. Kutin and P. Niyogi, Almost-everywhere algorithmic stability and generalization error, in UAI-2002: Uncertainty in Artificial Intelligence (2002).

[130] J. Langford and M. Seeger, Bounds for averaging classifiers. CMU-CS 01-102, Carnegie Mellon University (2001).

[131] M. Ledoux, Isoperimetry and gaussian analysis in Lectures on Probability Theory and Statistics, P. Bernard Ed., École d'Été de Probabilités de St-Flour XXIV-1994 (1996) 165-294. | Zbl

[132] M. Ledoux, On Talagrand's deviation inequalities for product measures. ESAIM: PS 1 (1997) 63-87. | Numdam | Zbl

[133] M. Ledoux and M. Talagrand, Probability in Banach Space. Springer-Verlag, New York (1991). | MR | Zbl

[134] W.S. Lee, P.L. Bartlett and R.C. Williamson, The importance of convexity in learning with squared loss. IEEE Trans. Inform. Theory 44 (1998) 1974-1980. | Zbl

[135] O.V. Lepskiĭ, E. Mammen and V.G. Spokoiny, Optimal spatial adaptation to inhomogeneous smoothness: an approach based on kernel estimates with variable bandwidth selectors. Ann. Statist. 25 (1997) 929-947. | Zbl

[136] O.V. Lepskiĭ, A problem of adaptive estimation in Gaussian white noise. Teor. Veroyatnost. i Primenen. 35 (1990) 459-470. | Zbl

[137] O.V. Lepskiĭ, Asymptotically minimax adaptive estimation. I. Upper bounds. Optimally adaptive estimates. Teor. Veroyatnost. i Primenen. 36 (1991) 645-659. | Zbl

[138] Y. Li, P.M. Long and A. Srinivasan, Improved bounds on the sample complexity of learning. J. Comput. Syst. Sci. 62 (2001) 516-527. | Zbl

[139] Y. Lin, A note on margin-based loss functions in classification. Technical Report 1029r, Department of Statistics, University Wisconsin, Madison (1999). | Zbl

[140] Y. Lin, Some asymptotic properties of the support vector machine. Technical Report 1044r, Department of Statistics, University of Wisconsin, Madison (1999).

[141] Y. Lin, Support vector machines and the bayes rule in classification. Data Mining and Knowledge Discovery 6 (2002) 259-275.

[142] F. Lozano, Model selection using Rademacher penalization, in Proceedings of the Second ICSC Symposia on Neural Computation (NC2000). ICSC Adademic Press (2000).

[143] M.J. Luczak and C. Mcdiarmid, Concentration for locally acting permutations. Discrete Math. 265 (2003) 159-171. | Zbl

[144] G. Lugosi, Pattern classification and learning theory, in Principles of Nonparametric Learning, L. Györfi Ed., Springer, Wien (2002) 5-62.

[145] G. Lugosi and A. Nobel, Adaptive model selection using empirical complexities. Ann. Statist. 27 (1999) 1830-1864. | Zbl

[146] G. Lugosi and N. Vayatis, On the Bayes-risk consistency of regularized boosting methods. Ann. Statist. 32 (2004) 30-55. | Zbl

[147] G. Lugosi and M. Wegkamp, Complexity regularization via localized random penalties. Ann. Statist. 2 (2004) 1679-1697. | Zbl

[148] G. Lugosi and K. Zeger, Concept learning using complexity regularization. IEEE Trans. Inform. Theory 42 (1996) 48-54. | Zbl

[149] A. Macintyre and E.D. Sontag, Finiteness results for sigmoidal “neural” networks, in Proc. of the 25th Annual ACM Symposium on the Theory of Computing, Association of Computing Machinery, New York (1993) 325-334.

[150] C.L. Mallows, Some comments on Cp. Technometrics 15 (1997) 661-675. | Zbl

[151] E. Mammen and A. Tsybakov, Smooth discrimination analysis. Ann. Statist. 27(6) (1999) 1808-1829. | Zbl

[152] S. Mannor and R. Meir, Weak learners and improved convergence rate in boosting, in Advances in Neural Information Processing Systems 13: Proc. NIPS'2000 (2001).

[153] S. Mannor, R. Meir and T. Zhang, The consistency of greedy algorithms for classification, in Proceedings of the 15th Annual Conference on Computational Learning Theory (2002). | MR | Zbl

[154] K. Marton, A simple proof of the blowing-up lemma. IEEE Trans. Inform. Theory 32 (1986) 445-446. | Zbl

[155] K. Marton, Bounding d¯-distance by informational divergence: a way to prove measure concentration. Ann. Probab. 24 (1996) 857-866. | Zbl

[156] K. Marton, A measure concentration inequality for contracting Markov chains. Geometric Functional Analysis 6 (1996) 556-571. Erratum: 7 (1997) 609-613. | Zbl

[157] L. Mason, J. Baxter, P.L. Bartlett and M. Frean, Functional gradient techniques for combining hypotheses, in Advances in Large Margin Classifiers, A.J. Smola, P.L. Bartlett, B. Schölkopf and D. Schuurmans Eds., MIT Press, Cambridge, MA (1999) 221-247.

[158] P. Massart, Optimal constants for Hoeffding type inequalities. Technical report, Mathematiques, Université de Paris-Sud, Report 98.86, 1998.

[159] P. Massart, About the constants in Talagrand's concentration inequalities for empirical processes. Ann. Probab. 28 (2000) 863-884. | Zbl

[160] P. Massart, Some applications of concentration inequalities to statistics. Ann. Fac. Sci. Toulouse IX (2000) 245-303. | Numdam | Zbl

[161] P. Massart, École d'Eté de Probabilité de Saint-Flour XXXIII, chapter Concentration inequalities and model selection, LNM. Springer-Verlag (2003).

[162] P. Massart and E. Nédélec, Risk bounds for statistical learning, Ann. Statist., to appear. | MR | Zbl

[163] D.A. Mcallester, Some pac-Bayesian theorems, in Proc. of the 11th Annual Conference on Computational Learning Theory, ACM Press (1998) 230-234.

[164] D.A. Mcallester, pac-Bayesian model averaging, in Proc. of the 12th Annual Conference on Computational Learning Theory. ACM Press (1999). | MR

[165] D.A. Mcallester, pac-Bayesian stochastic model selection. Machine Learning 51 (2003) 5-21. | Zbl

[166] C. Mcdiarmid, On the method of bounded differences, in Surveys in Combinatorics 1989, Cambridge University Press, Cambridge (1989) 148-188. | Zbl

[167] C. Mcdiarmid, Concentration, in Probabilistic Methods for Algorithmic Discrete Mathematics, M. Habib, C. McDiarmid, J. Ramirez-Alfonsin and B. Reed Eds., Springer, New York (1998) 195-248. | Zbl

[168] C. Mcdiarmid, Concentration for independent permutations. Combin. Probab. Comput. 2 (2002) 163-178. | Zbl

[169] G.J. Mclachlan, Discriminant Analysis and Statistical Pattern Recognition. John Wiley, New York (1992). | MR | Zbl

[170] S. Mendelson, Improving the sample complexity using global data. IEEE Trans. Inform. Theory 48 (2002) 1977-1991. | Zbl

[171] S. Mendelson, A few notes on statistical learning theory, in Advanced Lectures in Machine Learning. Lect. Notes Comput. Sci. 2600, S. Mendelson and A. Smola Eds., Springer (2003) 1-40. | Zbl

[172] S. Mendelson and P. Philips, On the importance of “small” coordinate projections. J. Machine Learning Res. 5 (2004) 219-238.

[173] S. Mendelson and R. Vershynin, Entropy and the combinatorial dimension. Inventiones Mathematicae 152 (2003) 37-55. | Zbl

[174] V. Milman and G. Schechman, Asymptotic theory of finite-dimensional normed spaces, Springer-Verlag, New York (1986). | MR

[175] B.K. Natarajan, Machine Learning: A Theoretical Approach, Morgan Kaufmann, San Mateo, CA (1991). | MR

[176] D. Panchenko, A note on Talagrand's concentration inequality. Electron. Comm. Probab. 6 (2001). | Zbl

[177] D. Panchenko, Some extensions of an inequality of Vapnik and Chervonenkis. Electron. Comm. Probab. 7 (2002). | MR | Zbl

[178] D. Panchenko, Symmetrization approach to concentration inequalities for empirical processes. Ann. Probab. 31 (2003) 2068-2081. | Zbl

[179] T. Poggio, S. Rifkin, S. Mukherjee and P. Niyogi, General conditions for predictivity in learning theory. Nature 428 (2004) 419-422.

[180] D. Pollard, Convergence of Stochastic Processes, Springer-Verlag, New York (1984). | MR | Zbl

[181] D. Pollard, Uniform ratio limit theorems for empirical processes. Scand. J. Statist. 22 (1995) 271-278. | Zbl

[182] W. Polonik, Measuring mass concentrations and estimating density contour clusters-an excess mass approach. Ann. Statist. 23(3) (1995) 855-881. | Zbl

[183] E. Rio, Inégalités de concentration pour les processus empiriques de classes de parties. Probab. Theory Related Fields 119 (2001) 163-175. | Zbl

[184] E. Rio, Une inegalité de Bennett pour les maxima de processus empiriques, in Colloque en l'honneur de J. Bretagnolle, D. Dacunha-Castelle et I. Ibragimov, Annales de l'Institut Henri Poincaré (2001). | Numdam | Zbl

[185] B.D. Ripley, Pattern Recognition and Neural Networks, Cambridge University Press (1996). | MR | Zbl

[186] W.H. Rogers and T.J. Wagner, A finite sample distribution-free performance bound for local discrimination rules. Ann. Statist. 6 (1978) 506-514. | Zbl

[187] M. Rudelson, R. Vershynin, Combinatorics of random processes and sections of convex bodies. Ann. Math, to appear (2004). | MR | Zbl

[188] N. Sauer, On the density of families of sets. J. Combin. Theory, Ser A 13 (1972) 145-147. | Zbl

[189] R.E. Schapire, The strength of weak learnability. Machine Learning 5 (1990) 197-227. | Zbl

[190] R.E. Schapire, Y. Freund, P. Bartlett and W.S. Lee, Boosting the margin: a new explanation for the effectiveness of voting methods. Ann. Statist. 26 (1998) 1651-1686. | Zbl

[191] B. Schölkopf and A. J. Smola, Learning with Kernels. MIT Press, Cambridge, MA (2002).

[192] D. Schuurmans, Characterizing rational versus exponential learning curves, in Computational Learning Theory: Second European Conference. EuroCOLT'95, Springer-Verlag (1995) 272-286.

[193] C. Scovel and I. Steinwart, Fast rates for support vector machines. Los Alamos National Laboratory Technical Report LA-UR 03-9117 (2003).

[194] M. Seeger, PAC-Bayesian generalisation error bounds for gaussian process classification. J. Machine Learning Res. 3 (2002) 233-269. | Zbl

[195] J. Shawe-Taylor, P.L. Bartlett, R.C. Williamson and M. Anthony, Structural risk minimization over data-dependent hierarchies. IEEE Trans. Inform. Theory 44 (1998) 1926-1940. | Zbl

[196] S. Shelah, A combinatorial problem: Stability and order for models and theories in infinity languages. Pacific J. Mathematics 41 (1972) 247-261. | Zbl

[197] G.R. Shorack and J. Wellner, Empirical Processes with Applications in Statistics. Wiley, New York (1986). | MR

[198] H.U. Simon, General lower bounds on the number of examples needed for learning probabilistic concepts, in Proc. of the Sixth Annual ACM Conference on Computational Learning Theory, Association for Computing Machinery, New York (1993) 402-412.

[199] A.J. Smola, P.L. Bartlett, B. Schölkopf and D. Schuurmans Eds, Advances in Large Margin Classifiers. MIT Press, Cambridge, MA (2000). | MR | Zbl

[200] A.J. Smola, B. Schölkopf and K.-R. Müller, The connection between regularization operators and support vector kernels. Neural Networks 11 (1998) 637-649.

[201] D.F. Specht, Probabilistic neural networks and the polynomial Adaline as complementary techniques for classification. IEEE Trans. Neural Networks 1 (1990) 111-121.

[202] J.M. Steele, Existence of submatrices with all possible columns. J. Combin. Theory, Ser. A 28 (1978) 84-88. | Zbl

[203] I. Steinwart, On the influence of the kernel on the consistency of support vector machines. J. Machine Learning Res. (2001) 67-93. | Zbl

[204] I. Steinwart, Consistency of support vector machines and other regularized kernel machines. IEEE Trans. Inform. Theory 51 (2005) 128-142.

[205] I. Steinwart, Support vector machines are universally consistent. J. Complexity 18 (2002) 768-791. | Zbl

[206] I. Steinwart, On the optimal parameter choice in ν-support vector machines. IEEE Trans. Pattern Anal. Machine Intelligence 25 (2003) 1274-1284.

[207] I. Steinwart, Sparseness of support vector machines. J. Machine Learning Res. 4 (2003) 1071-1105. | Zbl

[208] S.J. Szarek and M. Talagrand, On the convexified Sauer-Shelah theorem. J. Combin. Theory, Ser. B 69 (1997) 183-192. | Zbl

[209] M. Talagrand, The Glivenko-Cantelli problem. Ann. Probab. 15 (1987) 837-870. | Zbl

[210] M. Talagrand, Sharper bounds for Gaussian and empirical processes. Ann. Probab. 22 (1994) 28-76. | Zbl

[211] M. Talagrand, Concentration of measure and isoperimetric inequalities in product spaces. Publications Mathématiques de l'I.H.E.S. 81 (1995) 73-205. | Numdam | Zbl

[212] M. Talagrand, The Glivenko-Cantelli problem, ten years later. J. Theoret. Probab. 9 (1996) 371-384. | Zbl

[213] M. Talagrand, Majorizing measures: the generic chaining. Ann. Probab. 24 (1996) 1049-1103. (Special Invited Paper). | Zbl

[214] M. Talagrand, New concentration inequalities in product spaces. Inventiones Mathematicae 126 (1996) 505-563. | Zbl

[215] M. Talagrand, A new look at independence. Ann. Probab. 24 (1996) 1-34. (Special Invited Paper). | Zbl

[216] M. Talagrand, Vapnik-Chervonenkis type conditions and uniform Donsker classes of functions. Ann. Probab. 31 (2003) 1565-1582. | Zbl

[217] M. Talagrand, The Generic Chaining: Upper and Lower Bounds of Stochastic Processes. Springer-Verlag, New York (2005). | MR | Zbl

[218] A.B. Tsybakov, On nonparametric estimation of density level sets. Ann. Statist. 25 (1997) 948-969. | Zbl

[219] A.B. Tsybakov, Optimal aggregation of classifiers in statistical learning. Ann. Statist. 32 (2004) 135-166. | Zbl

[220] A.B. Tsybakov, Introduction à l'estimation non-paramétrique. Springer (2004). | Zbl

[221] A.B. Tsybakov and S. van de Geer, Square root penalty: adaptation to the margin in classification and in edge estimation. Ann. Statist., to appear (2005). | MR | Zbl

[222] S. van de Geer, A new approach to least-squares estimation, with applications. Ann. Statist. 15 (1987) 587-602. | Zbl

[223] S. van de Geer, Estimating a regression function. Ann. Statist. 18 (1990) 907-924. | Zbl

[224] S. van de Geer, Empirical Processes in M-Estimation. Cambridge University Press, Cambridge, UK (2000). | MR

[225] A.W. van der Vaart and J.A. Wellner, Weak Convergence and Empirical Processes. Springer-Verlag, New York (1996). | MR | Zbl

[226] V. Vapnik and A. Lerner, Pattern recognition using generalized portrait method. Automat. Remote Control 24 (1963) 774-780.

[227] V.N. Vapnik, Estimation of Dependencies Based on Empirical Data. Springer-Verlag, New York (1982). | MR | Zbl

[228] V.N. Vapnik, The Nature of Statistical Learning Theory. Springer-Verlag, New York (1995). | MR | Zbl

[229] V.N. Vapnik, Statistical Learning Theory. John Wiley, New York (1998). | MR | Zbl

[230] V.N. Vapnik and A.Ya. Chervonenkis, On the uniform convergence of relative frequencies of events to their probabilities. Theory Probab. Appl. 16 (1971) 264-280. | Zbl

[231] V.N. Vapnik and A.Ya. Chervonenkis, Theory of Pattern Recognition. Nauka, Moscow (1974). (in Russian); German translation: Theorie der Zeichenerkennung, Akademie Verlag, Berlin (1979). | MR

[232] V.N. Vapnik and A.Ya. Chervonenkis, Necessary and sufficient conditions for the uniform convergence of means to their expectations. Theory Probab. Appl. 26 (1981) 821-832. | Zbl

[233] M. Vidyasagar, A Theory of Learning and Generalization. Springer, New York (1997). | MR | Zbl

[234] V. Vu, On the infeasibility of training neural networks with small mean squared error. IEEE Trans. Inform. Theory 44 (1998) 2892-2900. | Zbl

[235] M. Wegkamp, Model selection in nonparametric regression. Ann. Statist. 31(1) (2003) 252-273. | Zbl

[236] R.S. Wenocur and R.M. Dudley, Some special Vapnik-Chervonenkis classes. Discrete Math. 33 (1981) 313-318. | Zbl

[237] Y. Yang, Minimax nonparametric classification. I. Rates of convergence. IEEE Trans. Inform. Theory 45(7) (1999) 2271-2284. | Zbl

[238] Y. Yang, Minimax nonparametric classification. II. Model selection for adaptation. IEEE Trans. Inform. Theory 45(7) (1999) 2285-2292. | Zbl

[239] Y. Yang, Adaptive estimation in pattern recognition by combining different procedures. Statistica Sinica 10 (2000) 1069-1089. | Zbl

[240] V.V. Yurinskii, Exponential bounds for large deviations. Theory Probab. Appl. 19 (1974) 154-155. | Zbl

[241] V.V. Yurinskii, Exponential inequalities for sums of random vectors. J. Multivariate Anal. 6 (1976) 473-499. | Zbl

[242] T. Zhang, Statistical behavior and consistency of classification methods based on convex risk minimization. Ann. Statist. 32 (2004) 56-85. | Zbl

[243] D.-X. Zhou, Capacity of reproducing kernel spaces in learning theory. IEEE Trans. Inform. Theory 49 (2003) 1743-1752.

  • Portier, François Nearest neighbor empirical processes, Bernoulli, Volume 31 (2025) no. 1 | DOI:10.3150/24-bej1729
  • Apidopoulos, Vassilis; Poggio, Tomaso; Rosasco, Lorenzo; Villa, Silvia Iterative regularization in classification via hinge loss diagonal descent, Inverse Problems, Volume 41 (2025) no. 3, p. 035010 | DOI:10.1088/1361-6420/adb06f
  • Zhiguang, Chu; Yingchen, Fan; Xiaolei, Zhang; Ruyan, Zhang; Xing, Zhang An Improved Multi-objective Particle Swarm Optimization Algorithm with Reduced Initial Search Space, PRICAI 2024: Trends in Artificial Intelligence, Volume 15284 (2025), p. 410 | DOI:10.1007/978-981-96-0125-7_34
  • Viviano, Davide Policy Targeting under Network Interference, Review of Economic Studies, Volume 92 (2025) no. 2, p. 1257 | DOI:10.1093/restud/rdae041
  • Hu, Ping; Bordignon, Virginia; Kayaalp, Mert; Sayed, Ali H. Non-asymptotic performance of social machine learning under limited data, Signal Processing, Volume 230 (2025), p. 109849 | DOI:10.1016/j.sigpro.2024.109849
  • Mulla, Yara; Keslassy, Isaac, 2024 20th International Conference on Network and Service Management (CNSM) (2024), p. 1 | DOI:10.23919/cnsm62983.2024.10814298
  • Hanneke, Steve; Larsen, Kasper Green; Zhivotovskiy, Nikita, 2024 IEEE 65th Annual Symposium on Foundations of Computer Science (FOCS) (2024), p. 1968 | DOI:10.1109/focs61266.2024.00118
  • Pellegrina, Leonardo; Vandin, Fabio SILVAN : Estimating Betweenness Centralities with Progressive Sampling and Non-uniform Rademacher Bounds, ACM Transactions on Knowledge Discovery from Data, Volume 18 (2024) no. 3, p. 1 | DOI:10.1145/3628601
  • Yan, Jianjun; Zhou, Junwei; Zhang, Jianrui; Zhao, Peng; Zhang, Ziang; Wang, Weize; Xuan, Fuzhen AP-GAN-DNN based creep fracture life prediction for 7050 aluminum alloy, Engineering Fracture Mechanics, Volume 303 (2024), p. 110096 | DOI:10.1016/j.engfracmech.2024.110096
  • Aghbalou, Anass; Bertail, Patrice; Portier, François; Sabourin, Anne Cross-validation on extreme regions, Extremes, Volume 27 (2024) no. 4, p. 505 | DOI:10.1007/s10687-024-00495-z
  • Cheng, Yi; Pan, Qiong; Li, Jie; Zhang, Nan; Yang, Yang; Wang, Jiawei; Gao, Ningbo Machine learning facilitated the modeling of plastics hydrothermal pretreatment toward constructing an on-ship marine litter-to-methanol plant, Frontiers of Chemical Science and Engineering, Volume 18 (2024) no. 10 | DOI:10.1007/s11705-024-2468-3
  • Le Thi, Hoai An; Luu, Hoang Phuc Hau; Dinh, Tao Pham Online Stochastic DCA With Applications to Principal Component Analysis, IEEE Transactions on Neural Networks and Learning Systems, Volume 35 (2024) no. 5, p. 7035 | DOI:10.1109/tnnls.2022.3213558
  • Giraud, Christophe Fondements mathématiques de l’apprentissage statistique, Journées mathématiques X-UPS (2024), p. 59 | DOI:10.5802/xups.2013-02
  • Zhang, Chenguang; Hou, Yuexian; Song, Dawei Label-Based Disentanglement Measure among Hidden Units of Deep Learning, Neural Processing Letters, Volume 56 (2024) no. 6 | DOI:10.1007/s11063-024-11708-8
  • Ma, Wanteng; Cao, Ying; Tsang, Danny H. K.; Xia, Dong Optimal Regularized Online Allocation by Adaptive Re-Solving, Operations Research (2024) | DOI:10.1287/opre.2022.0486
  • Feng, Kai; Hong, Han Statistical Inference of Optimal Allocations I: Regularities and their Implications, SSRN Electronic Journal (2024) | DOI:10.2139/ssrn.4372556
  • Tien Mai, The Misclassification Excess Risk Bounds for 1‐Bit Matrix Completion, Stat, Volume 13 (2024) no. 4 | DOI:10.1002/sta4.70003
  • Sell, Torben; Berrett, Thomas B.; Cannings, Timothy I. Nonparametric classification with missing data, The Annals of Statistics, Volume 52 (2024) no. 3 | DOI:10.1214/24-aos2389
  • van der Hagen, Liana; Agatz, Niels; Spliet, Remy; Visser, Thomas R.; Kok, Leendert Machine Learning–Based Feasibility Checks for Dynamic Time Slot Management, Transportation Science, Volume 58 (2024) no. 1, p. 94 | DOI:10.1287/trsc.2022.1183
  • Pellizzoni, Paolo; Vandin, Fabio, 2023 IEEE 39th International Conference on Data Engineering (ICDE) (2023), p. 2470 | DOI:10.1109/icde55515.2023.00190
  • Rangamani, Akshay; Rosasco, Lorenzo; Poggio, Tomaso For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability, Analysis and Applications, Volume 21 (2023) no. 01, p. 193 | DOI:10.1142/s0219530522400115
  • Masiha, Saeed; Gohari, Amin; Yassaee, Mohammad Hossein f-Divergences and Their Applications in Lossy Compression and Bounding Generalization Error, IEEE Transactions on Information Theory, Volume 69 (2023) no. 12, p. 7538 | DOI:10.1109/tit.2023.3268527
  • Bordignon, Virginia; Vlaski, Stefan; Matta, Vincenzo; Sayed, Ali H. Learning From Heterogeneous Data Based on Social Interactions Over Graphs, IEEE Transactions on Information Theory, Volume 69 (2023) no. 5, p. 3347 | DOI:10.1109/tit.2022.3232368
  • Jiang, Yangbangyan; Xu, Qianqian; Zhao, Yunrui; Yang, Zhiyong; Wen, Peisong; Cao, Xiaochun; Huang, Qingming Positive-Unlabeled Learning With Label Distribution Alignment, IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 45 (2023) no. 12, p. 15345 | DOI:10.1109/tpami.2023.3319431
  • Cui, Hugo; Loureiro, Bruno; Krzakala, Florent; Zdeborová, Lenka Error scaling laws for kernel classification under source and capacity conditions, Machine Learning: Science and Technology, Volume 4 (2023) no. 3, p. 035033 | DOI:10.1088/2632-2153/acf041
  • Garnier, Remy; Langhendries, Raphaël; Rynkiewicz, Joseph Hold-out estimates of prediction models for Markov processes, Statistics, Volume 57 (2023) no. 2, p. 458 | DOI:10.1080/02331888.2023.2183203
  • Safran, Itay; Eldan, Ronen; Shamir, Ohad Depth Separations in Neural Networks: What is Actually Being Separated?, Constructive Approximation, Volume 55 (2022) no. 1, p. 225 | DOI:10.1007/s00365-021-09532-7
  • de Lima, Alane M.; da Silva, Murilo V.G.; Vignatti, André L. Percolation centrality via Rademacher Complexity, Discrete Applied Mathematics, Volume 323 (2022), p. 201 | DOI:10.1016/j.dam.2021.07.023
  • Clémençon, Stephan; Laforgue, Pierre Statistical learning from biased training samples, Electronic Journal of Statistics, Volume 16 (2022) no. 2 | DOI:10.1214/22-ejs2084
  • Puchkin, Nikita; Zhivotovskiy, Nikita Exponential Savings in Agnostic Active Learning Through Abstention, IEEE Transactions on Information Theory, Volume 68 (2022) no. 7, p. 4651 | DOI:10.1109/tit.2022.3156592
  • Carrière, Mathieu; Michel, Bertrand Statistical analysis of Mapper for stochastic and multivariate filters, Journal of Applied and Computational Topology, Volume 6 (2022) no. 3, p. 331 | DOI:10.1007/s41468-022-00090-w
  • Tonon, Andrea; Vandin, Fabio gRosSo: mining statistically robust patterns from a sequence of datasets, Knowledge and Information Systems, Volume 64 (2022) no. 9, p. 2329 | DOI:10.1007/s10115-022-01689-2
  • Dubois-Taine, Benjamin; Vaswani, Sharan; Babanezhad, Reza; Schmidt, Mark; Lacoste-Julien, Simon SVRG meets AdaGrad: painless variance reduction, Machine Learning, Volume 111 (2022) no. 12, p. 4359 | DOI:10.1007/s10994-022-06265-x
  • Lu, K. L. How Can We Identify the Sparsity Structure Pattern of High-Dimensional Data: an Elementary Statistical Analysis to Interpretable Machine Learning, Mathematical Notes, Volume 112 (2022) no. 1-2, p. 223 | DOI:10.1134/s0001434622070264
  • Gonzalez-Lima, Maria D.; Ludeña, Carenne C. Using Locality-Sensitive Hashing for SVM Classification of Large Data Sets, Mathematics, Volume 10 (2022) no. 11, p. 1812 | DOI:10.3390/math10111812
  • Le Thi, Hoai An; Huynh, Van Ngai; Dinh, Tao Pham; Hau Luu, Hoang Phuc Stochastic Difference-of-Convex-Functions Algorithms for Nonconvex Programming, SIAM Journal on Optimization, Volume 32 (2022) no. 3, p. 2263 | DOI:10.1137/20m1385706
  • van der Hagen, Liana; Agatz, Niels A.H.; Spliet, Remy; Visser, Thomas; Kok, Adrianus Machine Learning-Based Feasability Checks for Dynamic Time Slot Management, SSRN Electronic Journal (2022) | DOI:10.2139/ssrn.4011237
  • Touvron, Hugo; Sablayrolles, Alexandre; Douze, Matthijs; Cord, Matthieu; Jegou, Herve, 2021 IEEE/CVF International Conference on Computer Vision (ICCV) (2021), p. 854 | DOI:10.1109/iccv48922.2021.00091
  • Turabieh, Hamza; Ben Abdessalem Karaa, Wahiba, 2021 International Conference of Women in Data Science at Taif University (WiDSTaif ) (2021), p. 1 | DOI:10.1109/widstaif52235.2021.9430233
  • DeVore, Ronald; Hanin, Boris; Petrova, Guergana Neural network approximation, Acta Numerica, Volume 30 (2021), p. 327 | DOI:10.1017/s0962492921000052
  • Ishkina, Sh. Kh.; Vorontsov, K. V. Sharpness Estimation of Combinatorial Generalization Ability Bounds for Threshold Decision Rules, Automation and Remote Control, Volume 82 (2021) no. 5, p. 863 | DOI:10.1134/s0005117921050106
  • Mbakop, Eric; Tabord-Meehan, Max Model Selection for Treatment Choice: Penalized Welfare Maximization, Econometrica, Volume 89 (2021) no. 2, p. 825 | DOI:10.3982/ecta16437
  • Clémençon, Stephan; Limnios, Myrto; Vayatis, Nicolas Concentration inequalities for two-sample rank processes with application to bipartite ranking, Electronic Journal of Statistics, Volume 15 (2021) no. 2 | DOI:10.1214/21-ejs1907
  • Nouri-Moghaddam, Babak; Ghazanfari, Mehdi; Fathian, Mohammad A novel multi-objective forest optimization algorithm for wrapper feature selection, Expert Systems with Applications, Volume 175 (2021), p. 114737 | DOI:10.1016/j.eswa.2021.114737
  • Bordignon, Virginia; Vlaski, Stefan; Matta, Vincenzo; Sayed, Ali H., ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2021), p. 5185 | DOI:10.1109/icassp39728.2021.9414126
  • Campi, Marta; Peters, Gareth W.; Azzaoui, Nourddine; Matsui, Tomoko Machine Learning Mitigants for Speech Based Cyber Risk, IEEE Access, Volume 9 (2021), p. 136831 | DOI:10.1109/access.2021.3117080
  • Abramovich, Felix; Grinshtein, Vadim; Levy, Tomer Multiclass Classification by Sparse Multinomial Logistic Regression, IEEE Transactions on Information Theory, Volume 67 (2021) no. 7, p. 4637 | DOI:10.1109/tit.2021.3075137
  • Bousquet, Olivier; Zhivotovskiy, Nikita Fast classification rates without standard margin assumptions, Information and Inference: A Journal of the IMA, Volume 10 (2021) no. 4, p. 1389 | DOI:10.1093/imaiai/iaab010
  • Paccolat, Jonas; Spigler, Stefano; Wyart, Matthieu How isotropic kernels perform on simple invariants, Machine Learning: Science and Technology, Volume 2 (2021) no. 2, p. 025020 | DOI:10.1088/2632-2153/abd485
  • Omar, E. Z. A refined denoising method for noisy phase-shifting interference fringe patterns, Optical and Quantum Electronics, Volume 53 (2021) no. 8 | DOI:10.1007/s11082-021-03106-4
  • Haghtalab, Nika; Jackson, Matthew O.; Procaccia, Ariel D. Belief polarization in a complex world: A learning theory perspective, Proceedings of the National Academy of Sciences, Volume 118 (2021) no. 19 | DOI:10.1073/pnas.2010144118
  • Popescu, Claudiu Marius Learning bounds for quantum circuits in the agnostic setting, Quantum Information Processing, Volume 20 (2021) no. 9 | DOI:10.1007/s11128-021-03225-7
  • Klochkov, Yegor; Kroshnin, Alexey; Zhivotovskiy, Nikita Robust k-means clustering for distributions with two moments, The Annals of Statistics, Volume 49 (2021) no. 4 | DOI:10.1214/20-aos2033
  • Chen, Le-Yu; Lee, Sokbae Binary classification with covariate selection through ℓ0-penalised empirical risk minimisation, The Econometrics Journal, Volume 24 (2021) no. 1, p. 103 | DOI:10.1093/ectj/utaa017
  • Cannings, Timothy I. Random projections: Data perturbation for classification problems, WIREs Computational Statistics, Volume 13 (2021) no. 1 | DOI:10.1002/wics.1499
  • Mo, Weibin; Liu, Yufeng Supervised Learning, Wiley StatsRef: Statistics Reference Online (2021), p. 1 | DOI:10.1002/9781118445112.stat08302
  • Tonon, Andrea; Vandin, Fabio, 2020 IEEE International Conference on Data Mining (ICDM) (2020), p. 551 | DOI:10.1109/icdm50108.2020.00064
  • Khuat, Thanh Tung; Chen, Fang; Gabrys, Bogdan, 2020 International Joint Conference on Neural Networks (IJCNN) (2020), p. 1 | DOI:10.1109/ijcnn48605.2020.9207534
  • Vacek, Thomas, 2020 International Joint Conference on Neural Networks (IJCNN) (2020), p. 1 | DOI:10.1109/ijcnn48605.2020.9207004
  • Mey, Alexander; Viering, Tom Julian; Loog, Marco A Distribution Dependent and Independent Complexity Analysis of Manifold Regularization, Advances in Intelligent Data Analysis XVIII, Volume 12080 (2020), p. 326 | DOI:10.1007/978-3-030-44584-3_26
  • Santoro, Diego; Tonon, Andrea; Vandin, Fabio Mining Sequential Patterns with VC-Dimension and Rademacher Complexity, Algorithms, Volume 13 (2020) no. 5, p. 123 | DOI:10.3390/a13050123
  • Omar, E. Z. Investigation and classification of fibre deformation using interferometric and machine learning techniques, Applied Physics B, Volume 126 (2020) no. 4 | DOI:10.1007/s00340-020-7399-1
  • Gadat, Sébastien; Gerchinovitz, Sébastien; Marteau, Clément Optimal functional supervised classification with separation condition, Bernoulli, Volume 26 (2020) no. 3 | DOI:10.3150/19-bej1170
  • Furmańczyk, Konrad; Rejchel, Wojciech Prediction and Variable Selection in High-Dimensional Misspecified Binary Classification, Entropy, Volume 22 (2020) no. 5, p. 543 | DOI:10.3390/e22050543
  • Bu, Yuheng; Zou, Shaofeng; Veeravalli, Venugopal V. Tightening Mutual Information-Based Bounds on Generalization Error, IEEE Journal on Selected Areas in Information Theory, Volume 1 (2020) no. 1, p. 121 | DOI:10.1109/jsait.2020.2991139
  • Kupavskii, Andrey; Zhivotovskiy, Nikita When are epsilon-nets small?, Journal of Computer and System Sciences, Volume 110 (2020), p. 22 | DOI:10.1016/j.jcss.2019.12.006
  • Zhao, Qiang; Karunamuni, Rohana J.; Wu, Jingjing An empirical classification procedure for nonparametric mixture models, Journal of the Korean Statistical Society, Volume 49 (2020) no. 3, p. 924 | DOI:10.1007/s42952-019-00043-7
  • Lecué, Guillaume; Lerasle, Matthieu; Mathieu, Timlothée Robust classification via MOM minimization, Machine Learning, Volume 109 (2020) no. 8, p. 1635 | DOI:10.1007/s10994-019-05863-6
  • Al‐Tahhan, F. E.; Fares, M. E.; Sakr, Ali A.; Aladle, Doaa A. Accurate automatic detection of acute lymphatic leukemia using a refined simple classification, Microscopy Research and Technique, Volume 83 (2020) no. 10, p. 1178 | DOI:10.1002/jemt.23509
  • Chinot, Geoffrey; Lecué, Guillaume; Lerasle, Matthieu Robust statistical learning with Lipschitz and convex loss functions, Probability Theory and Related Fields, Volume 176 (2020) no. 3-4, p. 897 | DOI:10.1007/s00440-019-00931-3
  • Campi, Marta; Peters, Gareth; Azzaoui, Nourddine Machine Learning Mitigants for Speech Based Cyber Risk, SSRN Electronic Journal (2020) | DOI:10.2139/ssrn.3643826
  • Cannings, Timothy I.; Berrett, Thomas B.; Samworth, Richard J. Local nearest neighbour classification with applications to semi-supervised learning, The Annals of Statistics, Volume 48 (2020) no. 3 | DOI:10.1214/19-aos1868
  • Turabieh, Hamza, 2019 2nd International Conference on new Trends in Computing Sciences (ICTCS) (2019), p. 1 | DOI:10.1109/ictcs.2019.8923093
  • Bu, Yuheng; Zou, Shaofeng; Veeravalli, Venugopal V., 2019 IEEE International Symposium on Information Theory (ISIT) (2019), p. 587 | DOI:10.1109/isit.2019.8849590
  • Cortes, Corinna; Greenberg, Spencer; Mohri, Mehryar Relative deviation learning bounds and generalization with unbounded loss functions, Annals of Mathematics and Artificial Intelligence, Volume 85 (2019) no. 1, p. 45 | DOI:10.1007/s10472-018-9613-y
  • Kalainathan, Diviyan; Goudet, Olivier; Sebag, Michèle; Guyon, Isabelle Discriminant Learning Machines, Cause Effect Pairs in Machine Learning (2019), p. 155 | DOI:10.1007/978-3-030-21810-2_4
  • Clémençon, Stephan; Bertail, Patrice; Chautru, Emilie; Papa, Guillaume Optimal survey schemes for stochastic gradient descent with applications to M-estimation, ESAIM: Probability and Statistics, Volume 23 (2019), p. 310 | DOI:10.1051/ps/2018021
  • Abramovich, Felix; Grinshtein, Vadim High-Dimensional Classification by Sparse Logistic Regression, IEEE Transactions on Information Theory, Volume 65 (2019) no. 5, p. 3068 | DOI:10.1109/tit.2018.2884963
  • Quemy, Alexandre Binary classification in unstructured space with hypergraph case-based reasoning, Information Systems, Volume 85 (2019), p. 92 | DOI:10.1016/j.is.2019.03.005
  • Zarrilli, Donato Bidding Renewable Energy in the Electricity Market, Integration of Low Carbon Technologies in Smart Grids (2019), p. 5 | DOI:10.1007/978-3-319-98358-5_2
  • Nguyen, Quang Van; De, Sandip; Lin, Junhong; Cevher, Volkan Chemical machine learning with kernels: The impact of loss functions, International Journal of Quantum Chemistry, Volume 119 (2019) no. 9 | DOI:10.1002/qua.25872
  • Abramovich, Felix; Pensky, Marianna Classification with many classes: Challenges and pluses, Journal of Multivariate Analysis, Volume 174 (2019), p. 104536 | DOI:10.1016/j.jmva.2019.104536
  • Cao, Weiguo; Pomeroy, Marc J.; Gao, Yongfeng; Barish, Matthew A.; Abbasi, Almas F.; Pickhardt, Perry J.; Liang, Zhengrong Multi-scale characterizations of colon polyps via computed tomographic colonography, Visual Computing for Industry, Biomedicine, and Art, Volume 2 (2019) no. 1 | DOI:10.1186/s42492-019-0032-7
  • Vera, Matias; Piantanida, Pablo; Vega, Leonardo Rey, 2018 IEEE International Symposium on Information Theory (ISIT) (2018), p. 1580 | DOI:10.1109/isit.2018.8437679
  • Riondato, Matteo; Upfal, Eli ABRA, ACM Transactions on Knowledge Discovery from Data, Volume 12 (2018) no. 5, p. 1 | DOI:10.1145/3208351
  • Liu, Youming; Zeng, Xiaochen Strong Lp convergence of wavelet deconvolution density estimators, Analysis and Applications, Volume 16 (2018) no. 02, p. 183 | DOI:10.1142/s0219530517500154
  • Knopov, P. S.; Norkin, V. I. Convergence Conditions for the Observed Mean Method in Stochastic Programming, Cybernetics and Systems Analysis, Volume 54 (2018) no. 1, p. 45 | DOI:10.1007/s10559-018-0006-3
  • Aamari, Eddie; Levrard, Clément Stability and Minimax Optimality of Tangential Delaunay Complexes for Manifold Reconstruction, Discrete Computational Geometry, Volume 59 (2018) no. 4, p. 923 | DOI:10.1007/s00454-017-9962-z
  • Guedj, Benjamin; Robbiano, Sylvain PAC-Bayesian high dimensional bipartite ranking, Journal of Statistical Planning and Inference, Volume 196 (2018), p. 70 | DOI:10.1016/j.jspi.2017.10.010
  • Cipollini, Francesca; Oneto, Luca; Coraddu, Andrea; Murphy, Alan John; Anguita, Davide Condition-Based Maintenance of Naval Propulsion Systems with supervised Data Analysis, Ocean Engineering, Volume 149 (2018), p. 268 | DOI:10.1016/j.oceaneng.2017.12.002
  • Zhivotovskiy, N.; Hanneke, S. Localization of VC classes: Beyond local Rademacher complexities, Theoretical Computer Science, Volume 742 (2018), p. 27 | DOI:10.1016/j.tcs.2017.12.029
  • Hanneke, Steve; Yang, Liu Testing piecewise functions, Theoretical Computer Science, Volume 745 (2018), p. 23 | DOI:10.1016/j.tcs.2018.05.019
  • Kozlovskaia, Nataliia; Zaytsev, Alexey, 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA) (2017), p. 908 | DOI:10.1109/icmla.2017.00-39
  • Bilbao, Imanol; Bilbao, Javier, 2017 Eighth International Conference on Intelligent Computing and Information Systems (ICICIS) (2017), p. 173 | DOI:10.1109/intelcis.2017.8260032
  • 曹, 颖 Research and Application of SVM Model Based on Privileged Information, Advances in Applied Mathematics, Volume 06 (2017) no. 09, p. 1248 | DOI:10.12677/aam.2017.69150
  • Zhang, Xinhua Regularization, Encyclopedia of Machine Learning and Data Mining (2017), p. 1083 | DOI:10.1007/978-1-4899-7687-1_718
  • Reid, Mark Generalization Bounds, Encyclopedia of Machine Learning and Data Mining (2017), p. 556 | DOI:10.1007/978-1-4899-7687-1_328
  • Popkov, Yuri; Volkovich, Zeev; Dubnov, Yuri; Avros, Renata; Ravve, Elena Entropy “2”-Soft Classification of Objects, Entropy, Volume 19 (2017) no. 4, p. 178 | DOI:10.3390/e19040178
  • Rastogi, Abhishake; Sampath, Sivananthan Optimal Rates for the Regularized Learning Algorithms under General Source Condition, Frontiers in Applied Mathematics and Statistics, Volume 3 (2017) | DOI:10.3389/fams.2017.00003
  • Bui, Nicola; Cesana, Matteo; Hosseini, S. Amir; Liao, Qi; Malanchini, Ilaria; Widmer, Joerg A Survey of Anticipatory Mobile Networking: Context-Based Classification, Prediction Methodologies, and Optimization Techniques, IEEE Communications Surveys Tutorials, Volume 19 (2017) no. 3, p. 1790 | DOI:10.1109/comst.2017.2694140
  • Gottlieb, Lee-Ad; Kontorovich, Aryeh; Krauthgamer, Robert Efficient Regression in Metric Spaces via Approximate Lipschitz Extension, IEEE Transactions on Information Theory, Volume 63 (2017) no. 8, p. 4838 | DOI:10.1109/tit.2017.2713820
  • Awasthi, Pranjal; Balcan, Maria Florina; Long, Philip M. The Power of Localization for Efficiently Learning Linear Separators with Noise, Journal of the ACM, Volume 63 (2017) no. 6, p. 1 | DOI:10.1145/3006384
  • de Sá, Alex G. C.; Pappa, Gisele L.; Freitas, Alex A., Proceedings of the Genetic and Evolutionary Computation Conference Companion (2017), p. 1125 | DOI:10.1145/3067695.3082053
  • Clémençon, Stephan; Bertail, Patrice; Chautru, Emilie Sampling and empirical risk minimization, Statistics, Volume 51 (2017) no. 1, p. 30 | DOI:10.1080/02331888.2016.1259810
  • Tolstikhin, I. O. Concentration Inequalities for Samples without Replacement, Theory of Probability Its Applications, Volume 61 (2017) no. 3, p. 462 | DOI:10.1137/s0040585x97t988277
  • Yang, Tao; Fu, Dongmei; Hao, Lian, 2016 55th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE) (2016), p. 1461 | DOI:10.1109/sice.2016.7749177
  • Popkov, Yu. S.; Dubnov, Yu. A.; Popkov, A. Yu., 2016 IEEE 8th International Conference on Intelligent Systems (IS) (2016), p. 27 | DOI:10.1109/is.2016.7737456
  • Shen, Xiang-Jun; Zhang, Wen-Chao; Cai, Wei; Benuw, Ben-Bright B.; Song, He-Ping; Zhu, Qian; Zha, Zheng-Jun Building Locally Discriminative Classifier Ensemble Through Classifier Fusion Among Nearest Neighbors, Advances in Multimedia Information Processing - PCM 2016, Volume 9916 (2016), p. 211 | DOI:10.1007/978-3-319-48890-5_21
  • Zhivotovskiy, Nikita; Hanneke, Steve Localization of VC Classes: Beyond Local Rademacher Complexities, Algorithmic Learning Theory, Volume 9925 (2016), p. 18 | DOI:10.1007/978-3-319-46379-7_2
  • Zhang, Xinhua Regularization, Encyclopedia of Machine Learning and Data Mining (2016), p. 1 | DOI:10.1007/978-1-4899-7502-7_718-1
  • Giannitrapani, Antonio; Paoletti, Simone; Vicino, Antonio; Zarrilli, Donato Bidding Wind Energy Exploiting Wind Speed Forecasts, IEEE Transactions on Power Systems, Volume 31 (2016) no. 4, p. 2647 | DOI:10.1109/tpwrs.2015.2477942
  • Wang, Weiguang; Liang, Yingbin; Xing, Eric P.; Shen, Lixin Nonparametric Decentralized Detection and Sparse Sensor Selection Via Weighted Kernel, IEEE Transactions on Signal Processing, Volume 64 (2016) no. 2, p. 306 | DOI:10.1109/tsp.2015.2474297
  • Riondato, Matteo; Upfal, Eli, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016), p. 1145 | DOI:10.1145/2939672.2939770
  • Gadat, Sébastien; Klein, Thierry; Marteau, Clément Classification in general finite dimensional spaces with the k-nearest neighbor rule, The Annals of Statistics, Volume 44 (2016) no. 3 | DOI:10.1214/15-aos1395
  • Tolstikhin, Ilya O Неравенства концентрации для выборок без возвращений, Теория вероятностей и ее применения, Volume 61 (2016) no. 3, p. 464 | DOI:10.4213/tvp5069
  • Tolstikhin, Ilya; Zhivotovskiy, Nikita; Blanchard, Gilles Permutational Rademacher Complexity, Algorithmic Learning Theory, Volume 9355 (2015), p. 209 | DOI:10.1007/978-3-319-24486-0_14
  • Balcan, Maria Florina; Feldman, Vitaly Statistical Active Learning Algorithms for Noise Tolerance and Differential Privacy, Algorithmica, Volume 72 (2015) no. 1, p. 282 | DOI:10.1007/s00453-014-9954-9
  • Loustau, Sébastien; Marteau, Clément Minimax fast rates for discriminant analysis with errors in variables, Bernoulli, Volume 21 (2015) no. 1 | DOI:10.3150/13-bej564
  • Steidl, Gabriele Supervised Learning by Support Vector Machines, Handbook of Mathematical Methods in Imaging (2015), p. 1393 | DOI:10.1007/978-1-4939-0790-8_22
  • Nicolae, Maria-Irina; Gaussier, Éric; Habrard, Amaury; Sebban, Marc Joint Semi-supervised Similarity Learning for Linear Classification, Machine Learning and Knowledge Discovery in Databases, Volume 9284 (2015), p. 594 | DOI:10.1007/978-3-319-23528-8_37
  • Steinwart, Ingo Measuring the Capacity of Sets of Functions in the Analysis of ERM, Measures of Complexity (2015), p. 217 | DOI:10.1007/978-3-319-21852-6_16
  • Riondato, Matteo; Upfal, Eli, Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2015), p. 1005 | DOI:10.1145/2783258.2783265
  • Riondato, Matteo; Upfal, Eli, Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2015), p. 2321 | DOI:10.1145/2783258.2789984
  • Chichignoud, Michaël; Loustau, Sébastien Bandwidth selection in kernel empirical risk minimization via the gradient, The Annals of Statistics, Volume 43 (2015) no. 4 | DOI:10.1214/15-aos1318
  • Brownlees, Christian; Joly, Emilien; Lugosi, Gábor Empirical risk minimization for heavy-tailed losses, The Annals of Statistics, Volume 43 (2015) no. 6 | DOI:10.1214/15-aos1350
  • Clemencon, Stephan; Bertail, Patrice; Chautru, Emilie, 2014 IEEE International Conference on Big Data (Big Data) (2014), p. 25 | DOI:10.1109/bigdata.2014.7004208
  • Riondato, Matteo; Upfal, Eli Efficient Discovery of Association Rules and Frequent Itemsets through Sampling with Tight Performance Guarantees, ACM Transactions on Knowledge Discovery from Data, Volume 8 (2014) no. 4, p. 1 | DOI:10.1145/2629586
  • Frey, A. I.; Tolstikhin, I. O. Cover-based combinatorial bounds on probability of overfitting, Doklady Mathematics, Volume 89 (2014) no. 2, p. 185 | DOI:10.1134/s1064562414020136
  • Steidl, Gabriele Supervised Learning by Support Vector Machines, Handbook of Mathematical Methods in Imaging (2014), p. 1 | DOI:10.1007/978-3-642-27795-5_22-5
  • Gey, Servane; Mary-Huard, Tristan Risk Bounds for Embedded Variable Selection in Classification Trees, IEEE Transactions on Information Theory, Volume 60 (2014) no. 3, p. 1688 | DOI:10.1109/tit.2014.2298874
  • Asor, Ohad; Duan, Hubert Haoyang; Kontorovich, Aryeh On the Additive Properties of the Fat-Shattering Dimension, IEEE Transactions on Neural Networks and Learning Systems, Volume 25 (2014) no. 12, p. 2309 | DOI:10.1109/tnnls.2014.2327065
  • Clémençon, Stéphan A statistical view of clustering performance through the theory of U-processes, Journal of Multivariate Analysis, Volume 124 (2014), p. 42 | DOI:10.1016/j.jmva.2013.10.001
  • Cuevas, Antonio A partial overview of the theory of statistics with functional data, Journal of Statistical Planning and Inference, Volume 147 (2014), p. 1 | DOI:10.1016/j.jspi.2013.04.002
  • Riondato, Matteo Sampling-Based Data Mining Algorithms: Modern Techniques and Case Studies, Machine Learning and Knowledge Discovery in Databases, Volume 8726 (2014), p. 516 | DOI:10.1007/978-3-662-44845-8_48
  • Awasthi, Pranjal; Balcan, Maria Florina; Long, Philip M., Proceedings of the forty-sixth annual ACM symposium on Theory of computing (2014), p. 449 | DOI:10.1145/2591796.2591839
  • Binev, Peter; Cohen, Albert; Dahmen, Wolfgang; DeVore, Ronald Classification algorithms using adaptive partitioning, The Annals of Statistics, Volume 42 (2014) no. 6 | DOI:10.1214/14-aos1234
  • Giannitrapani, Antonio; Paoletti, Simone; Vicino, Antonio; Zarrilli, Donato, 52nd IEEE Conference on Decision and Control (2013), p. 1013 | DOI:10.1109/cdc.2013.6760015
  • Robbiano, Sylvain Upper bounds and aggregation in bipartite ranking, Electronic Journal of Statistics, Volume 7 (2013) no. none | DOI:10.1214/13-ejs805
  • Giannitrapani, Antonio; Paoletti, Simone; Vicino, Antonio; Zarrilli, Donato, IEEE PES ISGT Europe 2013 (2013), p. 1 | DOI:10.1109/isgteurope.2013.6695355
  • Papantonopoulos, G.; Takahashi, K.; Bountis, T.; Loos, B.G. Aggressive Periodontitis Defined by Recursive Partitioning Analysis of Immunologic Factors, Journal of Periodontology, Volume 84 (2013) no. 7, p. 974 | DOI:10.1902/jop.2012.120444
  • Clémençon, Stéphan; Depecker, Marine; Vayatis, Nicolas An empirical comparison of learning algorithms for nonparametric scoring: the TreeRank algorithm and other methods, Pattern Analysis and Applications, Volume 16 (2013) no. 4, p. 475 | DOI:10.1007/s10044-012-0299-1
  • Ermoliev, Yuri M.; Norkin, Vladimir I. Sample Average Approximation Method for Compound Stochastic Optimization Problems, SIAM Journal on Optimization, Volume 23 (2013) no. 4, p. 2231 | DOI:10.1137/120863277
  • Gottlieb, Lee-Ad; Kontorovich, Aryeh; Krauthgamer, Robert Efficient Regression in Metric Spaces via Approximate Lipschitz Extension, Similarity-Based Pattern Recognition, Volume 7953 (2013), p. 43 | DOI:10.1007/978-3-642-39140-8_3
  • Zaamout, Khobaib; Zhang, John Z., 2012 8th International Conference on Natural Computation (2012), p. 256 | DOI:10.1109/icnc.2012.6234540
  • Rubinstein, Benjamin I. P.; Simma, Aleksandr On the Stability of Empirical Risk Minimization in the Presence of Multiple Risk Minimizers, IEEE Transactions on Information Theory, Volume 58 (2012) no. 7, p. 4160 | DOI:10.1109/tit.2012.2191681
  • Norkin, Vladimir I.; Wets, Roger J-B, International Workshop of "Stochastic Programming for Implementation and Advanced Applications" (2012), p. 94 | DOI:10.5200/stoprog.2012.17
  • Smith, James Edward; Tahir, Muhammad Atif; Sannen, Davy; Van Brussel, Hendrik Making Early Predictions of the Accuracy of Machine Learning Classifiers, Learning in Non-Stationary Environments (2012), p. 125 | DOI:10.1007/978-1-4419-8020-5_6
  • Xu, Huan; Mannor, Shie Robustness and generalization, Machine Learning, Volume 86 (2012) no. 3, p. 391 | DOI:10.1007/s10994-011-5268-1
  • Dogan, Ürün; Glasmachers, Tobias; Igel, Christian A Note on Extending Generalization Bounds for Binary Large-Margin Classifiers to Multiple Classes, Machine Learning and Knowledge Discovery in Databases, Volume 7523 (2012), p. 122 | DOI:10.1007/978-3-642-33460-3_13
  • Villa, S.; Rosasco, L.; Mosci, S.; Verri, A. Consistency of learning algorithms using Attouch–Wets convergence, Optimization, Volume 61 (2012) no. 3, p. 287 | DOI:10.1080/02331934.2010.511671
  • Silva, Jorge F.; Narayanan, Shrikanth S. On signal representations within the Bayes decision framework, Pattern Recognition, Volume 45 (2012) no. 5, p. 1853 | DOI:10.1016/j.patcog.2011.11.015
  • Gey, Servane Risk bounds for CART classifiers under a margin condition, Pattern Recognition, Volume 45 (2012) no. 9, p. 3523 | DOI:10.1016/j.patcog.2012.02.021
  • Rigollet, Philippe Kullback–Leibler aggregation and misspecified generalized linear models, The Annals of Statistics, Volume 40 (2012) no. 2 | DOI:10.1214/11-aos961
  • Samworth, Richard J. Optimal weighted nearest neighbour classifiers, The Annals of Statistics, Volume 40 (2012) no. 5 | DOI:10.1214/12-aos1049
  • Reid, Mark Generalization Bounds, Encyclopedia of Machine Learning (2011), p. 447 | DOI:10.1007/978-0-387-30164-8_328
  • Zhang, Xinhua Regularization, Encyclopedia of Machine Learning (2011), p. 845 | DOI:10.1007/978-0-387-30164-8_712
  • Steidl, Gabriele Supervised Learning by Support Vector Machines, Handbook of Mathematical Methods in Imaging (2011), p. 959 | DOI:10.1007/978-0-387-92920-0_22
  • Luxburg, Ulrike von; Schölkopf, Bernhard Statistical Learning Theory: Models, Concepts, and Results, Inductive Logic, Volume 10 (2011), p. 651 | DOI:10.1016/b978-0-444-52936-7.50016-1
  • Clémençon, Stéphan; Depecker, Marine; Vayatis, Nicolas Adaptive partitioning schemes for bipartite ranking, Machine Learning, Volume 83 (2011) no. 1, p. 31 | DOI:10.1007/s10994-010-5190-y
  • Cavallanti, Giovanni; Cesa-Bianchi, Nicolò; Gentile, Claudio Learning noisy linear classifiers via adaptive and selective sampling, Machine Learning, Volume 83 (2011) no. 1, p. 71 | DOI:10.1007/s10994-010-5191-x
  • Kochedykov, D. A combinatorial approach to hypothesis similarity in generalization bounds, Pattern Recognition and Image Analysis, Volume 21 (2011) no. 4, p. 616 | DOI:10.1134/s1054661811040109
  • Vorontsov, Konstantin; Ivahnenko, Andrey Tight Combinatorial Generalization Bounds for Threshold Conjunction Rules, Pattern Recognition and Machine Intelligence, Volume 6744 (2011), p. 66 | DOI:10.1007/978-3-642-21786-9_13
  • Boucheron, Stéphane; Massart, Pascal A high-dimensional Wilks phenomenon, Probability Theory and Related Fields, Volume 150 (2011) no. 3-4, p. 405 | DOI:10.1007/s00440-010-0278-7
  • Ceseracciu, E.; Reggiani, M.; Sawacha, Z.; Sartori, M.; Spolaor, F.; Cobelli, C.; Pagello, E., 19th International Symposium in Robot and Human Interactive Communication (2010), p. 165 | DOI:10.1109/roman.2010.5598664
  • Guermeur, Yann Sample Complexity of Classifiers Taking Values in ℝ^Q, Application to Multi-Class SVMs, Communications in Statistics - Theory and Methods, Volume 39 (2010) no. 3, p. 543 | DOI:10.1080/03610920903140288
  • Autin, F.; Le Pennec, E.; Loubes, J. M.; Rivoirard, V. Maxisets for Model Selection, Constructive Approximation, Volume 31 (2010) no. 2, p. 195 | DOI:10.1007/s00365-009-9062-2
  • Minh, Ha Quang Some Properties of Gaussian Reproducing Kernel Hilbert Spaces and Their Implications for Function Approximation and Learning Theory, Constructive Approximation, Volume 32 (2010) no. 2, p. 307 | DOI:10.1007/s00365-009-9080-0
  • Clémençon, Stéphan; Vayatis, Nicolas Overlaying Classifiers: A Practical Approach to Optimal Scoring, Constructive Approximation, Volume 32 (2010) no. 3, p. 619 | DOI:10.1007/s00365-010-9084-9
  • Girard, Robin Plugin procedure in segmentation and application to hyperspectral image segmentation, Electronic Journal of Statistics, Volume 4 (2010) no. none | DOI:10.1214/10-ejs567
  • De Vito, E.; Pereverzyev, S.; Rosasco, L. Adaptive Kernel Methods Using the Balancing Principle, Foundations of Computational Mathematics, Volume 10 (2010) no. 4, p. 455 | DOI:10.1007/s10208-010-9064-2
  • Balcan, Maria-Florina; Blum, Avrim A discriminative model for semi-supervised learning, Journal of the ACM, Volume 57 (2010) no. 3, p. 1 | DOI:10.1145/1706591.1706599
  • Kochedykov, D. A. Combinatorial shell bounds for generalization ability, Pattern Recognition and Image Analysis, Volume 20 (2010) no. 4, p. 459 | DOI:10.1134/s1054661810040061
  • Norkin, Vladimir I.; Keyzer, Michiel A. On Convergence of Kernel Learning Estimators, SIAM Journal on Optimization, Volume 20 (2010) no. 3, p. 1205 | DOI:10.1137/070696817
  • Arlot, Sylvain; Celisse, Alain A survey of cross-validation procedures for model selection, Statistics Surveys, Volume 4 (2010) no. none | DOI:10.1214/09-ss054
  • Anguita, Davide; Ghio, Alessandro; Greco, Noemi; Oneto, Luca; Ridella, Sandro, The 2010 International Joint Conference on Neural Networks (IJCNN) (2010), p. 1 | DOI:10.1109/ijcnn.2010.5596450
  • Huang, Dayu; Unnikrishnan, Jayakrishnan; Meyn, Sean; Veeravalli, Venugopal; Surana, Amit, 2009 IEEE Information Theory Workshop on Networking and Information Theory (2009), p. 62 | DOI:10.1109/itwnit.2009.5158542
  • Clémençon, Stéphan; Vayatis, Nicolas Adaptive Estimation of the Optimal ROC Curve and a Bipartite Ranking Algorithm, Algorithmic Learning Theory, Volume 5809 (2009), p. 216 | DOI:10.1007/978-3-642-04414-4_20
  • Norkin, V. I.; Keyzer, M. A. Efficiency of classification methods based on empirical risk minimization, Cybernetics and Systems Analysis, Volume 45 (2009) no. 5, p. 750 | DOI:10.1007/s10559-009-9153-x
  • Leeb, Hannes; Pötscher, Benedikt M. Model Selection, Handbook of Financial Time Series (2009), p. 889 | DOI:10.1007/978-3-540-71297-8_39
  • Balcan, Maria-Florina; Beygelzimer, Alina; Langford, John Agnostic active learning, Journal of Computer and System Sciences, Volume 75 (2009) no. 1, p. 78 | DOI:10.1016/j.jcss.2008.07.003
  • Vakulenko, S.; Grigoriev, D. Instability, complexity, and evolution, Journal of Mathematical Sciences, Volume 158 (2009) no. 6, p. 787 | DOI:10.1007/s10958-009-9412-4
  • Audibert, Jean-Yves Fast learning rates in statistical inference through aggregation, The Annals of Statistics, Volume 37 (2009) no. 4 | DOI:10.1214/08-aos623
  • Diehl, Christopher P.; Llorens, Ashley J., 2008 IEEE Workshop on Machine Learning for Signal Processing (2008), p. 468 | DOI:10.1109/mlsp.2008.4685525
  • Gschloessl, Bernhard; Guermeur, Yann; Cock, J Mark HECTAR: A method to predict subcellular targeting in heterokonts, BMC Bioinformatics, Volume 9 (2008) no. 1 | DOI:10.1186/1471-2105-9-393
  • Sergienko, I. V.; Gupal, A. M.; Vagis, A. A. Bayesian approach, theory of empirical risk minimization. Comparative analysis, Cybernetics and Systems Analysis, Volume 44 (2008) no. 6, p. 822 | DOI:10.1007/s10559-008-9058-0
  • Lecué, Guillaume Classification with minimax fast rates for classes of Bayes rules with sparse representation, Electronic Journal of Statistics, Volume 2 (2008) no. none | DOI:10.1214/07-ejs015
  • Mendelson, Shahar Lower Bounds for the Empirical Minimization Algorithm, IEEE Transactions on Information Theory, Volume 54 (2008) no. 8, p. 3797 | DOI:10.1109/tit.2008.926323
  • Mendelson, Shahar Obtaining fast error rates in nonconvex situations, Journal of Complexity, Volume 24 (2008) no. 3, p. 380 | DOI:10.1016/j.jco.2007.09.001
  • Balcan, Maria-Florina; Blum, Avrim; Hartline, Jason D.; Mansour, Yishay Reducing mechanism design to algorithm design via machine learning, Journal of Computer and System Sciences, Volume 74 (2008) no. 8, p. 1245 | DOI:10.1016/j.jcss.2007.08.002
  • Alquier, P. PAC-Bayesian bounds for randomized empirical risk minimizers, Mathematical Methods of Statistics, Volume 17 (2008) no. 4, p. 279 | DOI:10.3103/s1066530708040017
  • Balcan, Maria-Florina; Blum, Avrim; Vempala, Santosh, Proceedings of the fortieth annual ACM symposium on Theory of computing (2008), p. 671 | DOI:10.1145/1374376.1374474
  • Kontorovich, Leonid (Aryeh) Constructing processes with prescribed mixing coefficients, Statistics &amp; Probability Letters, Volume 78 (2008) no. 17, p. 2910 | DOI:10.1016/j.spl.2008.04.016
  • Clémençon, Stéphan; Lugosi, Gábor; Vayatis, Nicolas Ranking and Empirical Minimization of U-statistics, The Annals of Statistics, Volume 36 (2008) no. 2 | DOI:10.1214/009052607000000910
  • Hofmann, Thomas; Schölkopf, Bernhard; Smola, Alexander J. Kernel methods in machine learning, The Annals of Statistics, Volume 36 (2008) no. 3 | DOI:10.1214/009053607000000677
  • Juditsky, A.; Rigollet, P.; Tsybakov, A. B. Learning by mirror averaging, The Annals of Statistics, Volume 36 (2008) no. 5 | DOI:10.1214/07-aos546
  • Lecué, Guillaume Optimal rates of aggregation in classification under low noise assumption, Bernoulli, Volume 13 (2007) no. 4 | DOI:10.3150/07-bej6044
  • Wu, Qiang; Ying, Yiming; Zhou, Ding-Xuan Multi-kernel regularized classifiers, Journal of Complexity, Volume 23 (2007) no. 1, p. 108 | DOI:10.1016/j.jco.2006.06.007
  • Bauer, Frank; Pereverzev, Sergei; Rosasco, Lorenzo On regularization algorithms in learning theory, Journal of Complexity, Volume 23 (2007) no. 1, p. 52 | DOI:10.1016/j.jco.2006.07.001
  • Lecué, Guillaume Suboptimality of Penalized Empirical Risk Minimization in Classification, Learning Theory, Volume 4539 (2007), p. 142 | DOI:10.1007/978-3-540-72927-3_12
  • Bousquet, Olivier; Elisseeff, André Guest editorial: Learning theory, Machine Learning, Volume 66 (2007) no. 2-3, p. 115 | DOI:10.1007/s10994-007-0753-2
  • Lounici, K. Generalized mirror averaging and D-convex aggregation, Mathematical Methods of Statistics, Volume 16 (2007) no. 3, p. 246 | DOI:10.3103/s1066530707030040
  • Audibert, Jean-Yves; Tsybakov, Alexandre B. Fast learning rates for plug-in classifiers, The Annals of Statistics, Volume 35 (2007) no. 2 | DOI:10.1214/009053606000001217
  • Lecué, Guillaume Simultaneous adaptation to the margin and to complexity in classification, The Annals of Statistics, Volume 35 (2007) no. 4 | DOI:10.1214/009053607000000055
  • Abraham, C.; Biau, G.; Cadre, B. On the Kernel Rule for Function Classification, Annals of the Institute of Statistical Mathematics, Volume 58 (2006) no. 3, p. 619 | DOI:10.1007/s10463-006-0032-1
  • Herbei, Radu; Wegkamp, Marten H. Classification with reject option, Canadian Journal of Statistics, Volume 34 (2006) no. 4, p. 709 | DOI:10.1002/cjs.5550340410
  • Lecué, Guillaume Optimal Oracle Inequality for Aggregation of Classifiers Under Low Noise Condition, Learning Theory, Volume 4005 (2006), p. 364 | DOI:10.1007/11776420_28
  • Fromont, Magalie; Tuleau, Christine Functional Classification with Margin Conditions, Learning Theory, Volume 4005 (2006), p. 94 | DOI:10.1007/11776420_10
  • Peski, Marcin Categorization, SSRN Electronic Journal (2006) | DOI:10.2139/ssrn.884232
  • Biau, Gérard; Bleakley, Kevin Statistical inference on graphs, Statistics &amp; Decisions, Volume 24 (2006) no. 2, p. 209 | DOI:10.1524/stnd.2006.24.2.209
  • Vaart, Aad W. van der; Dudoit, Sandrine; Laan, Mark J. van der Oracle inequalities for multi-fold cross validation, Statistics &amp; Decisions, Volume 24 (2006) no. 3, p. 351 | DOI:10.1524/stnd.2006.24.3.351
  • Biau, G.; Bunea, F.; Wegkamp, M.H. Functional Classification in Hilbert Spaces, IEEE Transactions on Information Theory, Volume 51 (2005) no. 6, p. 2163 | DOI:10.1109/tit.2005.847705
  • Clémençon, Stéphan; Lugosi, Gábor; Vayatis, Nicolas Ranking and Scoring Using Empirical Risk Minimization, Learning Theory, Volume 3559 (2005), p. 1 | DOI:10.1007/11503415_1
  • Balcan, Maria-Florina; Blum, Avrim A PAC-Style Model for Learning from Labeled and Unlabeled Data, Learning Theory, Volume 3559 (2005), p. 111 | DOI:10.1007/11503415_8
  • Takigawa, Ichigaku; Kudo, Mineichi; Nakamura, Atsuyoshi The Convex Subclass Method: Combinatorial Classifier Based on a Family of Convex Sets, Machine Learning and Data Mining in Pattern Recognition, Volume 3587 (2005), p. 90 | DOI:10.1007/11510888_10

Cited by 214 documents. Source: Crossref