
## References and models cited by this paper

Anthony M, Bartlett PL (1999) Neural Network Learning: Theoretical Foundations.

Aronszajn N (1950) Theory of reproducing kernels. Transactions of the American Mathematical Society 68:337-404.

Barron AR (1990) Complexity regularization with applications to artificial neural networks. In: Nonparametric Functional Estimation, Roussas G, ed. pp. 561.

Bartlett PL (1998) The sample complexity of pattern classification with neural networks: The size of the weights is more important than the size of the network. IEEE Trans Inform Theory 44:525-536.

Bartlett PL, Jordan MI, McAuliffe JD (2003) Convexity, classification, and risk bounds. Unpublished manuscript.

Blanchard G, Bousquet O, Massart P (2004) Statistical performance of support vector machines. Unpublished manuscript.

Boser BE, Guyon I, Vapnik V (1992) A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory 5:144-152.

Bradley PS, Mangasarian OL (2000) Massive data discrimination via linear support vector machines. Optimization Methods and Software 13:1-10.

Chen DR, Wu Q, Ying Y, Zhou DX (2004) Support vector machine soft margin classifiers: Error analysis. J Mach Learn Res 5:1143-1175.

Cristianini N, Shawe-Taylor J (2000) An Introduction to Support Vector Machines.

Devroye L, Gyorfi L, Lugosi G (1996) A Probabilistic Theory of Pattern Recognition.

Evgeniou T, Pontil M, Poggio T (2000) Regularization networks and support vector machines. Adv Comp Math 13:1-50.

Lugosi G, Vayatis N (2004) On the Bayes-risk consistency of regularized boosting methods. Ann Stat 32:30-55.

Mendelson S (2002) Improving the sample complexity using global data. IEEE Trans Inform Theory 48:1977-1991.

Mukherjee S, Rifkin R, Poggio T (2002) Regression and classification with regularization. In: Nonlinear Estimation and Classification, Denison DD, Hansen MH, Holmes CC, Mallick B, Yu B, eds. pp. 107.

Niyogi P (1998) The Informational Complexity of Learning.

Niyogi P, Girosi F (1996) On the relationship between generalization error, hypothesis complexity, and sample complexity for radial basis functions. Neural Comput 8:819-842.

Pedroso JP, Murata N (2001) Support vector machines with different norms: Motivation, formulations and results. Pattern Recognition Letters 22:1263-1272.

Rosasco L, De Vito E, Caponnetto A, Piana M, Verri A (2004) Are loss functions all the same? Neural Comput 16:1063-1076.

Smale S, Zhou DX (2004) Shannon sampling and function reconstruction from point values. Bull Amer Math Soc 41:279-305.

Steinwart I, Scovel C (2005) Fast rates for support vector machines. In: Learning Theory, Auer P, Meir R, eds. pp. 279.

van der Vaart AW, Wellner JA (1996) Weak Convergence and Empirical Processes.

Vapnik V (1998) Statistical Learning Theory.

Wahba G (1990) Spline Models for Observational Data.

Wu Q, Ying Y, Zhou D (2007) Multi-kernel regularized classifiers. Journal of Complexity 23:108-134.

Wu Q, Zhou DX (2004) Analysis of support vector machine classification. Manuscript submitted for publication.

Zhang T (2002) Covering number bounds of certain regularized linear function classes. J Mach Learn Res 2:527-550.

Zhang T (2004) Statistical behavior and consistency of classification methods based on convex risk minimization. Ann Stat 32:56-85.

Zhou DX (2003) Capacity of reproducing kernel spaces in learning theory. IEEE Trans Inform Theory 49:1743-1752.