Zhang T. (2005). Learning bounds for kernel regression using effective data dimensionality. Neural Computation, 17. [PubMed]

References and models cited by this paper

Bartlett P, Bousquet O, Mendelson S. (2002). Localized Rademacher complexity. Proceedings of the Annual Conference on Computational Learning Theory.

Bartlett P, Williamson R, Lee W. (1998). The importance of convexity in learning with squared loss. IEEE Trans Inform Theory, 44.

Mendelson S. (2002). Improving the sample complexity using global data. IEEE Trans Inform Theory, 48.

Mendelson S. (2003). On the performance of kernel classes. JMLR, 4.

Schölkopf B, Smola A, Williamson RC. (2001). Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators. IEEE Trans Inform Theory, 47.

Shawe-Taylor J, Williamson RC, Bartlett PL, Guo Y. (2002). Covering numbers for support vector machines. IEEE Trans Inform Theory, 48.

Smale S, Cucker F. (2002). On the mathematical foundations of learning. Bull Am Math Soc, 39.

Stone CJ. (1982). Optimal global rates of convergence for nonparametric regression. Ann Stat, 10.

Wahba G. (1990). Spline models for observational data.

Yurinsky V. (1995). Sums and Gaussian vectors.

Zhang T. (2002). Covering number bounds of certain regularized linear function classes. J Mach Learn Res, 2.

Zhang T. (2003). Leave-one-out bounds for kernel methods. Neural Comput, 15.

Zhang T. (2003). Effective dimension and generalization of kernel learning. Neural Information Processing Systems, 15.

van de Geer S. (2000). Empirical processes in M-estimation.

References and models that cite this paper