Jiang W. (2004). Boosting with Noisy Data: Some Views from Statistical Theory. Neural Comput. 16

References and models cited by this paper

Avnimelech R, Intrator N. (1999). Boosting regression estimators. Neural Computation. 11

Bartlett P, Baxter J, Mason L, Frean M. (1999). Boosting algorithms as gradient descent in function space. Tech. Rep.

Bartlett P, Freund Y, Schapire R, Lee W. (1998). Boosting the margin: A new explanation for the effectiveness of voting methods. Ann Stat. 26

Breiman L. (1996). Bagging predictors. Mach Learn. 24

Breiman L. (1997). Prediction games and arcing classifiers. Tech. Rep. No. 504.

Breiman L. (1997). Arcing the edge. Tech. Rep. No. 486.

Breiman L. (1998). Arcing classifiers (with discussion). Ann Stat. 26

Breiman L. (1999). Using adaptive bagging to debias regressions. Tech. Rep. No. 547.

Breiman L. (2000). Some infinity theory for predictor ensembles. Tech. Rep. No. 579.

Devroye L, Gyorfi L, Lugosi G. (1996). A probabilistic theory of pattern recognition.

Efron B, Tibshirani R, Hastie T, Johnstone I. (2002). Least angle regression. Tech. Rep.

Freund Y. (1995). Boosting a weak learning algorithm by majority. Information and Computation. 121

Freund Y, Schapire R. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. J Comput Sys Sci. 55

Freund Y, Schapire RE. (1996). Game theory, on-line prediction and boosting. Proceedings of the Ninth Annual Conference on Computational Learning Theory.

Friedman J. (2001). Greedy function approximation: A gradient boosting machine. Ann Stat. 29

Friedman JH. (2002). Stochastic gradient boosting. Computational Statistics and Data Analysis. 38

Jiang W. (1999). On weak base hypotheses and their implications for boosting regression and classification. Tech. Rep. (Available on-line: http://neyman.stats.nwu.edu/jiang/boost/boost.largetime2.ps).

Jiang W. (2000). Some results on weakly accurate base learners for boosting regression and classification. Lecture Notes in Computer Science: Multiple Classifier Systems. (Available on-line: http://neyman.stats.nwu.edu/jiang/boost/boost.mcs.ps).

Jiang W. (2000). Does boosting overfit: Views from an exact solution. Tech. Rep. (Available on-line: http://neyman.stats.nwu.edu/jiang/boost/boost.onedim.ps).

Jiang W. (2000). Is regularization unnecessary for boosting? Proceedings of the Eighth International Workshop on Artificial Intelligence and Statistics. (Available on-line: http://neyman.stats.nwu.edu/jiang/boost/boost.flor.ps).

Jiang W. (2004). Process consistency for AdaBoost. Ann Stat. 32

Jordan MI, Bartlett PL, McAuliffe JD. (2003). Convexity, classification, and risk bounds. Unpublished manuscript.

Koltchinskii VI, Panchenko D. (2002). Empirical margin distributions and bounding the generalization error of combined classifiers. Ann Stat. 30

Lin Y. (2001). A note on margin-based loss functions in classification. Tech. Rep. No. 1044r.

Lugosi G, Blanchard G, Vayatis N. (2003). On the rate of convergence of regularized boosting methods. Unpublished manuscript. (Available on-line: http://www.proba.jussieu.fr/pageperso/vayatis/download/statboost.pdf).

Lugosi G, Vayatis N. (2002). A consistent strategy for boosting algorithms. COLT 2002: Lecture Notes in Artificial Intelligence. 2375

Meir R, Mannor S. (2002). On the existence of linear weak learners and applications. Machine Learning. 48

Ridgeway G, Madigan D, Richardson T. (1999). Boosting methodology for regression problems. Proc. of the 7th International Workshop on Artificial Intelligence and Statistics.

Schapire R. (1999). Theoretical view of boosting. Proceedings of the 4th European Conference on Computational Learning Theory.

Schuurmans D, Grove AJ. (1998). Boosting in the limit: Maximizing the margin of learned ensembles. Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98).

Tibshirani R, Hastie T, Friedman J. (2000). Additive logistic regression: A statistical view of boosting. Ann Stat. 28

Wyner A, Krieger A, Long C. (2001). Boosting noisy data. Proceedings of the Eighteenth International Conference on Machine Learning.

Yang Y. (1999). Minimax nonparametric classification-Part I: Rates of convergence. IEEE Trans Info Theory. 45

Yu B, Buhlmann P. (2002). Analyzing bagging. Ann Stat. 30

Yu B, Buhlmann P. (2004). Boosting with the L2-loss: Regression and classification. J Am Stat Assoc.

Zhang T. (2004). Statistical behavior and consistency of classification methods based on convex risk minimization. Ann Stat. 32

Zhang T, Meir R, Mannor S. (2002). The consistency of greedy algorithms for classification. COLT 2002: Lecture Notes in Artificial Intelligence. 2375

Zhang T, Yu B. (2003). Boosting with early stopping: Convergence and consistency. Tech. Rep. No. 635.

Zhang Z, Mallat SG. (1993). Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing. 41

Zhu J, Hastie T, Rosset S. (2002). Boosting as a regularized path to a maximum margin classifier. Tech. Rep.