DATE:      Tuesday, Apr. 3, 2007
TIME:      2:30 pm
PLACE:     CBY A707
TITLE:     Which Supervised Learning Method Works Best for What? An Empirical Comparison of Learning Methods and Metrics++
PRESENTER: Rich Caruana, Cornell University
ABSTRACT:
Decision trees are intelligible, but do they perform well enough that
you should use them? Have SVMs replaced neural nets, or are neural
nets still best for regression, and SVMs best for classification?
Boosting maximizes margins much as SVMs do, but can boosting compete
with SVMs? And if it does compete, is it better to boost weak models,
as theory might suggest, or to boost stronger models? Bagging is
simpler than boosting -- how well does bagging stack up against
boosting? Breiman said Random Forests are better than bagging and as
good as boosting. Was he right? And what about old friends like
logistic regression, KNN, and naive Bayes? Should they be relegated
to the history books, or do they still fill important niches?
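The kind of head-to-head comparison the talk asks about can be sketched in a few lines. The harness below is an illustrative sketch only, assuming scikit-learn, a synthetic dataset, and default model settings; it does not reproduce the study's actual protocol, datasets, or metric suite.

# Illustrative sketch: fit several of the learners named in the abstract
# and score each on two common metrics. Dataset and settings are
# placeholder assumptions, not the talk's actual experimental setup.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic binary classification problem standing in for a real dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "decision tree":       DecisionTreeClassifier(random_state=0),
    "SVM":                 SVC(probability=True, random_state=0),
    "neural net":          MLPClassifier(max_iter=1000, random_state=0),
    "boosted stumps":      AdaBoostClassifier(random_state=0),  # default base learner is a stump
    "bagged trees":        BaggingClassifier(random_state=0),   # default base learner is a tree
    "random forest":       RandomForestClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "KNN":                 KNeighborsClassifier(),
    "naive Bayes":         GaussianNB(),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name:20s} accuracy={acc:.3f} AUC={auc:.3f}")

Each learner here is fit once with default settings and scored on accuracy and AUC; as the title's "Metrics++" suggests, the talk's comparison ranges over more methods, metrics, and tuning than a sketch like this can show.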