DATE: | Tuesday, Feb. 07, 2006 |
TIME: | 2:30 pm |
PLACE: | Council Room (SITE 5-084) |
TITLE: | A framework for measuring differences between classifiers |
PRESENTER: | William Elazmeh, University of Ottawa |
ABSTRACT:
Measuring classifier performance means examining its predictions against the class labels; applying an evaluation metric to the resulting confusion matrix produces a representation of that performance. We propose a general framework for classifier evaluation with respect to a property measured by a test that produces a score. Developed by a biostatistician for medical studies, Tango's test produces 95%-confidence intervals for the difference in paired proportions. We use Tango's test to identify confident segments or points along an ROC curve, and we propose to compare the observed difference in error proportions of a classification against the area under these confident segments. The objective is to determine, with confidence, how different classifier predictions are from class labels. The proposed method presents the trade-off between confidence and the average difference in classification error. Our experimental results show that the proposed evaluation measure is more reliable than general ROC and AUC methods. |
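To make the quantity being bounded concrete: on paired data (predictions and labels for the same instances), the difference between the proportion of predicted positives and the proportion of actual positives depends only on the discordant counts. The sketch below computes that difference and a simple 95% Wald confidence interval for it. This is not Tango's score interval from the talk (which inverts a score test and behaves better for small samples); it is a simpler stand-in to illustrate the idea, and the example data are hypothetical.

```python
import math

def paired_diff_ci(preds, labels, z=1.96):
    """Point estimate and 95% Wald CI for the difference between the
    proportion of predicted positives and the proportion of labelled
    positives, on paired binary data.

    Note: a simple Wald interval, not Tango's score interval; Tango's
    test is preferable for small samples but is more involved.
    """
    n = len(preds)
    # discordant counts:
    #   b = predicted positive but labelled negative
    #   c = predicted negative but labelled positive
    b = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    c = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    d = (b - c) / n  # difference in the two proportions
    # standard error for a difference of paired proportions
    se = math.sqrt(max(b + c - (b - c) ** 2 / n, 0.0)) / n
    return d, (d - z * se, d + z * se)

# hypothetical predictions and labels for ten instances
preds  = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
labels = [1, 0, 0, 1, 1, 1, 0, 1, 1, 0]
d, (lo, hi) = paired_diff_ci(preds, labels)
```

Here b = c = 2, so the estimated difference is 0 with interval (-0.392, 0.392): the data are consistent with no difference, but the interval width shows how little confidence ten instances buy. The talk's framework applies this kind of interval pointwise along an ROC curve to isolate the segments where a difference can be asserted with confidence.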