DATE: Thursday, March 19, 2009
TIME: 2:45 pm
PLACE: Council Room (SITE 5-084)
TITLE: Sensibility Analysis of Evaluating Learning Methods
PRESENTER: William Klement
University of Ottawa
ABSTRACT:

When evaluating the performance of machine learning algorithms, three components are prime candidates for inspection: the data, the learning algorithm, and the evaluation method being used. The error of a learning method is a composite of bias and variance. The variance reflects how well the data represent the concept being learned, while the bias is related to how well the learning algorithm fits the concept. Several machine learning methods are able to reduce the variance; reducing the bias is the essence of developing new machine learning methods.
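
For reference, the informal statement above corresponds to the standard bias-variance decomposition of expected squared error (a sketch in the usual notation, not taken from the talk), with f the target concept, \hat{f}_D the model learned from a training sample D, and an additive noise term when the observed labels are noisy:

\[
  \mathbb{E}_{D}\!\left[\big(\hat{f}_{D}(x) - f(x)\big)^{2}\right]
  = \underbrace{\big(\mathbb{E}_{D}[\hat{f}_{D}(x)] - f(x)\big)^{2}}_{\text{bias}^{2}}
  + \underbrace{\mathbb{E}_{D}\!\left[\big(\hat{f}_{D}(x) - \mathbb{E}_{D}[\hat{f}_{D}(x)]\big)^{2}\right]}_{\text{variance}}
  \;\big(+\;\sigma^{2}\ \text{noise}\big).
\]

Analogous decompositions exist for classification losses.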

In this work, we present a novel method to measure how well a classifier fits the data and how well the data suit the classifier. Our intuition is based on measuring how well the probability estimates produced by a probability estimator fit the data. We call this analysis "Sensibility Analysis": it determines whether a learning method makes sensible decisions on suitable test data. Our experiments show that measuring the sensibility of classification allows us to determine how well a test data set is suited to testing a given classifier, effectively allowing us to observe changes in the underlying concept in the data and, hence, to measure concept drift.
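
The abstract does not specify how the fit of the probability estimates is measured. As a rough illustration only, one common way to quantify how well a probability estimator's outputs fit a test set is a proper scoring rule such as the Brier score or log loss; the sketch below is not the presenter's sensibility measure, and the scikit-learn dataset, classifier, and metric names are assumptions used purely for illustration.

    # Illustrative sketch only: scoring how well a classifier's probability
    # estimates fit a test set with proper scoring rules (Brier score, log loss).
    # This is NOT the "Sensibility Analysis" described in the talk; the library
    # and function names (scikit-learn) are assumptions for illustration.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import brier_score_loss, log_loss

    # Toy data standing in for a real test set.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = GaussianNB().fit(X_train, y_train)
    p_test = clf.predict_proba(X_test)[:, 1]  # estimated P(y = 1 | x)

    # Lower scores mean the probability estimates fit the test data better;
    # tracking such scores over successive test batches is one crude way to
    # notice that the underlying concept may have drifted.
    print("Brier score:", brier_score_loss(y_test, p_test))
    print("Log loss:   ", log_loss(y_test, p_test))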