DATE: Tue, Jan 12, 2021
TIME: 1 pm
PLACE: On Zoom
TITLE: Enhancing fairness in supervised machine learning
PRESENTER: Bita Omidi
University of Ottawa
ABSTRACT:

Lately, there have been several attempts to reduce bias in artificial intelligence in order to maintain fairness in machine learning projects. These methods fall into three categories: pre-processing, in-processing, and post-processing techniques. There are at least 21 notions of fairness in the recent literature, which not only provide different ways of measuring fairness but also lead to quite different concepts. It is worth mentioning that it is impossible to satisfy all of the definitions of fairness at the same time, and some of them are incompatible with one another. As a result, it is important to choose the fairness definition to be satisfied according to the context in which we are working.

The current study investigates some of the most common definitions and metrics of fairness introduced by researchers and uses them to compare three of the proposed de-biasing techniques with respect to their effects on performance and fairness measures, through empirical experiments on four different datasets. The de-biasing methods are the "Reweighting Algorithm", the "Adversarial De-biasing Method", and the "Reject Option Classification Method", applied to the classification tasks of "Survival of patients with heart failure", "Prediction of hospital readmission among diabetes patients", "Credit classification of bank account holders", and "COVID-19-related anxiety level classification of Canadians". Findings show that the adversarial de-biasing in-processing method can be the best technique for mitigating bias with deep learning classifiers, provided we are able to modify the classification process itself. This method did not lead to a considerable reduction in accuracy except on the CAMH dataset.
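
To make the kind of comparison described above concrete, the following minimal sketch shows how a pre-processing de-biasing step and a group-fairness metric can be evaluated before and after de-biasing. It assumes IBM's open-source AIF360 toolkit, a toy dataset, and an illustrative protected attribute named "group"; none of these are specified in the talk, and the actual implementation may differ.

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.algorithms.preprocessing import Reweighing
    from aif360.metrics import BinaryLabelDatasetMetric

    # Toy data (hypothetical): "label" is the favourable outcome and "group"
    # is the protected attribute (1 = privileged, 0 = unprivileged).
    df = pd.DataFrame({
        "feature": [0.2, 0.5, 0.8, 0.1, 0.9, 0.4, 0.7, 0.3],
        "group":   [1, 1, 1, 1, 0, 0, 0, 0],
        "label":   [1, 1, 1, 0, 1, 0, 0, 0],
    })
    dataset = BinaryLabelDataset(df=df, label_names=["label"],
                                 protected_attribute_names=["group"],
                                 favorable_label=1, unfavorable_label=0)

    privileged_groups = [{"group": 1}]
    unprivileged_groups = [{"group": 0}]

    # Group-fairness metric before de-biasing.
    metric_before = BinaryLabelDatasetMetric(
        dataset, unprivileged_groups=unprivileged_groups,
        privileged_groups=privileged_groups)
    print("Statistical parity difference (before):",
          metric_before.statistical_parity_difference())

    # Pre-processing de-biasing: reweight instances so that the protected
    # attribute and the favourable label become independent in the
    # weighted data.
    rw = Reweighing(unprivileged_groups=unprivileged_groups,
                    privileged_groups=privileged_groups)
    dataset_rw = rw.fit_transform(dataset)

    metric_after = BinaryLabelDatasetMetric(
        dataset_rw, unprivileged_groups=unprivileged_groups,
        privileged_groups=privileged_groups)
    print("Statistical parity difference (after reweighting):",
          metric_after.statistical_parity_difference())

The in-processing and post-processing methods discussed in the talk follow a similar pattern in that toolkit (fit on the training data, then predict or adjust predictions), but their exact configuration in the study is not described here.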