DATE: | Thu, Apr 6, 2023 |
TIME: | 10:00 am |
PLACE: | In SITE 5084 and on Zoom |
TITLE: | Fidelity and Sample Faithfulness in Lottery Ticket Hypothesis |
PRESENTER: |
Yuanzheng Hu
University of Ottawa |
ABSTRACT:
|
Pruning of deep learning models aims to reduce model size while still achieving accuracy comparable to the original model. While most prior work evaluates pruning by accuracy alone, we argue that accuracy by itself is insufficient for evaluating pruning. In this study, we use fidelity (Yang et al., 2019) to evaluate pruned models with respect to the original model. We define fidelity as a metric of the commonality the pruned models share with the original model in their predictions. For a pruned model, we also exploit a notion of faithful samples, i.e., the subset of test samples on which the original and the pruned model predict the same labels. We use the Lottery Ticket Hypothesis (Frankle et al., 2018) as our pruning strategy; the hypothesis states that a randomly initialized, dense network contains a subnetwork that, when trained in isolation, can achieve test accuracy comparable to the original network.
We conduct our experiments on two different datasets (OPEN-I, Visual Spatial Reasoning) and three different models (VisualBERT, LXMERT, and UNITER). Three evaluation metrics are used in the experiments: accuracy, fidelity, and faithfulness accuracy.
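As a minimal sketch of the fidelity metric described above (the function name and interface are illustrative, not taken from the talk), fidelity can be read as the fraction of test samples on which the pruned model agrees with the original, and the faithful samples as the indices where they agree:

```python
import numpy as np

def fidelity_and_faithful(original_preds, pruned_preds):
    """Return (fidelity, faithful sample indices).

    fidelity: fraction of samples where both models predict the same label.
    faithful indices: positions of those agreeing samples.
    Illustrative helper; not the authors' implementation.
    """
    original_preds = np.asarray(original_preds)
    pruned_preds = np.asarray(pruned_preds)
    agree = original_preds == pruned_preds
    fidelity = float(agree.mean())
    faithful_idx = np.nonzero(agree)[0]
    return fidelity, faithful_idx

# Example: predicted labels on five test samples
fid, idx = fidelity_and_faithful([1, 0, 2, 1, 0], [1, 0, 1, 1, 2])
print(fid)           # 0.6 — three of five predictions agree
print(idx.tolist())  # [0, 1, 3]
```

Faithfulness accuracy, under this reading, would then be the ordinary accuracy computed only over the faithful subset.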