DATE: Wed, Oct 30, 2019
TIME: 1 pm
PLACE: SITE 5084
TITLE: Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers
PRESENTER: Masoumeh Soflaei Shahrbabak
University of Ottawa
ABSTRACT:

Learning a sufficient representation is at the heart of classification based on neural network models. Under the recent information bottleneck (IB) principle, such a learning problem can be formulated as a constrained optimization problem, which we call the "IB learning" problem. In this work, we formulate a special class of quantization problems, referred to as "IB quantization". We show that given a classification setting, its associated IB learning problem and IB quantization problem are theoretically equivalent, in the sense that optimal IB quantizers necessarily give rise to optimal representations in IB learning. As is well known in rate-distortion theory, vector quantizers provide superior performance to scalar quantizers. The discovered equivalence between IB learning and IB quantization therefore motivates us to take a vector-quantization approach to IB learning. This gives rise to a new learning framework for neural network classification models, which we call Aggregated Learning. Instead of classifying input objects one at a time, in Aggregated Learning several objects are classified simultaneously by a single neural network model. The effectiveness of the proposed framework is verified through extensive experiments on standard image classification tasks. This is joint work with Yongyi Mao (University of Ottawa), Hongyu Guo (National Research Council Canada), Ali Al-Bashabsheh (Beihang University), and Richong Zhang (Beihang University).
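
For orientation, the constrained optimization that defines IB learning is typically written as follows, where X is the input, Y the class label, T the learned representation, and I(.;.) denotes mutual information. This is the standard IB formulation, sketched here for background rather than quoted from the paper:

    \max_{p(t \mid x)} \; I(T; Y) \quad \text{subject to} \quad I(X; T) \le R

or, equivalently, in Lagrangian form with trade-off parameter \beta:

    \min_{p(t \mid x)} \; I(X; T) - \beta \, I(T; Y)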
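
The following is a minimal sketch of the "aggregated" classification idea described above: a single network jointly classifies a tuple of n_agg objects in one forward pass. It assumes a PyTorch setup, and the names (AggregatedClassifier, n_agg) are illustrative rather than taken from the authors' implementation:

import torch
import torch.nn as nn

class AggregatedClassifier(nn.Module):
    """Jointly classifies n_agg inputs with a single forward pass.

    Illustrative sketch only; not the authors' architecture.
    """
    def __init__(self, in_dim, hidden_dim, n_classes, n_agg):
        super().__init__()
        self.n_agg = n_agg
        self.n_classes = n_classes
        # One shared network sees all n_agg objects at once and emits
        # n_agg sets of class logits.
        self.net = nn.Sequential(
            nn.Linear(in_dim * n_agg, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_classes * n_agg),
        )

    def forward(self, x):
        # x: (batch, n_agg, in_dim) -- a batch of aggregated input tuples.
        logits = self.net(x.flatten(start_dim=1))
        return logits.view(-1, self.n_agg, self.n_classes)

# Usage: classify 4 objects jointly; the loss averages the per-object
# cross-entropy over every object in every tuple.
model = AggregatedClassifier(in_dim=784, hidden_dim=256, n_classes=10, n_agg=4)
x = torch.randn(32, 4, 784)        # 32 tuples of 4 flattened images
y = torch.randint(0, 10, (32, 4))  # one label per object in each tuple
loss = nn.functional.cross_entropy(model(x).reshape(-1, 10), y.reshape(-1))
loss.backward()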