DATE: Thu, Jan 19, 2017
TIME: 1 pm
PLACE: SITE 5084
TITLE: Enhancing and Combining Sequential and Tree LSTM for Natural Language Inference
PRESENTERS: Qian Chen (1) and Xiaodan Zhu (2)
(1) University of Science and Technology of China, (2) NRC
ABSTRACT:

Reasoning and inference are central to both human and artificial intelligence. Modeling inference in human language is notoriously challenging but is a fundamental problem in natural language understanding and many applications. In this paper, we present a new state-of-the-art result, achieving an accuracy of 88.3% on the Stanford Natural Language Inference dataset. This result is achieved first through our enhanced sequential encoding model, which outperforms the previous best model that employs more complicated network architectures. We further show that by explicitly considering recursive architectures, we achieve additional improvement. In particular, incorporating syntactic parse information contributes to our best result; it improves performance even when the parse information is added to an already very strong system.
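For attendees less familiar with the sequential (chain) LSTM encoder that the talk's tree variants extend, the following is a minimal, illustrative sketch in plain Python. It is not the presenters' implementation; all function names, dimensions, and the random initialization are invented here purely for illustration.

```python
import math
import random

def sigmoid(x):
    """Logistic sigmoid, used for the LSTM gates."""
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def add(*vecs):
    """Element-wise sum of same-length vectors."""
    return [sum(xs) for xs in zip(*vecs)]

def lstm_step(x, h_prev, c_prev, params):
    """One LSTM step: gates i, f, o and candidate g update the cell state."""
    gates = {}
    for name in ("i", "f", "o", "g"):
        Wx, Wh, b = params[name]
        pre = add(matvec(Wx, x), matvec(Wh, h_prev), b)
        act = math.tanh if name == "g" else sigmoid
        gates[name] = [act(p) for p in pre]
    c = [f * cp + i * g
         for f, cp, i, g in zip(gates["f"], c_prev, gates["i"], gates["g"])]
    h = [o * math.tanh(cv) for o, cv in zip(gates["o"], c)]
    return h, c

def encode(seq, params, hidden):
    """Run the LSTM left-to-right over a token sequence; return the final hidden state."""
    h = [0.0] * hidden
    c = [0.0] * hidden
    for x in seq:
        h, c = lstm_step(x, h, c, params)
    return h

def random_params(d_in, d_h, rng):
    """Small random weights for each gate (input matrix, recurrent matrix, bias)."""
    def mat(rows, cols):
        return [[rng.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]
    return {name: (mat(d_h, d_in), mat(d_h, d_h), [0.0] * d_h)
            for name in ("i", "f", "o", "g")}

rng = random.Random(0)
d_in, d_h = 4, 3                     # toy embedding and hidden sizes
params = random_params(d_in, d_h, rng)
sentence = [[rng.uniform(-1, 1) for _ in range(d_in)] for _ in range(5)]
h_final = encode(sentence, params, d_h)
print(h_final)  # a fixed-size encoding of the 5-token sentence
```

A tree LSTM of the kind discussed in the talk generalizes this recurrence: instead of composing each state from the single previous time step, a node composes the states of its children as given by a syntactic parse.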

Bios:
Qian Chen is a PhD student at the University of Science and Technology of China, currently a visiting student working with Prof. Diana Inkpen and Xiaodan Zhu in EECS at the University of Ottawa. His research interests include natural language processing, speech synthesis, and machine learning.
Xiaodan Zhu is a Research Officer at the National Research Council Canada and an adjunct professor of EECS at the University of Ottawa. His research interests include natural language processing, spoken language processing, and machine learning.