DATE: Mon, Apr 17, 2023
TIME: 10:30 am
PLACE: In SITE 5084 and on Zoom
TITLE: Knowledge Manifolds in Transformer Models of NLP
PRESENTER: Hassan Sajjad, Dalhousie University
ABSTRACT:
Despite the benefits of deep neural network models, their opaqueness remains a
major cause of concern. These models operate as black boxes, and it can be
impossible to understand what, and how much, a model learns about language in
order to solve a task. In this talk, I will present my work on discovering
knowledge manifolds in transformer models of NLP and seek an answer to the
question of how knowledge of language is structured in our models. Some
notable findings suggest that: i) models structure knowledge in diverse,
multifaceted manifolds that consist of combinations of linguistic concepts;
ii) lower-layer manifolds are dominated by subword-based units and semantics,
middle layers represent core linguistic concepts, and the model forms
class-based concepts in the higher layers. I will also present several use
cases for discovering knowledge manifolds that my team is working on, such as
concept-based explanation of model predictions, intrinsic evaluation to detect
potential issues and biases, and model editing.
Bio:
Hassan Sajjad is an Associate Professor in the Faculty of Computer Science
at Dalhousie University, Canada, and the director of the HyperMatrix Lab. His
research interests fall under the umbrella of NLP and trustworthy AI,
specifically the robustness, generalization, interpretation, and
explainability of NLP models. His extensive work on model interpretation and
machine translation has been recognized at several prestigious venues such as
CL, ICLR, and ACL, and has been featured in tech outlets including MIT News.
Dr. Sajjad regularly serves as an area chair and reviewer for various machine
learning and computational linguistics conferences and journals. He is the
tutorial chair at EMNLP 2023 and co-organized the BlackboxNLP workshop in
2020/21 and the shared task on MT Robustness in 2019/20.