Quantifying Attention Flow In Transformers (Effective Way to Interpret Attention in BERT) Explained
Deep Learning Explainer
This video walks you through the paper "Quantifying Attention Flow In Transformers", which proposes a simple yet effective method to better analyze the attention weights of transformer-based models.
Link to the paper: https://arxiv.org/abs/2005.00928 (Quantifying Attention Flow In Transformers)
The official code implementation of the paper: https://github.com/samiraabnar/attention_flow
Relevant video: Revealing Dark Secrets of BERT (Analysis of BERT's Attention Heads) - Paper Explained https://youtu.be/mnU9ILoDH68
Abstract of the paper: In the Transformer model, “self-attention” combines information from attended embeddings into the representation of the focal embedding in the next layer. Thus, across layers of the Transformer, information originating from different tokens gets increasingly mixed. This makes attention weights unreliable as explanation probes. In this paper, we consider the problem of quantifying this flow of information through self-attention. We propose two methods for approximating the attention to input tokens given attention weights, attention rollout and attention flow, as post hoc methods when we use attention weights as the relative relevance of the input tokens. We show that these methods give complementary views on the flow of information, and compared to raw attention, both yield higher correlations with importance scores of input tokens obtained using an ablation method and input gradients. ... https://www.youtube.com/watch?v=3Q0ZXqVaQPo
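To make the rollout idea concrete, here is a minimal NumPy sketch of attention rollout in the spirit of the paper: attention maps are head-averaged, mixed 50/50 with the identity to account for residual connections, re-normalized, and multiplied across layers. The function name, shapes, and the head-averaging choice are illustrative assumptions, not the official implementation (see the linked repo for that).

```python
import numpy as np

def attention_rollout(attentions):
    """Approximate token-to-input attribution via attention rollout (sketch).

    attentions: list of per-layer attention weights, each of shape
    (num_heads, seq_len, seq_len), e.g. extracted from BERT.
    Returns a (seq_len, seq_len) matrix whose row i estimates how much
    token i at the top layer attends to each input token.
    """
    rollout = None
    for layer_attn in attentions:
        # Average over heads (a simplifying assumption used in the paper).
        attn = layer_attn.mean(axis=0)
        # Add the identity for the residual connection, then re-normalize rows.
        attn = 0.5 * attn + 0.5 * np.eye(attn.shape[-1])
        attn = attn / attn.sum(axis=-1, keepdims=True)
        # Multiply attention maps across layers to track recursive mixing.
        rollout = attn if rollout is None else attn @ rollout
    return rollout
```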