Shortformer: Better Language Modeling using Shorter Inputs (Paper Explained)
Deep Learning Explainer
Modelling long sequences has always been hard for transformer-based models. This paper proposes a surprisingly simple way for the transformer to cache previously processed tokens, and it makes generation 9X faster. Truly amazing!
0:00 - What does Shortformer offer
2:10 - Problems with modelling long sequences
3:09 - Non-overlapping
5:39 - Sliding window
8:22 - Effective context window sizes
10:13 - How context window size affects the Transformer
13:46 - Staged training
19:41 - Repositioning position embeddings
24:20 - PIA enables caching
28:59 - Staged training + PIA = Shortformer
29:28 - Shortformer against other SOTAs
31:03 - Position-infused attention (PIA)
31:28 - Summary
Paper https://arxiv.org/abs/2012.15832
Code https://github.com/ofirpress/shortformer
Abstract We explore the benefits of decreasing the input length of transformers. First, we show that initially training the model on short subsequences, before moving on to longer ones, both reduces overall training time and, surprisingly, gives a large improvement in perplexity. We then show how to improve the efficiency of recurrence methods in transformers, which let models condition on previously processed tokens (when generating sequences that are larger than the maximal length that the transformer can handle at once). Existing methods require computationally expensive relative position embeddings; we introduce a simple alternative of adding absolute position embeddings to queries and keys instead of to word embeddings, which efficiently produces superior results. By combining these techniques, we increase training speed by 65%, make generation nine times faster, and substantially improve perplexity on WikiText-103, without adding any parameters.
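For anyone curious what position-infused attention (PIA) looks like in code, below is a minimal, single-head PyTorch sketch. It is my own illustration, not the authors' implementation (see the linked repository for that): the hypothetical `PIASelfAttention` module omits multi-head projections, dropout, and the rest of the transformer block. The point it shows is that absolute position embeddings are added to the queries and keys only, never to the values or word embeddings, so the position-free representations of the previous subsequence can be cached and reused at generation time.

```python
import math
from typing import Optional, Tuple

import torch
import torch.nn as nn


class PIASelfAttention(nn.Module):
    """Single-head self-attention with position-infused attention (PIA):
    absolute position embeddings are added to queries and keys only,
    never to values, so cached token representations stay position-free.
    (Illustrative sketch, not the Shortformer reference code.)"""

    def __init__(self, d_model: int, max_len: int = 4096):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)  # absolute position embeddings
        self.scale = 1.0 / math.sqrt(d_model)

    def forward(
        self, x: torch.Tensor, cache: Optional[torch.Tensor] = None
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        # x:     (batch, cur_len, d_model)   token representations WITHOUT position info
        # cache: (batch, cache_len, d_model) position-free representations of the
        #                                    previously processed subsequence
        ctx = x if cache is None else torch.cat([cache, x], dim=1)
        ctx_len, cur_len = ctx.size(1), x.size(1)
        cache_len = ctx_len - cur_len

        pos = self.pos_emb(torch.arange(ctx_len, device=x.device))  # (ctx_len, d_model)

        q = self.q_proj(x + pos[cache_len:])  # queries get their absolute positions
        k = self.k_proj(ctx + pos)            # keys get positions for cache + current tokens
        v = self.v_proj(ctx)                  # values stay position-free -> cacheable

        # causal mask: current token i attends to the whole cache and to tokens <= i
        i = torch.arange(cur_len, device=x.device).unsqueeze(1)
        j = torch.arange(ctx_len, device=x.device).unsqueeze(0)
        allowed = j <= (cache_len + i)

        scores = (q @ k.transpose(-2, -1)) * self.scale
        scores = scores.masked_fill(~allowed, float("-inf"))
        out = torch.softmax(scores, dim=-1) @ v

        # return the position-free context so it can serve as the next cache
        return out, ctx
```

Staged training needs no new machinery on top of this: the same model is simply trained on short subsequences first and longer ones later, and at generation time the returned context is carried over as the cache, which is where the speedup comes from.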
Connect
Twitter: https://twitter.com/home
Email: edwindeeplearning@gmail.com
Video: https://www.youtube.com/watch?v=WuwR5WTMteM