Too many papers to read? Try TLDR - Extreme Summarization of Scientific Documents
Deep Learning Explainer
Over 300 deep learning papers are published every day, and I find it very hard to keep up. This paper introduces a cool way to compress papers into extremely short summaries (TLDRs). Even more interestingly, Semantic Scholar uses the proposed method to power its TLDR feature, which is now available in beta for nearly 10 million papers!
0:00 - Too many papers
1:33 - What's special about this paper
2:25 - SciTLDR
5:16 - Controlled abstraction for TLDRs with title scaffolding
9:04 - During training
9:51 - Extractive summarization baselines
10:46 - Abstractive summarization baselines
11:14 - Input space
12:59 - Oracle
13:50 - ROUGE metrics
14:35 - Experiment results
17:25 - Model-generated examples
17:59 - Demo - real-world application
Code and data: TLDR Extreme Summarization of Scientific Documents https://github.com/allenai/scitldr
TLDR feature in Semantic Scholar: https://tldr.semanticscholar.org/
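If you want to poke at the data yourself, here is a quick sketch of iterating over the SciTLDR training split after cloning the repo above (the file path and field name are assumptions about the repo layout; check its README):

```python
# Minimal sketch: read SciTLDR training examples from the cloned repo.
# The path and the "target" key are assumptions; verify against the README.
import json

with open("scitldr/SciTLDR-Data/SciTLDR-A/train.jsonl") as f:
    for line in f:
        example = json.loads(line)
        # Each record pairs paper text with one or more gold TLDRs
        # (multi-target: author-written and expert-derived).
        print(example.get("target"))
        break
```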
Abstract:
We introduce TLDR generation, a new form of extreme summarization, for scientific papers. TLDR generation involves high source compression and requires expert background knowledge and understanding of complex domain-specific language. To facilitate study on this task, we introduce SciTLDR, a new multi-target dataset of 5.4K TLDRs over 3.2K papers. SciTLDR contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol that produces high-quality summaries while minimizing annotation burden. We propose CATTS, a simple yet effective learning strategy for generating TLDRs that exploits titles as an auxiliary training signal. CATTS improves upon strong baselines under both automated metrics and human evaluations.
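To make the title-scaffolding idea concrete, here is a minimal sketch with a seq2seq model (my own illustration using HuggingFace BART, not the authors' code; the control-token names, base checkpoint, and lengths are assumptions): one model is trained to emit either a title or a TLDR, selected by a control token appended to the source, so title generation serves as the auxiliary task.

```python
# Sketch of CATTS-style title scaffolding (illustrative, not the official code).
from transformers import BartTokenizer, BartForConditionalGeneration

model_name = "facebook/bart-large"  # assumed base checkpoint
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Control tokens marking which target to generate (names are illustrative).
tokenizer.add_tokens(["<|TLDR|>", "<|TITLE|>"])
model.resize_token_embeddings(len(tokenizer))

def make_example(source_text, target_text, control_token):
    """Append a control token to the source so one model learns both tasks."""
    inputs = tokenizer(source_text + " " + control_token,
                       truncation=True, max_length=1024, return_tensors="pt")
    labels = tokenizer(target_text, truncation=True, max_length=64,
                       return_tensors="pt").input_ids
    return inputs, labels

# Title generation is the auxiliary signal; TLDR generation is the main task.
abstract = "We introduce TLDR generation, a new form of extreme summarization..."
inputs, labels = make_example(abstract, "One-sentence summaries of papers.", "<|TLDR|>")
loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy
```

In training, batches of (abstract, title) pairs tagged with <|TITLE|> would be mixed with (abstract, TLDR) pairs tagged with <|TLDR|>, so the cheap title signal scaffolds the harder TLDR task.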
Connect: Twitter https://twitter.com/home | Email edwindeeplearning@gmail.com