STraTA: Self-Training with Task Augmentation for Better Few-shot Learning
Deep Learning Explainer
A super cool method that improves model accuracy drastically without using additional task-specific annotated data.
Connect: LinkedIn https://www.linkedin.com/in/xue-yong-fu-955723a6/ | Twitter https://twitter.com/home | Email edwindeeplearning@gmail.com
0:00 - Intro
3:07 - Task augmentation + self-training
5:13 - Intermediate fine-tuning
6:09 - Task augmentation setup
10:49 - Overgeneration & filtering
12:17 - Self-training algorithm
16:15 - Results
20:23 - My thoughts
STraTA: Self-Training with Task Augmentation for Better Few-shot Learning https://arxiv.org/pdf/2109.06270.pdf
Abstract: Despite their recent successes in tackling many NLP tasks, large-scale pre-trained language models do not perform as well in few-shot settings where only a handful of training examples are available. To address this shortcoming, we propose STraTA, which stands for Self-Training with Task Augmentation, an approach that builds on two key ideas for effective leverage of unlabeled data. First, STraTA uses task augmentation, a novel technique that synthesizes a large amount of data for auxiliary-task fine-tuning from target-task unlabeled texts. Second, STraTA performs self-training by further fine-tuning the strong base model created by task augmentation on a broad distribution of pseudo-labeled data. Our experiments demonstrate that STraTA can substantially improve sample efficiency across 12 few-shot benchmarks. Remarkably, on the SST-2 sentiment dataset, STraTA, with only 8 training examples per class, achieves comparable results to standard fine-tuning with 67K training examples. Our analyses reveal that task augmentation and self-training are both complementary and independently effective. ...
https://www.youtube.com/watch?v=0yriOQbNWmo
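To make the self-training stage described in the abstract concrete, here is a minimal, self-contained Python sketch of a generic self-training loop: train on a few labeled examples, pseudo-label an unlabeled pool, then retrain on labels plus pseudo-labels. All names here (ToyClassifier, Example, self_train) are illustrative stand-ins, not the paper's actual implementation or any library API; STraTA additionally starts from a base model built with task augmentation, which is not shown.

# Minimal sketch of a generic self-training loop in the spirit of STraTA's
# second stage. Everything below is a toy placeholder for illustration only.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Example:
    text: str
    label: int  # -1 means unlabeled

class ToyClassifier:
    """Placeholder for the task-augmented base model (e.g., a fine-tuned BERT)."""
    def __init__(self):
        self.positive_words = {"good", "great", "love"}

    def fit(self, data: List[Example]) -> None:
        # A real implementation would fine-tune on `data`; the toy model is static.
        pass

    def predict(self, text: str) -> Tuple[int, float]:
        # Returns (label, confidence). Toy rule-based scoring for illustration.
        score = sum(w in text.lower() for w in self.positive_words)
        label = 1 if score > 0 else 0
        confidence = min(1.0, 0.5 + 0.25 * score)
        return label, confidence

def self_train(labeled, unlabeled, rounds=3, threshold=0.0):
    """Iteratively pseudo-label the unlabeled pool and retrain.

    With threshold=0.0 the whole pseudo-labeled pool is used each round,
    loosely mirroring the paper's use of a broad pseudo-label distribution;
    a higher threshold would give classic confidence filtering instead.
    """
    model = ToyClassifier()
    current = list(labeled)
    for r in range(rounds):
        model.fit(current)
        pseudo = []
        for ex in unlabeled:
            label, conf = model.predict(ex.text)
            if conf >= threshold:
                pseudo.append(Example(ex.text, label))
        # Retrain on the original few-shot labels plus the pseudo-labeled data.
        current = list(labeled) + pseudo
        print(f"round {r}: {len(pseudo)} pseudo-labeled examples")
    return model

if __name__ == "__main__":
    few_shot = [Example("i love this movie", 1), Example("terrible plot", 0)]
    pool = [Example("a great soundtrack", -1), Example("boring and slow", -1)]
    self_train(few_shot, pool)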