Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)
Yannic Kilcher
#ai #machinelearning #attention
Convolutional Neural Networks have dominated image processing for the last decade, but transformers are quickly replacing them. This paper proposes a fully attentional model for images that combines learned Positional Embeddings with Axial Attention. The resulting model competes with CNNs on image classification and achieves state-of-the-art results on several image segmentation tasks.
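As a rough illustration of why the paper's factorization helps: full 2D self-attention over an H×W feature map compares all HW positions with each other, costing O((HW)²), while axial attention runs 1D attention along the height axis and then the width axis, costing O(HW·(H+W)). Below is a minimal single-head PyTorch sketch of that factorization (my own illustration, not the authors' code; learned projections, multiple heads, and the positional terms are omitted):

```python
import torch

def attention_1d(x):
    # Plain single-head self-attention along the sequence axis.
    # x: (batch, length, channels); queries, keys, and values all
    # reuse x here for brevity (no learned projections).
    scores = torch.einsum("blc,bmc->blm", x, x) / x.shape[-1] ** 0.5
    return torch.einsum("blm,bmc->blc", scores.softmax(dim=-1), x)

def axial_attention(x):
    # x: (batch, height, width, channels)
    b, h, w, c = x.shape
    # Height axis: treat every column as an independent length-h sequence.
    cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)
    x = attention_1d(cols).reshape(b, w, h, c).permute(0, 2, 1, 3)
    # Width axis: treat every row as an independent length-w sequence.
    rows = x.reshape(b * h, w, c)
    return attention_1d(rows).reshape(b, h, w, c)
```

For a 64×64 feature map, this replaces one 4096×4096 attention matrix with sixty-four 64×64 attentions per axis, which is what makes global (full-row and full-column) attention affordable.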
OUTLINE:
0:00 - Intro & Overview
4:10 - This Paper's Contributions
6:20 - From Convolution to Self-Attention for Images
16:30 - Learned Positional Embeddings
24:20 - Propagating Positional Embeddings through Layers
27:00 - Traditional vs Position-Augmented Attention
31:10 - Axial Attention
44:25 - Replacing Convolutions in ResNet
46:10 - Experimental Results & Examples
Paper: https://arxiv.org/abs/2003.07853
Code: https://github.com/csrhddlam/axial-deeplab
My Video on BigBird: https://youtu.be/WVPE62Gk3EM
My Video on ResNet: https://youtu.be/GWt6Fu05voI
My Video on Attention: https://youtu.be/iDulhoQ2pro
Abstract: Convolution exploits locality for efficiency, at the cost of missing long-range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works have shown that it is possible to stack self-attention layers into a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces the computational complexity and allows performing attention within a larger or even global region. In addition, we propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that can be stacked to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves PQ by 2.8% over the bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant, which is 3.8x more parameter-efficient and 27x more computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes.
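The position-sensitive design mentioned in the abstract adds learned relative positional embeddings not only to the queries but also to the keys and values. Here is a hedged single-head sketch of that attention term, based on my reading of the paper (the r_q/r_k/r_v tables are materialized as full (length, length, dim) tensors for readability; an efficient implementation would instead index a compact (2·length − 1, dim) table by offset):

```python
import torch

def position_sensitive_attention_1d(q, k, v, r_q, r_k, r_v):
    # q, k, v: (length, dim) projections of one axial sequence.
    # r_q, r_k, r_v: (length, length, dim) learned relative positional
    # embeddings, where r_*[o, p] encodes the offset p - o.
    logits = (
        torch.einsum("od,pd->op", q, k)       # content-content: q_o . k_p
        + torch.einsum("od,opd->op", q, r_q)  # query-position:  q_o . r^q_{p-o}
        + torch.einsum("pd,opd->op", k, r_k)  # key-position:    k_p . r^k_{p-o}
    )
    weights = logits.softmax(dim=-1)
    # Values are position-augmented as well:
    # output_o = sum_p weights[o, p] * (v_p + r^v_{p-o}).
    return (torch.einsum("op,pd->od", weights, v)
            + torch.einsum("op,opd->od", weights, r_v))
```

Applying this along the height axis and then the width axis yields the position-sensitive axial-attention layer from which the paper builds its classification and segmentation models.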
Authors: Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/
If you want to support me, the best thing to d ...
https://www.youtube.com/watch?v=hv3UO3G0Ofo