Preprint / Working paper. Year: 2022

Lightweight Structure-Aware Attention for Visual Understanding

Abstract

Vision Transformers (ViTs) have become a dominant paradigm for visual representation learning with self-attention operators. Although these operators provide the model with flexibility through their adjustable attention kernels, they suffer from inherent limitations: (1) the attention kernel is not discriminative enough, resulting in high redundancy across ViT layers, and (2) their computation and memory complexity is quadratic in the sequence length. In this paper, we propose a novel attention operator, called lightweight structure-aware attention (LiSA), which has better representational power with log-linear complexity. Our operator learns structural patterns by using a set of relative position embeddings (RPEs). To achieve log-linear complexity, the RPEs are approximated with fast Fourier transforms. Our experiments and ablation studies demonstrate that ViTs built on the proposed operator outperform self-attention and other existing operators, achieving state-of-the-art results on ImageNet and competitive results on other visual understanding benchmarks such as COCO and Something-Something-V2. The source code of our approach will be released online.
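The log-linear complexity claim rests on a standard observation: a kernel indexed only by relative offset acts as a (circulant) Toeplitz matrix on the sequence, and multiplication by such a matrix diagonalizes under the discrete Fourier transform, so it can be evaluated in O(N log N) instead of O(N^2). As a minimal sketch of that idea only, not the authors' LiSA operator (whose code is not reproduced on this page), the hypothetical helper below applies a 1-D relative-position kernel to token features by circular convolution in the Fourier domain:

    import torch

    def rpe_via_fft(x, rel_kernel):
        """Apply a relative-position kernel by circular convolution,
        computed in O(N log N) with FFTs. Illustrative sketch only.

        x:          (batch, N, dim) token features
        rel_kernel: (N,) learnable weight per relative offset
        """
        # Convolution along the sequence axis becomes pointwise
        # multiplication in the frequency domain.
        Xf = torch.fft.rfft(x, dim=1)                  # (batch, N//2+1, dim)
        Kf = torch.fft.rfft(rel_kernel).unsqueeze(-1)  # (N//2+1, 1)
        return torch.fft.irfft(Xf * Kf, n=x.size(1), dim=1)

    # Toy usage: 8 tokens of dimension 4, random relative-position kernel.
    x = torch.randn(2, 8, 4)
    k = torch.randn(8)
    print(rpe_via_fft(x, k).shape)  # torch.Size([2, 8, 4])

The function name and the circular (wrap-around) boundary treatment are assumptions for illustration; the paper describes approximating a set of RPEs with FFTs, and the exact parameterization is detailed in the PDF.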
Main file: LiSA.pdf (1.7 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03916268, version 1 (30-12-2022)

Identifiers

Cite

Heeseung Kwon, Francisco M. Castro, Manuel J. Marin-Jimenez, Nicolas Guil, Karteek Alahari. Lightweight Structure-Aware Attention for Visual Understanding. 2022. ⟨hal-03916268⟩