Preprint, working paper. Year: 2021

GraphiT: Encoding Graph Structure in Transformers

Abstract

We show that viewing graphs as sets of node features and incorporating structural and positional information into a transformer architecture can outperform representations learned with classical graph neural networks (GNNs). Our model, GraphiT, encodes such information by (i) leveraging relative positional encoding strategies in self-attention scores, based on positive definite kernels on graphs, and (ii) enumerating and encoding local sub-structures such as short paths. We thoroughly evaluate these two ideas on many classification and regression tasks, demonstrating the effectiveness of each of them independently, as well as of their combination. In addition to performing well on standard benchmarks, our model admits natural visualization mechanisms for interpreting the graph motifs that explain its predictions, making it a potentially strong candidate for scientific applications where interpretation is important.
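To make idea (i) concrete, the following is a minimal Python sketch, not the authors' implementation, of self-attention whose scores are modulated by a positive definite kernel on the graph. The diffusion kernel, the parameter beta, and all function names here are illustrative assumptions; the abstract only states that positive definite kernels on graphs are used for relative positional encoding.

import numpy as np
from scipy.linalg import expm

def diffusion_kernel(adj, beta=1.0):
    # Illustrative choice of graph kernel: K = expm(-beta * L), with L the
    # symmetric normalised Laplacian; K is positive definite for beta > 0.
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return expm(-beta * lap)

def kernel_attention(x, wq, wk, wv, kernel):
    # Standard dot-product self-attention, except that the exponentiated
    # scores are multiplied elementwise by the graph kernel before
    # row-normalisation, biasing each node towards structurally close nodes.
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = np.exp(q @ k.T / np.sqrt(q.shape[-1])) * kernel
    scores /= scores.sum(axis=-1, keepdims=True)
    return scores @ v

# Toy usage: a 4-node path graph with 8-dimensional random node features.
rng = np.random.default_rng(0)
adj = np.array([[0., 1., 0., 0.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [0., 0., 1., 0.]])
x = rng.standard_normal((4, 8))
wq, wk, wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = kernel_attention(x, wq, wk, wv, diffusion_kernel(adj, beta=0.5))
print(out.shape)  # (4, 8): one attended feature vector per node

Because the kernel enters multiplicatively before normalisation, nodes that are far apart in the graph receive attention weights damped towards zero, which is one simple way to inject relative structural information without changing the node features themselves.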
Main file: GraphiT.pdf (734.89 KB)
Origin: files produced by the author(s)

Dates and versions

hal-03256708, version 1 (10-06-2021)

Identifiers

  • HAL Id: hal-03256708, version 1

Cite

Grégoire Mialon, Dexiong Chen, Margot Selosse, Julien Mairal. GraphiT: Encoding Graph Structure in Transformers. 2021. ⟨hal-03256708⟩
1818 views
962 downloads
