Vision Transformer with Deformable Attention
Papers Read on AI

2022-01-24

We propose a novel deformable self-attention module, where the positions of key and value pairs in self-attention are selected in a data-dependent way. This flexible scheme enables the self-attention module to focus on relevant regions and capture more informative features.
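The core idea — selecting key/value positions in a data-dependent way, then running standard attention against the sampled features — can be sketched in a few lines. This is a minimal single-head NumPy illustration, not the paper's implementation: the grid size, channel count, and random stand-in for the learned offset network are all assumptions for the sake of the example (the paper predicts offsets from the queries with a small convolutional sub-network and uses multiple heads).

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 8          # toy feature-map size
C = 16             # channels
n_ref = 4          # reference points per axis (4x4 grid)

feat = rng.standard_normal((H, W, C))

# Single-head query/key/value projections.
W_q = rng.standard_normal((C, C)) / np.sqrt(C)
W_k = rng.standard_normal((C, C)) / np.sqrt(C)
W_v = rng.standard_normal((C, C)) / np.sqrt(C)

def bilinear_sample(fmap, y, x):
    """Bilinearly interpolate fmap of shape (H, W, C) at fractional (y, x)."""
    Hh, Ww, _ = fmap.shape
    y = float(np.clip(y, 0, Hh - 1)); x = float(np.clip(x, 0, Ww - 1))
    y0, x0 = int(y), int(x)
    y1, x1 = min(y0 + 1, Hh - 1), min(x0 + 1, Ww - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * fmap[y0, x0] + (1 - wy) * wx * fmap[y0, x1]
            + wy * (1 - wx) * fmap[y1, x0] + wy * wx * fmap[y1, x1])

# 1. Uniform grid of reference points shared across queries.
ref = np.stack(np.meshgrid(np.linspace(0, H - 1, n_ref),
                           np.linspace(0, W - 1, n_ref),
                           indexing="ij"), axis=-1).reshape(-1, 2)

# 2. Data-dependent offsets deform the grid. In the paper these come from
#    an offset network conditioned on the queries; random values stand in.
offsets = 0.5 * rng.standard_normal(ref.shape)
points = ref + offsets                       # deformed sampling points

# 3. Sample features at the deformed points; project to keys and values.
sampled = np.stack([bilinear_sample(feat, p[0], p[1]) for p in points])
K = sampled @ W_k                            # (n_ref**2, C)
V = sampled @ W_v

# 4. Ordinary softmax attention: every query attends to the sampled set.
Q = feat.reshape(-1, C) @ W_q                # (H*W, C)
scores = Q @ K.T / np.sqrt(C)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
out = attn @ V                               # (H*W, C) attended features
print(out.shape)
```

Because every query shares the same small set of deformed keys/values, attention cost scales with the number of sampled points rather than with the full spatial resolution, which is what lets the module focus on relevant regions cheaply.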

2022: Zhuofan Xia, Xuran Pan, Shiji Song, Li Erran Li, Gao Huang

Ranked #1 on Object Detection on COCO test-dev (AP metric)

https://arxiv.org/pdf/2201.00520v1.pdf
