Grounded Language-Image Pre-training
Papers Read on AI


2021-12-17
This paper presents Grounded Language-Image Pre-training (GLIP), a model for learning object-level, language-aware, and semantic-rich visual representations.

2021: Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, Jianfeng Gao

Ranked #1 on Phrase Grounding on Flickr30k Entities Test (using extra training data)

https://arxiv.org/pdf/2112.03857v1.pdf
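To make "object-level, language-aware" representations concrete: GLIP reframes object detection as phrase grounding, scoring candidate image regions against the words of a text prompt instead of a fixed label set. Below is a minimal NumPy sketch of that alignment step; the function name, shapes, and random features are illustrative assumptions, not the paper's actual code.

```python
# Hypothetical sketch of GLIP's core reformulation: detection as grounding.
# Region (box) features from an image encoder are scored against word
# features from a prompt like "person. bicycle. hairdryer.", replacing a
# fixed classification head. Names and shapes here are illustrative only.
import numpy as np

def alignment_scores(region_feats: np.ndarray, word_feats: np.ndarray) -> np.ndarray:
    """Dot-product alignment between N region and M word features.

    region_feats: (N, d) image-side features, one per candidate box.
    word_feats:   (M, d) text-side features, one per prompt token.
    Returns an (N, M) score matrix; each row classifies one region over
    the prompt's words rather than over a fixed label set.
    """
    return region_feats @ word_feats.T

rng = np.random.default_rng(0)
scores = alignment_scores(rng.normal(size=(4, 8)), rng.normal(size=(3, 8)))
print(scores.shape)  # (4, 3): 4 regions scored against 3 prompt words
```

Because the "classes" are just words in a prompt, the same head can be trained on detection data and grounding data jointly, which is what lets GLIP scale pre-training across both task types.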
