MAGMA - Multimodal Augmentation of Generative Models through Adapter-based Finetuning
Papers Read on AI

2022-03-28
Large-scale pretraining is fast becoming the norm in Vision-Language (VL) modeling. However, prevailing VL approaches are limited by the requirement for labeled data and the use of complex multi-step pretraining objectives. We present MAGMA - a simple method for augmenting generative language models with additional modalities using adapter-based finetuning. Building on Frozen [52], we train a series of VL models that autoregressively generate text from arbitrary combinations of visual and textual input...
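
The abstract's central technique is adapter-based finetuning: small bottleneck layers are inserted into an otherwise frozen pretrained language model, and only those adapters (together with a visual encoder that maps images into the model's embedding space) are trained. Below is a minimal, hypothetical PyTorch sketch of such a bottleneck adapter and of freezing the base model; the names and sizes (Adapter, hidden_dim, bottleneck_dim) are illustrative assumptions, not MAGMA's released code.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add.

    Illustrative sketch of the adapter pattern, not MAGMA's actual implementation.
    """
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection: at initialization the adapter is only a small
        # perturbation, so the frozen language model's behavior is preserved.
        return x + self.up(self.act(self.down(x)))

def freeze_language_model(lm: nn.Module) -> None:
    """Freeze all pretrained weights so that only adapters receive gradients."""
    for p in lm.parameters():
        p.requires_grad = False

# Usage: an adapter maps a (batch, sequence, hidden) activation to the same shape,
# so it can be dropped into each transformer block of the frozen model.
x = torch.randn(2, 10, 768)
out = Adapter(hidden_dim=768)(x)
assert out.shape == x.shape
```

Where exactly the adapters are wired into each transformer block is model-specific; the point of the pattern is that the trainable parameter count stays tiny relative to the frozen language model.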