Masked image modeling (MIM), an emerging self-supervised pre-training method, has shown impressive success across numerous downstream vision tasks with Vision Transformers (ViTs). Its underlying idea is simple: a portion of the input image is randomly masked out and then reconstructed as the pretext task. However, why MIM works well is not well explained, and previous studies insist that MIM primarily works for the Transformer family and is incompatible with CNNs. In this paper, we first study interactions among patches to understand what knowledge is learned, and how it is acquired, via the MIM task. We observe that MIM essentially teaches the model to learn better middle-level interactions among patches and to extract more generalized features. Based on this observation, we propose an Architecture-Agnostic Masked Image Modeling framework (A²MIM), which is compatible with both Transformers and CNNs in a unified way. Extensive experiments on popular benchmarks show that A²MIM learns better representations and endows the backbone model with a stronger capability to transfer to various downstream tasks, for both Transformers and CNNs.
2022: Siyuan Li, Di Wu, Fang Wu, Z. Zang, Kai Wang, Lei Shang, Baigui Sun, Haoyang Li, Stan Z. Li
https://arxiv.org/pdf/2205.13943v2.pdf
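The masking step the abstract describes, randomly hiding a fraction of image patches before reconstruction, can be sketched in a few lines. The following is a minimal, illustrative PyTorch sketch; the function name, patch size, and mask ratio are assumptions for illustration and are not taken from the A²MIM implementation.

import torch

def random_patch_mask(images, patch_size=16, mask_ratio=0.6):
    # Illustrative sketch of MIM-style random patch masking (not the A²MIM code).
    # images: (B, C, H, W); H and W are assumed divisible by patch_size.
    B, C, H, W = images.shape
    gh, gw = H // patch_size, W // patch_size
    num_patches = gh * gw
    num_masked = int(mask_ratio * num_patches)

    # Per-sample random ordering of patch indices; the first num_masked are masked.
    noise = torch.rand(B, num_patches, device=images.device)
    ids = noise.argsort(dim=1)
    mask = torch.zeros(B, num_patches, dtype=torch.bool, device=images.device)
    mask.scatter_(1, ids[:, :num_masked], True)

    # Expand the patch-level mask to pixel resolution and zero out masked regions.
    mask_map = mask.view(B, 1, gh, gw)
    mask_map = mask_map.repeat_interleave(patch_size, dim=2).repeat_interleave(patch_size, dim=3)
    masked_images = images.masked_fill(mask_map, 0.0)
    return masked_images, mask  # mask: (B, num_patches), True = masked

Where the mask token is injected and what reconstruction target is used differ across MIM methods; this sketch covers only the random patch masking described in the abstract.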