PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU
Pearl: A Production-ready Reinforcement Learning Agent
Are Emergent Abilities in Large Language Models just In-Context Learning?
Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models
Instruction Tuning for Large Language Models: A Survey
MegaBlocks: Efficient Sparse Training with Mixture-of-Experts
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Sequential Modeling Enables Scalable Learning for Large Vision Models
Magicoder: Source Code Is All You Need
Mamba: Linear-Time Sequence Modeling with Selective State Spaces
Adversarial Diffusion Distillation
Instruction Tuning with Human Curriculum
Initializing Models with Larger Ones
Improving Sample Quality of Diffusion Models Using Self-Attention Guidance
GPT4Vis: What Can GPT-4 Do for Zero-shot Visual Recognition?
TaskWeaver: A Code-First Agent Framework
Efficient LLM Inference on CPUs
Igniting Language Intelligence: The Hitchhiker’s Guide From Chain-of-Thought Reasoning to Language Agents
STaR: Bootstrapping Reasoning With Reasoning
Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch