KwaiAgents: Generalized Information-seeking Agent System with Large Language Models
Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4
Fast Inference of Mixture-of-Experts Language Models with Offloading
Retrieval-Augmented Generation for Large Language Models: A Survey
PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU
Pearl: A Production-ready Reinforcement Learning Agent
Are Emergent Abilities in Large Language Models just In-Context Learning?
Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models
Instruction Tuning for Large Language Models: A Survey
MegaBlocks: Efficient Sparse Training with Mixture-of-Experts
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Sequential Modeling Enables Scalable Learning for Large Vision Models
Magicoder: Source Code Is All You Need
Mamba: Linear-Time Sequence Modeling with Selective State Spaces
Adversarial Diffusion Distillation
Instruction Tuning with Human Curriculum
Initializing Models with Larger Ones
Improving Sample Quality of Diffusion Models Using Self-Attention Guidance
GPT4Vis: What Can GPT-4 Do for Zero-shot Visual Recognition?
TaskWeaver: A Code-First Agent Framework