Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
TripoSR: Fast 3D Object Reconstruction from a Single Image
Diffusion Model-Based Image Editing: A Survey
The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation
Intent-based Prompt Calibration: Enhancing prompt optimization with synthetic boundary cases
Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision Models
BitDelta: Your Fine-Tune May Only Be Worth One Bit
Ring Attention with Blockwise Transformers for Near-Infinite Context
Premise Order Matters in Reasoning with Large Language Models
Generative Representational Instruction Tuning
DoRA: Weight-Decomposed Low-Rank Adaptation
Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
World Model on Million-Length Video And Language With RingAttention
Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models
Precise Zero-Shot Dense Retrieval without Relevance Labels
ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction
Relevance-guided Supervision for OpenQA with ColBERT