CodeT5+: Open Code Large Language Models for Code Understanding and Generation
VanillaNet: the Power of Minimalism in Deep Learning
QLoRA: Efficient Finetuning of Quantized LLMs
SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities
LLM-Pruner: On the Structural Pruning of Large Language Models
Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Training language models to follow instructions with human feedback
Language Models Trained on Media Diets Can Predict Public Opinion
LoRA: Low-Rank Adaptation of Large Language Models
Pretraining Without Attention
ImageBind: One Embedding Space To Bind Them All
ZipIt! Merging Models from Different Tasks without Training
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
CodeGen2: Lessons for Training LLMs on Programming and Natural Languages
Shap-E: Generating Conditional 3D Implicit Functions
OPT: Open Pre-trained Transformer Language Models
LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions
Large Language Models Can Self-Improve
Text-to-Audio Generation using Instruction-Tuned LLM and Latent Diffusion Model
WizardLM: Empowering Large Language Models to Follow Complex Instructions