LW - LLMs for Alignment Research: a safety priority? by abramdemski
The Nonlinear Library: LessWrong

2024-04-04
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LLMs for Alignment Research: a safety priority?, published by abramdemski on April 4, 2024 on LessWrong. A recent short story by Gabriel Mukobi illustrates a near-term scenario where things go bad because new developments in LLMs allow LLMs to accelerate capabilities research without a correspondingly large...
