Episode 11.10: Some remarks about the hardware resources needed to process Large Language Models.
Unmaking Sense

2023-11-22
Large Language Models are pretty big. The smallest usable ones have around 7 billion parameters; ChatGPT in its first (2022) version, based on GPT-3.5, had around 175 billion; many later ones are much larger. That limits what we can run on a local machine, but there are some workarounds.
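As a rough back-of-the-envelope illustration (not from the episode itself): the memory needed just to hold a model's weights is approximately the parameter count times the bytes per parameter, which is why quantization, i.e. storing weights in fewer bits, is one widely used workaround for local machines. The Python sketch below uses standard per-parameter sizes; the figures are lower bounds, since inference also needs memory for activations and the key-value cache.

# Rough estimate of the memory needed just to hold a model's weights
# at several common numeric precisions. Real usage is higher
# (activations, KV cache, runtime overhead), so treat these as lower bounds.

BYTES_PER_PARAM = {
    "fp32": 4.0,   # full precision
    "fp16": 2.0,   # half precision, a common serving default
    "int8": 1.0,   # 8-bit quantization
    "int4": 0.5,   # 4-bit quantization, popular for local machines
}

def weight_memory_gb(n_params: float, precision: str) -> float:
    """Gigabytes needed to store n_params weights at the given precision."""
    return n_params * BYTES_PER_PARAM[precision] / 1e9

for name, n in [("7B model", 7e9), ("GPT-3-class (175B)", 175e9)]:
    for p in ("fp16", "int4"):
        print(f"{name} at {p}: ~{weight_memory_gb(n, p):.0f} GB")

Run as written, this prints roughly 14 GB for a 7-billion-parameter model at fp16 versus about 4 GB at int4 (small enough for a typical laptop), while a 175-billion-parameter model needs on the order of 350 GB at fp16, far beyond consumer hardware even after quantization.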