This week, we discuss the implications of text-to-video generation and speculate on the possibilities (and limitations) of this incredible technology, with some hot takes. Dat Ngo, ML Solutions Engineer at Arize, is joined by community member and AI engineer Vibhu Sapra to review OpenAI’s technical report on its text-to-video generation model, Sora.
According to OpenAI, “Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.” At the time of this recording, the model had not been widely released; it was becoming available to red teamers to assess risk, and to artists to provide feedback on how Sora could be helpful for creatives.
At the end of our discussion, we also explore EvalCrafter: Benchmarking and Evaluating Large Video Generation Models, a recent paper that proposes a new framework and pipeline for exhaustively evaluating the performance of generated videos, which we consider in light of Sora.
To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.
Breaking Down EvalGen: Who Validates the Validators?
Keys To Understanding ReAct: Synergizing Reasoning and Acting in Language Models
Demystifying Chronos: Learning the Language of Time Series
Anthropic Claude 3
Reinforcement Learning in the Era of LLMs
RAG vs Fine-Tuning
Phi-2 Model
HyDE: Precise Zero-Shot Dense Retrieval without Relevance Labels
A Deep Dive Into Generative's Newest Models: Gemini vs Mistral (Mixtral-8x7B)–Part I
How to Prompt LLMs for Text-to-SQL: A Study in Zero-shot, Single-domain, and Cross-domain Settings
The Geometry of Truth: Emergent Linear Structure in LLM Representation of True/False Datasets
Towards Monosemanticity: Decomposing Language Models With Dictionary Learning
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Explaining Grokking Through Circuit Efficiency
Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior
Skeleton of Thought: LLMs Can Do Parallel Decoding
Llama 2: Open Foundation and Fine-Tuned Chat Models
Lost in the Middle: How Language Models Use Long Contexts
Orca: Progressive Learning from Complex Explanation Traces of GPT-4