Google Lyria 3: Inside New Music Generation Model

Muz News
Jeff Chang, Myriam Hamed Torres and Jason Baldridge from the Google DeepMind team join host Logan Kilpatrick for a deep dive into Lyria 3, Google's latest music generation model. Their conversation explores the transition from simple audio generation to a model that acts as a collaborative instrument, giving creators fine-grained control over mood, instrumentation and vocals. Learn more about the technical challenges of prompt adherence in music, the importance of "vibe" in human evaluations, and the future of layered, iterative music composition.

Timecodes:
00:00 - Introduction
01:00 - Defining Music Generation Models
01:40 - Lyria as New Instrument
03:05 - Connecting Language & Creative Intent
05:08 - Guest Backgrounds & Musical Journeys
07:57 - Demo: Instrumental Funk Jam
08:29 - Bridging Gap for Non-Musicians
12:03 - Demo: Exploring Lyrics & Vocals
15:07 - Magic of Iterative Co-Creation
15:40 - Meeting Users Across Expertise Spectrum
17:01 - Empowering New Musical Expressions
18:29 - Emotional & Communal Impact of Music
19:51 - Opportunities for Developers & Community
21:09 - Real-Time vs Song Generation Models
23:23 - Creating Experimental Sonic Landscapes
25:08 - Demo: Capturing Unexpectedness & Energy
28:33 - Evaluating Music through Taste & Expertise
31:30 - Diligence of Music Evaluation
31:52 - Future of Lyria & AI First Workflows
35:07 - Articulating Creative Vision through Language

www.deepmind.google/models/lyria/
