Tri-Ergon: Fine-grained Video-to-Audio Generation with Multi-modal Conditions and LUFS Control

AAAI 2025
Model architecture

Tri-Ergon Overview. (a) Unbalanced multi-modal prompting module, which accepts any combination of prompts. (b) Stereo loudness control with LUFS. (c) Timing embeddings for variable audio lengths. (d) The DiT diffusion backbone of Tri-Ergon.

Abstract

Video-to-audio (V2A) generation uses visual-only video features to produce realistic sounds that match the scene. However, current V2A models often lack fine-grained control over the generated audio, especially over loudness variation and the incorporation of multi-modal conditions. To overcome these limitations, we introduce Tri-Ergon, a diffusion-based V2A model that incorporates textual, auditory, and visual prompts to enable detailed and semantically rich audio synthesis. We further propose a Loudness Units relative to Full Scale (LUFS) embedding, which allows precise manual control of loudness changes over time for individual audio channels, enabling our model to effectively capture the intricate correlation between video and audio in real-world Foley workflows. Tri-Ergon generates 44.1 kHz high-fidelity stereo audio clips of varying lengths up to 60 seconds, significantly outperforming existing state-of-the-art V2A methods, which typically generate mono audio of a fixed duration.
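To make the LUFS-conditioning idea concrete, below is a minimal sketch of how a per-channel, time-varying LUFS curve could be measured from a reference clip and supplied as a control signal like the one described above. This is not the authors' implementation: the pyloudnorm dependency, the lufs_curve helper, and the window/hop lengths are all illustrative assumptions.

# Minimal sketch (not the authors' code): sliding-window LUFS per channel,
# assuming the third-party `pyloudnorm` package. Window and hop lengths
# are illustrative choices.
import numpy as np
import pyloudnorm as pyln

def lufs_curve(audio: np.ndarray, sr: int = 44100,
               window_s: float = 1.0, hop_s: float = 0.5) -> np.ndarray:
    """Compute a time-varying LUFS curve for each channel.

    audio: float array of shape (num_samples, num_channels).
    Returns an array of shape (num_windows, num_channels).
    """
    meter = pyln.Meter(sr)  # BS.1770 K-weighted meter
    win, hop = int(window_s * sr), int(hop_s * sr)
    curves = []
    for start in range(0, audio.shape[0] - win + 1, hop):
        chunk = audio[start:start + win]
        # Measure each channel independently so the curves can drive
        # stereo loudness control per channel.
        curves.append([meter.integrated_loudness(chunk[:, c])
                       for c in range(audio.shape[1])])
    return np.asarray(curves)

# Example: a 5-second stereo clip of low-level noise.
audio = 0.1 * np.random.randn(5 * 44100, 2)
print(lufs_curve(audio).shape)  # (num_windows, 2)

Measuring each channel separately is what enables independent loudness control of the left and right channels; BS.1770 integrated loudness requires windows of at least 400 ms, hence the 1-second window in this sketch.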



Demo Video

Headphones on 😎🎧 for the 44.1 kHz stereo audio! The video may take a while to load; you can download the video here.

BibTeX

BibTeX (Coming Soon)