Open Access System for Information Sharing

Sound to Visual Scene Generation by Audio-to-Visual Latent Alignment

Authors
Kim, Sung-Bin; Senocak, Arda; Ha, Hyunwoo; Owens, Andrew; Oh, Tae-Hyun
Date Issued
2023-06-18
Publisher
IEEE/CVF
Abstract
How does audio describe the world around us? In this paper, we propose a method for generating an image of a scene from sound. Our method addresses the challenges of dealing with the large gaps that often exist between sight and sound. We design a model that works by scheduling the learning procedure of each model component to associate audio-visual modalities despite their information gaps. The key idea is to enrich the audio features with visual information by learning to align audio to visual latent space. We translate the input audio to visual features, then use a pre-trained generator to produce an image. To further improve the quality of our generated images, we use sound source localization to select the audio-visual pairs that have strong cross-modal correlations. We obtain substantially better results on the VEGAS and VGGSound datasets than prior approaches. We also show that we can control our model's predictions by applying simple manipulations to the input waveform, or to the latent space.
URI
https://oasis.postech.ac.kr/handle/2014.oak/116243
Article Type
Conference
Citation
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023-06-18
Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
