Chenliang Xu


Cross-Modal Audio-Visual Generation


Overview of Goals and Challenges:

Cross-modal perception, or the intersensory phenomenon, has been a long-standing research topic in psychology and neurology; various studies have found strong correlations in human perception between auditory and visual stimuli. Despite many existing works in computational multimodal modeling, a large portion of the effort has focused on indexing and retrieving multimedia content. Although these works explore joint representations of multiple modalities and their correlations, they do not need to model the fine-grained details of individual samples. The focus of our research is on fine-grained cross-modal audio-visual generation, which advances the frontier of multimodal modeling. We have developed audio-visual source association algorithms that segment corresponding audio-visual data pairs, and we have created deep generative neural networks, trained adversarially, that generate one modality (audio or visual) from the other (visual or audio). The outputs of cross-modal generation benefit many applications, such as aiding hearing- or visually-impaired users and content creation in virtual reality.
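The sketch below illustrates the general idea of conditional adversarial training for cross-modal generation in the sound-to-image direction: a generator maps an audio embedding plus noise to an image, and a discriminator scores (image, audio embedding) pairs. It is a minimal illustration, not the architecture from our publications; the dimensions, network sizes, and the random stand-in batch are assumptions for demonstration.

```python
# Minimal sketch of conditional adversarial sound-to-image generation.
# All sizes and the synthetic "data" are illustrative assumptions.
import torch
import torch.nn as nn

AUDIO_DIM, NOISE_DIM, IMG_PIXELS = 128, 100, 64 * 64

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(AUDIO_DIM + NOISE_DIM, 512), nn.ReLU(),
            nn.Linear(512, IMG_PIXELS), nn.Tanh(),  # image pixels in [-1, 1]
        )

    def forward(self, audio_emb, noise):
        return self.net(torch.cat([audio_emb, noise], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_PIXELS + AUDIO_DIM, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1),  # real/fake score for an (image, audio) pair
        )

    def forward(self, image, audio_emb):
        return self.net(torch.cat([image, audio_emb], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# One illustrative training step on a random batch standing in for paired
# audio-visual data (e.g., instrument sounds and performance images).
audio = torch.randn(16, AUDIO_DIM)
real_img = torch.rand(16, IMG_PIXELS) * 2 - 1

# Discriminator update: real pairs -> 1, generated pairs -> 0.
noise = torch.randn(16, NOISE_DIM)
fake_img = G(audio, noise).detach()
loss_d = bce(D(real_img, audio), torch.ones(16, 1)) + \
         bce(D(fake_img, audio), torch.zeros(16, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator update: produce images that fool the discriminator.
noise = torch.randn(16, NOISE_DIM)
loss_g = bce(D(G(audio, noise), audio), torch.ones(16, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The visual-to-audio direction follows the same pattern with the roles of the two modalities swapped, e.g., conditioning a spectrogram generator on an image embedding.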

Related Publications:

  1. Deep cross-modal audio-visual generation. L. Chen, S. Srivastava, Z. Duan and C. Xu. ACM International Conference on Multimedia Thematic Workshops (ACMMMW), 2017.
  2. Audio-visual source association for string ensembles through multi-modal vibrato analysis. B. Li, C. Xu and Z. Duan. Sound and Music Computing (SMC), 2017.