Chenliang Xu


Audio-Visual Scene Understanding


PI: Chenliang Xu, Co-PI: Zhiyao Duan
Graduate Students: Yapeng Tian, Lele Chen, Hao Huang, Wei Zhang, Sefik Emre Eskimez, Bochen Li, Rui Lu, Yujia Yan
Undergraduate Students: Justin Goodman, Marc Moore, Chenxiao Guan
Award Number: NSF IIS 1741472
Award Title: BIGDATA: F: Audio-Visual Scene Understanding
Award Amount: $666,000.00
Duration: September 1, 2017 to August 31, 2021 (Estimated)

Overview of Goals and Challenges:

Understanding scenes around us, i.e., recognizing objects, human actions and events, and inferring their spatial, temporal, correlative and causal relations, is a fundamental capability in human intelligence. Similarly, designing computer algorithms that can understand scenes is a fundamental problem in artificial intelligence. Humans consciously or unconsciously use all five senses (vision, audition, taste, smell, and touch) to understand a scene, as different senses provide complementary information. For example, watching a movie with the sound muted makes it very difficult to understand the movie; walking on a street with eyes closed and without other guidance can be dangerous. Existing machine scene understanding algorithms, however, are designed to rely on just a single modality. Taking the two most commonly used senses, vision and audition, as an example, there are scene understanding algorithms designed to handle each modality on its own. However, no systematic investigations have been conducted to integrate these two modalities towards more comprehensive audio-visual scene understanding. Designing algorithms that jointly model audio and visual modalities towards a complete audio-visual scene understanding is important, not only because this is how humans understand scenes, but also because it will enable novel applications in many fields. These fields include multimedia (video indexing and scene editing), healthcare (assistive devices for visually and aurally impaired people), surveillance security (comprehensive monitoring of suspicious activities), and virtual and augmented reality (generation and alteration of visuals and/or sound tracks).

This project aims to achieve human-like audio-visual scene understanding that overcomes the limitations of single-modality approaches through big data analysis of Internet videos. The core idea is to learn to parse a scene into elements and infer their relations, i.e., forming an audio-visual scene graph. Specifically, an element of the audio-visual scene can be a joint audio-visual component of an event when the event shows correlated audio and visual features. It can also be an audio component or a visual component if the event only appears in one modality. The relations between the elements include spatial and temporal relations at a lower level, as well as correlative and causal relations at a higher level. Through this scene graph, information across the two modalities can be extracted, exchanged, and interpreted. The investigators propose three main research thrusts: (1) Learning joint audio-visual representations of scene elements; (2) Learning a scene graph to organize scene elements; and (3) Cross-modality scene completion. Each of the three research thrusts explores a dimension in the space of audio-visual scene understanding, yet they are also interconnected. For example, the audio-visual scene elements are nodes in the scene graph, and the scene graph, in turn, guides the learning of relations among scene elements with structured information; cross-modality scene completion generates missing data in the scene graph and is necessary for a good audio-visual understanding of the scene.
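To make the scene-graph idea concrete, the following is a minimal sketch of how such a structure could be represented. The class and field names (SceneElement, Relation, AudioVisualSceneGraph, audio_feat, visual_feat) are illustrative assumptions, not the project's actual implementation; nodes may carry audio and/or visual features, and edges carry the spatial, temporal, correlative, or causal relation types described above.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

import numpy as np


class RelationType(Enum):
    """Relation categories from the project overview."""
    SPATIAL = "spatial"          # lower-level relation
    TEMPORAL = "temporal"        # lower-level relation
    CORRELATIVE = "correlative"  # higher-level relation
    CAUSAL = "causal"            # higher-level relation


@dataclass
class SceneElement:
    """A node: a joint audio-visual, audio-only, or visual-only event component."""
    name: str
    audio_feat: Optional[np.ndarray] = None   # None if the event is visual-only
    visual_feat: Optional[np.ndarray] = None  # None if the event is audio-only

    @property
    def modality(self) -> str:
        if self.audio_feat is not None and self.visual_feat is not None:
            return "audio-visual"
        return "audio" if self.audio_feat is not None else "visual"


@dataclass
class Relation:
    """A typed, directed edge between two scene elements."""
    src: SceneElement
    dst: SceneElement
    kind: RelationType


@dataclass
class AudioVisualSceneGraph:
    """Container tying scene elements (nodes) and relations (edges) together."""
    elements: List[SceneElement] = field(default_factory=list)
    relations: List[Relation] = field(default_factory=list)

    def add_element(self, element: SceneElement) -> SceneElement:
        self.elements.append(element)
        return element

    def relate(self, src: SceneElement, dst: SceneElement, kind: RelationType) -> None:
        self.relations.append(Relation(src, dst, kind))


# Toy usage: a barking dog (audio-visual) spatially related to a silent frisbee (visual-only).
graph = AudioVisualSceneGraph()
dog = graph.add_element(SceneElement("dog_barking",
                                     audio_feat=np.zeros(128),
                                     visual_feat=np.zeros(512)))
frisbee = graph.add_element(SceneElement("frisbee", visual_feat=np.zeros(512)))
graph.relate(dog, frisbee, RelationType.SPATIAL)
print(dog.modality, frisbee.modality)
print([(r.src.name, r.kind.value, r.dst.name) for r in graph.relations])
```

In the project itself, node features would come from learned joint audio-visual representations (Thrust 1), the relations would be predicted rather than hand-specified (Thrust 2), and cross-modality scene completion (Thrust 3) would fill in missing audio or visual features for nodes observed in only one modality.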

Current Results:

Data, demos, and software are available for download under the individual tasks below.


Broader Impacts:

This research is of key interest to both the computer vision and computer audition communities. First, scene understanding is a long-standing topic in both communities and has so far been tackled independently in each; this project ties research in these communities together and advances both. Second, our learned bimodal representations can be used to solve other problems where relying on a single modality is difficult. Third, our audio-visual computational models offer insight into scientific questions about how humans understand videos, which can lead to better designs of future AI systems.

This research is intellectually transformative in advancing multi-modal modeling and robot perception. The methodologies and techniques developed in this project for learning audio-visual bimodal representations can be applied to multi-modal modeling with data from various sources, such as tactile sensing, GPS, radar, and laser sensors, leading to a coherent robot perception system. One immediate application could be the perception problem in autonomous vehicles. The techniques developed in this research also enable novel applications in many other fields, including multimedia (video indexing and scene editing), healthcare (assistive devices for visually and aurally impaired people), surveillance security (comprehensive monitoring of suspicious activities), and virtual and augmented reality (generation and alteration of visuals and/or sound tracks).

Publications from the Team:

  1. R. A. Jacobs and C. Xu. Can multisensory training aid visual learning?: A computational investigation. Journal of Vision, 2019. [coming soon]
  2. H. Huang, L. Zhou, W. Zhang, J. J. Corso, and C. Xu. Dynamic graph modules for modeling object-object interactions in activity recognition. In Proc. of British Machine Vision Conference, 2019. [pdf]
  3. W. Zhao, S. Wang, Z. Xie, J. Shi and C. Xu. GAN-EM: GAN based EM learning framework. In Proc. of International Joint Conference on Artificial Intelligence, 2019. [pdf]
  4. B. Li, K. Dinesh, C. Xu, G. Sharma, and Z. Duan. Online audio-visual source association for chamber music performances. Transactions of the International Society for Music Information Retrieval, 2019. [pdf]
  5. L. Chen, R. Maddox, Z. Duan, and C. Xu. Hierarchical cross-modal talking face generation with dynamic pixel-wise loss. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition, 2019. [pdf]
  6. J. Shi, J. Xu, B. Gong and C. Xu. Not all frames are equal: Weakly-supervised video grounding with contextual similarity and visual clustering losses. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition, 2019. [pdf]
  7. Y. Tian, C. Guan, J. Goodman, M. Moore and C. Xu. Audio-visual interpretable and controllable video captioning. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2019. [pdf]
  8. L. Chen, H. Zheng, R. Maddox, Z. Duan and C. Xu. Sound to visual: Hierarchical cross-modal talking face video generation. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2019. [pdf]
  9. Y. Tian, J. Shi, B. Li, Z. Duan and C. Xu. Audio-visual event localization in the wild. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2019. [pdf]
  10. B. Li, X. Liu, K. Dinesh, Z. Duan, and G. Sharma. Creating a multi-track classical music performance dataset for multi-modal music analysis: challenges, insights, and applications. IEEE Transactions on Multimedia, 2019. [pdf]
  11. Z. Duan, S. Essid, C. Liem, G. Richard, and G. Sharma. Audio-visual analysis of music performances: Overview of an emerging field. IEEE Signal Processing Magazine, 2019. [pdf]
  12. R. Lu, Z. Duan, and C. Zhang. Audio-visual deep clustering for speech separation. IEEE/ACM Transactions on Audio Speech and Language Processing, 2019. [pdf]
  13. Y. Tian, J. Shi, B. Li, Z. Duan and C. Xu. Audio-visual event localization in unconstrained videos. In Proc. of European Conference on Computer Vision, 2018. [pdf]
  14. L. Chen, Z. Li, R. Maddox, Z. Duan and C. Xu. Lip movements generation at a glance. In Proc. of European Conference on Computer Vision, 2018. [pdf]
  15. L. Ding and C. Xu. Weakly-supervised action segmentation with iterative soft boundary assignment. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition, 2018. [pdf]
  16. S. E. Eskimez, R. Maddox, C. Xu and Z. Duan. Generating talking face landmarks from speech. In Proc. of International Conference on Latent Variable Analysis and Signal Separation, 2018. [pdf]
  17. L. Zhou, C. Xu and J. J. Corso. Towards automatic learning of procedures from web instructional videos. In Proc. of AAAI Conference on Artificial Intelligence, 2018. [pdf]
  18. R. Lu, Z. Duan, and C. Zhang. Listen and look: audio-visual matching assisted speech source separation. IEEE Signal Processing Letters, 2018. [pdf]
  19. B. Li, A. Maezawa, and Z. Duan. Skeleton plays piano: End-to-end online generation of pianist body movements from MIDI performance. In Proc. of International Society for Music Information Retrieval Conference, 2018. [pdf]
  20. X. Wang, R. Stables, B. Li, and Z. Duan. Score-aligned polyphonic microtiming estimation. In Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing, 2018. [pdf]
Acknowledgements: This material is based upon work supported by the National Science Foundation under Grant No. 1741472.
Disclaimer: Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Point of Contact: Chenliang Xu
Date of Last Update: August 2019