Chenliang Xu

My research thrusts include computer vision and its relations to natural language, robotics and data science. I make every effort to keep this page up to date, but please refer to the publications for the most recent listing.

Learning Dynamics and Evolution towards Cognitive Understanding of Videos

A fundamental capability of human intelligence is the ability to learn to act by watching instructional videos. This capability is reflected in abstracting and summarizing instructional procedures as well as in answering questions such as "why" and "how" something happened in a video. This project aims to build computational models that perform well on the above tasks, which require, beyond the conventional recognition of objects, actions and attributes in the scene, higher-order inference of the relations among them. Here, higher-order inference refers to inference that cannot be answered immediately from direct observations and thus requires stronger semantics. The research will develop a framework for higher-order inference in understanding web instructional videos, such that models devised in this framework are capable not only of discovering and captioning the procedures that constitute an instructional event but also of answering questions such as why and how something happened. The framework is built on a video story graph that models the dynamics (the composition of actions at different scales) and evolution (the change in object states and attributes) of a video, and it supports higher-order inference on top of deep learning units and the incorporation of an external knowledge graph in a unified framework. Methodologies to extract such video story graphs and to use them to discover and caption procedures and to answer questions will be explored.
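
For concreteness, the following is a minimal sketch of how a video story graph might be represented as a data structure; the node and edge fields, the relation names, and the toy "why" query are illustrative assumptions rather than the project's actual design.

    # A minimal sketch of a video-story-graph data structure (illustrative only).
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class StepNode:
        step_id: int
        time_span: Tuple[float, float]             # start/end in seconds
        actions: List[str]                         # composed actions at different scales
        state_changes: Dict[str, Tuple[str, str]]  # object -> (state before, state after)

    @dataclass
    class StoryGraph:
        nodes: Dict[int, StepNode] = field(default_factory=dict)
        edges: List[Tuple[int, int, str]] = field(default_factory=list)  # (src, dst, relation)

        def add_step(self, node: StepNode) -> None:
            self.nodes[node.step_id] = node

        def relate(self, src: int, dst: int, relation: str) -> None:
            # relation could be "before", "enables", "causes", ...
            self.edges.append((src, dst, relation))

        def why(self, obj: str, new_state: str) -> List[StepNode]:
            # Toy higher-order query: which steps changed `obj` into `new_state`?
            return [n for n in self.nodes.values()
                    if obj in n.state_changes and n.state_changes[obj][1] == new_state]

    # Toy usage: one step that changes an egg from "raw" to "whisked".
    g = StoryGraph()
    g.add_step(StepNode(0, (3.0, 9.5), ["crack egg", "whisk"], {"egg": ("raw", "whisked")}))
    print(g.why("egg", "whisked"))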

Applying Video Segmentation to Coarse-grain Mapping Operators in Molecular Simulations

We apply advances in computer vision to improve models of multiscale systems in chemistry. Multiscale systems describe chemical and physical processes that occur on many different time and spatial scales; for example, both very fast and very slow motions may contribute to the overall process. In both the computer processing of videos and the modeling of multiscale chemical systems, reducing complexity by removing extraneous details is essential. Without removing some model details, simulating multiscale processes like DNA transcription or the peptide aggregation that leads to plaque formation in Alzheimer's disease is impossible. Current approaches to reducing the number of atoms in a model rely on intuition and tradition because of the near-infinite ways in which atoms can be removed or combined. We are developing a novel approach built upon advances in video segmentation, the process of identifying the foreground, background, and objects in a video. Surprisingly, the same mathematical structure can be applied to chemical systems, and that is the goal of this research.
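
To make the analogy concrete, below is a minimal sketch of graph-partition-based coarse-graining: atoms become graph nodes, bonds become edges, and a spectral partition groups atoms into coarse-grain beads, much as a video graph is partitioned into segments. The toy molecule, the use of off-the-shelf spectral clustering, and the number of beads are illustrative assumptions, not our actual method.

    # A minimal sketch of coarse-graining by graph partitioning (illustrative only).
    import numpy as np
    from sklearn.cluster import SpectralClustering

    n_atoms = 6
    bonds = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]   # a toy linear chain

    # Build a symmetric adjacency (affinity) matrix from the bond list.
    affinity = np.zeros((n_atoms, n_atoms))
    for i, j in bonds:
        affinity[i, j] = affinity[j, i] = 1.0

    # Partition the atom graph into 2 coarse-grain beads.
    labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(affinity)
    print(labels)   # e.g., atoms 0-2 map to one bead and atoms 3-5 to another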

Audio-Visual Scene Understanding

Understanding the scenes around us, i.e., recognizing objects, human actions and events, and inferring their spatial, temporal, correlative and causal relations, is a fundamental capability of human intelligence. Similarly, designing computer algorithms that can understand scenes is a fundamental problem in artificial intelligence. Humans consciously or unconsciously use all five senses (vision, audition, taste, smell, and touch) to understand a scene, as different senses provide complementary information. For example, watching a movie with the sound muted makes it very difficult to understand the movie, and walking on a street with eyes closed and without other guidance can be dangerous. Existing machine scene understanding algorithms, however, are designed to rely on just a single modality. Take the two most commonly used senses, vision and audition, as an example: there are scene understanding algorithms designed for each modality in isolation, but no systematic investigations have been conducted to integrate the two modalities towards more comprehensive audio-visual scene understanding. Designing algorithms that jointly model the audio and visual modalities towards a complete audio-visual scene understanding is important, not only because this is how humans understand scenes, but also because it will enable novel applications in many fields. These fields include multimedia (video indexing and scene editing), healthcare (assistive devices for visually and aurally impaired people), surveillance and security (comprehensive monitoring of suspicious activities), and virtual and augmented reality (generation and alteration of visuals and/or soundtracks).

Cross-Modal Audio-Visual Generation

(generated from sound)
Cross-modal perception, or intersensory phenomenon, has been a long-standing research topic in psychology and neurology, and various studies have discovered strong correlations in human perception of auditory and visual stimuli. Despite many existing works in computational multimodal modeling, a large portion of the effort has focused on indexing and retrieval of multimedia content. Although these works explore joint representations of multiple modalities and their correlations, they do not need to model the fine details of the samples. Here, the focus of my research is on fine-grained cross-modal audio-visual generation, which advances the frontier of multimodal modeling. We have developed audio-visual source association algorithms that are able to segment corresponding audio-visual data pairs, and we have created deep generative neural networks trained adversarially that are able to generate one modality (audio or visual) from the other (visual or audio). The outputs of cross-modal generation are beneficial to many applications, such as aiding hearing- or visually-impaired people and creating content in virtual reality. Relevant publications are listed below; a small illustrative code sketch follows the list.
  1. L. Chen, S. Srivastava, Z. Duan and C. Xu. Deep Cross-Modal Audio-Visual Generation. arXiv:1704.08292, 2017.
  2. B. Li, C. Xu and Z. Duan. Audio-Visual Source Association for String Ensembles through Multi-Modal Vibrato Analysis. In Proceedings of Sound and Music Computing (SMC), 2017.
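
The sketch below shows the general shape of conditional adversarial cross-modal generation in PyTorch: a generator maps a visual embedding plus noise to an audio spectrogram, and a discriminator scores (visual, audio) pairs. The layer sizes, the visual-to-audio direction, and the fully connected architecture are illustrative assumptions, not the networks used in the papers above.

    # A minimal sketch of cross-modal adversarial generation (illustrative only).
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        """Maps a visual embedding plus noise to a small audio spectrogram."""
        def __init__(self, vis_dim=128, noise_dim=64, spec_shape=(1, 64, 64)):
            super().__init__()
            self.spec_shape = spec_shape
            out_dim = spec_shape[0] * spec_shape[1] * spec_shape[2]
            self.net = nn.Sequential(
                nn.Linear(vis_dim + noise_dim, 512), nn.ReLU(),
                nn.Linear(512, out_dim), nn.Tanh())

        def forward(self, vis_emb, noise):
            out = self.net(torch.cat([vis_emb, noise], dim=1))
            return out.view(-1, *self.spec_shape)

    class Discriminator(nn.Module):
        """Scores whether a (visual embedding, spectrogram) pair is real or generated."""
        def __init__(self, vis_dim=128, spec_shape=(1, 64, 64)):
            super().__init__()
            in_dim = vis_dim + spec_shape[0] * spec_shape[1] * spec_shape[2]
            self.net = nn.Sequential(
                nn.Linear(in_dim, 512), nn.LeakyReLU(0.2),
                nn.Linear(512, 1), nn.Sigmoid())

        def forward(self, vis_emb, spec):
            return self.net(torch.cat([vis_emb, spec.flatten(1)], dim=1))

    # Adversarial training would alternate updates of D and G on (visual, audio) pairs.
    G, D = Generator(), Discriminator()
    v, z = torch.randn(4, 128), torch.randn(4, 64)
    fake_spec = G(v, z)        # (4, 1, 64, 64)
    score = D(v, fake_spec)    # (4, 1), in (0, 1)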

Sequence Discovery, Reasoning and Summarization in Video Storytelling

(example output of v2t)
The problem of generating natural language descriptions of images and videos has been steadily gaining prominence in the computer vision community and beyond. It is important for at least three reasons: i) transducing visual data into textual data would permit well-understood text-based indexing and retrieval mechanisms essentially for free; ii) fine-grained object models and region labeling would add significant richness to multimedia retrieval techniques; and iii) grounding representations of visual data in natural language has great potential to overcome the inherent semantic ambiguity of closed-world recognition tasks. Compared to images, however, videos contain rich temporal structures and causalities and hence introduce a new level of difficulty. Here, the focus of my research is on designing efficient, scalable methods that harness this temporal richness in video storytelling by automatically discovering sequences, reasoning about their ordering, and summarizing their content in natural language. We have developed early prototypes that use both bottom-up and top-down information to translate short clips to text. More recently, we have developed deep neural network methods that automatically discover and summarize procedures in long, untrimmed instructional videos. The YouCook dataset we collected has been widely used in the high-level video understanding community. Relevant publications are listed below; a small illustrative code sketch follows the list.
  1. L. Ding and C. Xu. TricorNet: A Hybrid Temporal Convolutional and Recurrent Network for Video Action Segmentation. arXiv:1705.07818, 2017.
  2. L. Zhou, C. Xu and J. J. Corso. ProcNets: Learning to Segment Procedures in Untrimmed and Unconstrained Videos. arXiv:1703.09788, 2017.
  3. L. Zhou, C. Xu, P. Koch and J. J. Corso. Watch What You Just Said: Image Captioning with Text-Conditional Attention. arXiv:1606.04621, 2017.
  4. P. Das, C. Xu, R. F. Doell and J. J. Corso. A Thousand Frames in Just a Few Words: Lingual Description of Videos through Latent Topics and Sparse Object Stitching. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
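
As a point of reference, the following is a minimal sequence-to-sequence video captioner: an LSTM encoder summarizes per-frame features, and an LSTM decoder emits words conditioned on the clip summary. The feature dimension, hidden size, and vocabulary size are assumed values, and this generic sketch is not the ProcNets or TricorNet architectures cited above.

    # A minimal sketch of a sequence-to-sequence video captioner (illustrative only).
    import torch
    import torch.nn as nn

    class VideoCaptioner(nn.Module):
        def __init__(self, feat_dim=2048, hidden=512, vocab_size=10000):
            super().__init__()
            self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.embed = nn.Embedding(vocab_size, hidden)
            self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab_size)

        def forward(self, frame_feats, captions):
            # frame_feats: (batch, num_frames, feat_dim); captions: (batch, num_words)
            _, (h, c) = self.encoder(frame_feats)      # summarize the clip
            words = self.embed(captions)               # teacher-forced word inputs
            dec_out, _ = self.decoder(words, (h, c))   # condition on the video state
            return self.out(dec_out)                   # (batch, num_words, vocab_size)

    # Usage with random tensors standing in for real features and tokenized captions.
    model = VideoCaptioner()
    logits = model(torch.randn(2, 30, 2048), torch.randint(0, 10000, (2, 12)))
    print(logits.shape)   # torch.Size([2, 12, 10000])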

Fine-Grained Recognition and Segmentation of Actors and Actions in Videos


(example frames from A2D dataset)
Action is at the heart of video understanding. As such, it has received a significant amount of attention in the last decade. The emphasis has moved from small datasets of a handful of actions to large datasets with many dozens of actions, and from constrained domains like sports to videos in the wild. However, existing works address only a small subset of the broader action understanding problem. First, they all assume that the agent of the action, which we call the actor, is a human adult, ignoring the diversity of actions performed by other actors. Second, the prior literature largely focuses on action recognition, posed as the classification of a temporally pre-trimmed clip into one of k action classes in a closed-world setting. Here, my research overcomes both narrow viewpoints and introduces a new level of generality to the action understanding problem by considering multiple classes of actors undergoing multiple classes of actions. I am interested in simultaneously answering the set of 3W questions: what action is happening, who is performing the action, and where is the action happening in space and time. We have developed various methods that advance probabilistic graphical models of actor-action interactions, and we have found that inferring actors and actions jointly outperforms inferring them independently. Our work has also enriched the scope of action understanding to consider temporally untrimmed long videos, adaptive video scales, and weakly supervised learning settings. Relevant publications are listed below; a small illustrative code sketch follows the list.
  1. Y. Yan, C. Xu, D. Cai and J. J. Corso. Weakly Supervised Actor-Action Segmentation via Robust Multi-Task Ranking. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  2. C. Xu, C. Xiong and J. J. Corso. Action Understanding with Multiple Classes of Actors. arXiv:1704.08723, 2017.
  3. T. Han, H. Yao, C. Xu, X. Sun, Y. Zhang and J. J. Corso. Dancelets Mining for Video Recommendation Based on Dance Styles. IEEE Transactions on Multimedia (TMM), 2017.
  4. C. Xu and J. J. Corso. Actor-Action Semantic Segmentation with Grouping Process Models. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  5. C. Xu, S.-H. Hsieh, C. Xiong and J. J. Corso. Can Humans Fly? Action Understanding with Multiple Classes of Actors. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
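
The toy example below illustrates why joint actor-action inference can outperform independent inference: independent argmaxes may select an incompatible pair such as "adult flying", whereas a joint score over the actor-action product space respects pair compatibility. The scores and the compatibility table are made up for illustration and are not a trained model.

    # Joint vs. independent actor-action inference on made-up scores (illustrative only).
    import numpy as np

    actors = ["adult", "dog", "car"]
    actions = ["eating", "flying", "rolling"]

    actor_scores = np.array([0.5, 0.3, 0.2])      # unary actor evidence
    action_scores = np.array([0.2, 0.45, 0.35])   # unary action evidence

    # Compatibility of each (actor, action) pair; e.g., "adult flying" is implausible.
    compat = np.array([[1.0, 0.0, 0.2],
                       [1.0, 0.1, 0.3],
                       [0.0, 0.0, 1.0]])

    # Independent inference picks the top actor and the top action separately.
    independent = (actors[actor_scores.argmax()], actions[action_scores.argmax()])

    # Joint inference scores every pair and respects compatibility.
    joint = actor_scores[:, None] + action_scores[None, :] + np.log(compat + 1e-6)
    i, j = np.unravel_index(joint.argmax(), joint.shape)

    print("independent:", independent)           # ('adult', 'flying') -- implausible
    print("joint:", (actors[i], actions[j]))     # ('adult', 'eating')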

Supervoxels--Toward an Efficient Hierarchical Representation in Early Video Analysis


(supervoxels of streamgbh)
Images have many pixels; videos have more. It has thus become standard practice to first preprocess images and videos into more tractable sets, either by extracting salient points or by oversegmenting into superpixels. The outputs of this preprocessing are more perceptually meaningful than raw pixels, which are merely a consequence of digital sampling. However, the same practice does not fully exist in video analysis: although many methods do initially extract salient points or dense trajectories, few rely on a supervoxel segmentation, the video analog of a superpixel segmentation. Here, my research focuses on exploring the potential of supervoxel segmentations to advance various video analysis tasks and on designing efficient supervoxel algorithms that better model the space-time structure of video. We have created a supervoxel library and benchmark, LIBSVX, that is widely used in the early video analysis community. We have promoted the idea of streaming processing in video segmentation and developed the first streaming hierarchical supervoxel algorithm. The supervoxels generated by our algorithm are used in various video analysis tasks, such as object tracking, video object segmentation, and action localization. Relevant publications are listed below; a small illustrative code sketch follows the list.
  1. C. Xu and J. J. Corso. LIBSVX: A Supervoxel Library and Benchmark for Early Video Processing. International Journal of Computer Vision (IJCV), 2016.
  2. C. Xu, S. Whitt and J. J. Corso. Flattening Supervoxel Hierarchies by the Uniform Entropy Slice. In Proceedings of IEEE International Conference on Computer Vision (ICCV), 2013.
  3. C. Xu, C. Xiong and J. J. Corso. Streaming Hierarchical Video Segmentation. In Proceedings of European Conference on Computer Vision (ECCV), 2012.
  4. C. Xu and J. J. Corso. Evaluation of Super-Voxel Methods for Early Video Processing. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
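
For readers unfamiliar with supervoxels, the sketch below oversegments a video volume into space-time segments using SLIC from scikit-image as a stand-in; it is not LIBSVX or the streaming GBH algorithm referenced above, and the synthetic video and parameter values are arbitrary.

    # A minimal sketch of supervoxel oversegmentation with SLIC (illustrative only).
    import numpy as np
    from skimage.segmentation import slic

    # A synthetic color video: (frames, height, width, channels).
    video = np.random.rand(16, 120, 160, 3).astype(np.float32)

    # Treat time as a third spatial axis so each supervoxel spans space and time.
    labels = slic(video, n_segments=300, compactness=10.0, channel_axis=-1)

    print(labels.shape)             # (16, 120, 160): one supervoxel id per voxel
    print(len(np.unique(labels)))   # number of supervoxels actually produced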

Perception Analysis of Semantic Retention in Video Supervoxel Segmentation


(supervoxel video of baby crawling)
Video supervoxel segmentation has recently been established and applied to large-scale data processing, and it potentially serves as an intermediate representation for high-level video semantic extraction. Supervoxels are rich decompositions of video content: they capture object shape and motion well. Yet it has not been known whether a supervoxel segmentation retains the semantics of the underlying video content. Here, my research presents a preliminary study of human perception of video supervoxel segmentation and of its utility in guiding the design of machine vision algorithms. We have conducted a systematic study of how well actor and action semantics are retained in video supervoxel segmentation: human observers watch supervoxel segmentation videos and try to discriminate both the actor and the action. We have gathered and analyzed a large set of 640 human perceptions over 96 videos at 3 different supervoxel scales. The findings suggest that a significant amount of semantics is retained in the video supervoxel hierarchies, and the results have further inspired our design of a new supervoxel feature. Listed below are relevant publications:
  1. C. Xu, R. F. Doell, S. J. Hanson, C. Hanson and J. J. Corso. A Study of Actor and Action Semantic Retention in Video Supervoxel Segmentation. International Journal of Semantic Computing (IJSC), 2013.
  2. C. Xu, R. F. Doell, S. J. Hanson, C. Hanson and J. J. Corso. Are Actor and Action Semantics Retained in Video Supervoxel Segmentation? In Proceedings of IEEE International Conference on Semantic Computing (ICSC), 2013.