Chenliang Xu

Video-To-Text Summarization


Overview of Goals and Challenges:

The problem of generating natural language descriptions of images and videos has been steadily gaining prominence in the computer vision community and beyond. It is important for at least three reasons: i) transducing visual data into textual data would permit well-understood text-based indexing and retrieval mechanisms essentially for free; ii) fine-grained object models and region labeling would add significant richness to multimedia retrieval techniques; and iii) grounding representations of visual data in natural language has great potential to overcome the inherent semantic ambiguity of closed-world recognition tasks. Videos, however, contain rich temporal structures and causalities that images lack, and hence introduce a new level of difficulty. The focus of our research is on designing efficient, scalable methods that harness the temporal richness of videos by automatically discovering sequences, reasoning about their orderings, and summarizing their content in natural language. We developed early prototypes that combine bottom-up and top-down information to translate short clips into text. More recently, we have developed methods utilizing deep neural networks to automatically discover and summarize procedures in long, untrimmed instructional videos. The YouCook dataset we collected has been widely used in the research community.
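The specific models vary across the publications below, but a common backbone of neural video captioning is an encoder-decoder network: per-frame visual features are encoded into a clip-level state, and a recurrent decoder emits the caption one token at a time. The following is a minimal PyTorch sketch of that generic recipe, not of any particular method from the papers listed below; all names, dimensions, and the random stand-in features are hypothetical.

    import torch
    import torch.nn as nn

    class VideoCaptioner(nn.Module):
        """Toy encoder-decoder: frame features in, caption logits out."""
        def __init__(self, feat_dim=2048, hidden_dim=512,
                     vocab_size=10000, embed_dim=256):
            super().__init__()
            self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, frame_feats, captions):
            # frame_feats: (B, T, feat_dim); captions: (B, L) token ids.
            _, h = self.encoder(frame_feats)       # summarize the clip into a state
            dec_in = self.embed(captions[:, :-1])  # teacher forcing: shift right
            dec_out, _ = self.decoder(dec_in, h)
            return self.out(dec_out)               # (B, L-1, vocab_size) logits

    # Usage with random tensors standing in for CNN frame descriptors
    # and tokenized captions (both hypothetical).
    model = VideoCaptioner()
    feats = torch.randn(2, 30, 2048)               # 2 clips, 30 frames each
    caps = torch.randint(0, 10000, (2, 12))        # 2 captions, 12 tokens each
    logits = model(feats, caps)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), caps[:, 1:].reshape(-1))

Training minimizes the cross-entropy between predicted and ground-truth next tokens; at test time the decoder would instead be run autoregressively from a start token.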

Related Publications:

  1. Audio-visual interpretable and controllable video captioning. Y. Tian, C. Guan, J. Goodman, M. Moore, and C. Xu. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019.
  2. Not all frames are equal: Weakly-supervised video grounding with contextual similarity and visual clustering losses. J. Shi, J. Xu, B. Gong, and C. Xu. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  3. Weakly-supervised action segmentation with iterative soft boundary assignment. L. Ding and C. Xu. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  4. Towards automatic learning of procedures from web instructional videos. L. Zhou, C. Xu, and J. J. Corso. AAAI Conference on Artificial Intelligence (AAAI), 2018.
  5. TricorNet: A hybrid temporal convolutional and recurrent network for video action segmentation. L. Ding and C. Xu. arXiv:1705.07818, 2017.
  6. Watch what you just said: Image captioning with text-conditional attention. L. Zhou, C. Xu, P. Koch, and J. J. Corso. ACM International Conference on Multimedia Thematic Workshops (ACMMMW), 2017.
  7. A thousand frames in just a few words: Lingual description of videos through latent topics and sparse object stitching. P. Das, C. Xu, R. F. Doell, and J. J. Corso. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2013.