Tuesday, May 17, 2022
1:00 PM
WH 2506; https://rochester.zoom.us/j/9241564885 (hybrid)
Ph.D. Thesis Defense
Lele Chen
University of Rochester
High-Fidelity Talking Avatar Video Generation
In the last two years, the COVID-19 pandemic has completely changed our daily lives, especially the way we communicate with each other. Telecommunication with photorealistic avatars in virtual or augmented reality is a promising path toward authentic face-to-face communication in 3D across physical distances, in which each communicating party feels the genuine co-located presence of the others. The recent rise of large-scale video data and high-performance computation has enabled many video avatar technologies. However, most avatars in AR today are cartoon-like (e.g., Apple Memoji, TikTok FaceAnimation, Hyprsense), while photo-realistic avatars (e.g., Siren) require large amounts of high-quality motion-capture data and the aid of computer graphics artists. In this dissertation, we define and study cross-modal visual generation. Specifically, we systematically investigate cross-modal audio-visual generation and then focus on the field of avatar video generation. Drawing on both generative adversarial networks and computer graphics, we work toward high-fidelity avatar generation step by step, driven by audio signals or monocular video.

In our cross-modal audio-visual generation work, we introduce the problem of cross-modal audio-visual generation and make the first attempt to use conditional Generative Adversarial Networks (GANs) for inter-sensory generation. Based on the audio-visual correlations we have studied, we devise a cascaded GAN approach to generate talking-face videos in our 2D talking-head avatar generation work. Subsequently, in our 2.5D generation work, we propose a 3D-aware generative network along with a hybrid embedding module and a non-linear composition module. Finally, in our 3D generation work, we propose a learning-based lighting model that, in combination with a high-quality 3D face tracking algorithm, provides a method for subtle and robust facial motion transfer from a regular video to a 3D photo-realistic avatar. Aside from these computational methods, we present a survey and benchmark to discuss what constitutes good talking-head video generation. We define identity preservation, lip synchronization, high video quality, and natural spontaneous motion as the desired properties of a good talking-head video. By conducting a careful analysis of several state-of-the-art talking-head generation approaches, we aim to uncover the merits and drawbacks of current methods and point out promising directions for future work.
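For readers unfamiliar with the conditional-GAN formulation referenced above, the sketch below illustrates the general idea: a generator produces an image conditioned on an audio embedding, and a discriminator scores (image, audio) pairs. This is a minimal PyTorch illustration with assumed dimensions and layer sizes, not the architecture used in the dissertation.

```python
# Minimal sketch of a conditional GAN for audio-to-image generation
# (illustrative only; sizes and module choices are assumptions).
import torch
import torch.nn as nn

AUDIO_DIM, NOISE_DIM, IMG_DIM = 128, 100, 64 * 64  # hypothetical dimensions

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Maps a noise vector concatenated with an audio embedding to a flattened image.
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + AUDIO_DIM, 512), nn.ReLU(),
            nn.Linear(512, IMG_DIM), nn.Tanh(),
        )

    def forward(self, z, audio_emb):
        return self.net(torch.cat([z, audio_emb], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # Judges whether an image is real and matches the conditioning audio.
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + AUDIO_DIM, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1), nn.Sigmoid(),
        )

    def forward(self, img, audio_emb):
        return self.net(torch.cat([img, audio_emb], dim=1))

G, D = Generator(), Discriminator()
bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_imgs, audio_emb):
    batch = real_imgs.size(0)
    real_lbl, fake_lbl = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator step: real (image, audio) pairs vs. generated pairs.
    z = torch.randn(batch, NOISE_DIM)
    fake_imgs = G(z, audio_emb).detach()
    loss_d = bce(D(real_imgs, audio_emb), real_lbl) + bce(D(fake_imgs, audio_emb), fake_lbl)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: fool the discriminator under the same audio condition.
    z = torch.randn(batch, NOISE_DIM)
    loss_g = bce(D(G(z, audio_emb), audio_emb), real_lbl)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

A practical system would replace the linear layers with a convolutional image decoder and a learned audio encoder (e.g., over spectrograms), but the adversarial conditioning structure is the same.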

Advisor: Prof. Chenliang Xu (Computer Science)
Committee: Prof. Ehsan Hoque (Computer Science), Prof. Jiebo Luo (Computer Science), Prof. Zhiyao Duan (Electrical and Computer Engineering), and Prof. Xin Li (Electrical and Computer Engineering, Louisiana State University)
Chair: Prof. Mujdat Cetin (Electrical and Computer Engineering)