In our cross-modal audio-visual generation work, we introduce the problem of cross-modal audio-visual generation and make the first attempt to use conditional Generative Adversarial Networks (GANs) for intersensory generation. Based on the audio-visual correlation we have studied, we devise a cascade GAN approach to generate talking-face videos in the 2D talking-head avatar generation work. Subsequently, in the 2.5D generation work, we propose a 3D-aware generative network along with a hybrid embedding module and a non-linear composition module. Finally, in the 3D generation work, we propose a learning-based lighting model that, in combination with a high-quality 3D face tracking algorithm, provides a method for subtle and robust facial motion transfer from a regular video to a 3D photo-realistic avatar. Aside from the computational methods, we present a survey and benchmark to discuss what constitutes good talking-head video generation. We define identity preservation, lip synchronization, high video quality, and natural spontaneous motion as desired properties of a good talking-head video. By conducting a thorough analysis across several state-of-the-art talking-head generation approaches, we aim to uncover the merits and drawbacks of current methods and point out promising directions for future work.
Advisor: Prof. Chenliang Xu (Computer Science)
Committee: Prof. Ehsan Hoque (Computer Science), Prof. Jiebo Luo (Computer Science), Prof. Zhiyao Duan (Electrical and Computer Engineering), and Prof. Xin Li (Electrical and Computer Engineering, Louisiana State University)
Chair: Prof. Mujdat Cetin (Electrical and Computer Engineering)