Tuesday, June 14, 2022
1:00 PM
WH 2506; https://rochester.zoom.us/j/3626732673 (hybrid)
Ph.D. Thesis Defense
Wei Xiong
University of Rochester
Guidance-driven Visual Synthesis with Generative Models
Visual synthesis has drawn increasing attention in recent years. It encompasses a set of tasks that synthesize and manipulate visual content from given inputs, including unconditional object generation, image-to-image translation, image inpainting, video generation, image enhancement, text-to-image synthesis, style transfer, and other applications. Visual synthesis provides an effective way of understanding the inherent nature of data.

In this thesis, we focus on two research directions in the field of visual synthesis and content creation. (1) Visually pleasing data synthesis. With the progress of generative models, high-resolution structured objects can now be generated successfully. However, for more complicated tasks such as scene generation and editing, synthesizing realistic data remains a major challenge. We devise effective generative models for such complicated visual synthesis tasks. (2) Recognition-oriented data synthesis. The generated visual data can be regarded as new samples for data augmentation and thereby benefit downstream tasks such as long-tailed recognition and few-shot learning. We explore how to control the generation process so that the synthesized data indeed benefit visual recognition systems.

To accomplish our goals in these research directions, we need to synthesize visual content that lies on the expected data manifold. For example, for visually pleasing data synthesis, the generated data should lie as close as possible to the manifold of the real data. For recognition-oriented synthesis, the generated data should lie in the space specified by the downstream task. It is non-trivial to synthesize such samples by merely learning from the data itself: the generative model is left uncertain about exactly what knowledge needs to be learned, so it either produces many artifacts or generates data that the downstream tasks do not need. To address this issue, in this thesis, unlike conventional data-driven synthesis, we explore guidance-driven visual synthesis; that is, when learning to synthesize content, we leverage reasonable guidance to constrain the learning process of the generative models so that the synthesized visual content lies on the expected manifold.
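As a simplified illustration (our own sketch, not a formulation quoted from the thesis), guidance-driven synthesis can be viewed as augmenting the usual data-fitting objective of a generative model $G$ with a guidance term that encodes the expected manifold or the downstream task:

\[
\mathcal{L}(G) \;=\; \mathcal{L}_{\mathrm{data}}(G) \;+\; \lambda\, \mathcal{L}_{\mathrm{guide}}(G),
\]

where $\mathcal{L}_{\mathrm{data}}$ is, for example, an adversarial or reconstruction loss, $\mathcal{L}_{\mathrm{guide}}$ measures agreement with the chosen guidance (such as structural cues for inpainting or a recognition loss from the downstream model), and $\lambda$ balances the two terms. The symbols $\lambda$, $\mathcal{L}_{\mathrm{data}}$, and $\mathcal{L}_{\mathrm{guide}}$ are generic placeholders rather than the thesis's exact notation.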

We apply this idea to both research directions. In the first part, we introduce guidance-driven visually pleasing data synthesis. Specifically, we investigate guided synthesis for image inpainting, unsupervised low-light image enhancement, video prediction, and caricature generation. In the second part, we explore guidance-driven synthesis for visual recognition. Specifically, we derive guidance from downstream tasks to modulate the generation of visual data and show that the generated data indeed benefit those tasks.

Advisor: Prof. Jiebo Luo (Computer Science)
Committee: Prof. Dan Gildea (Computer Science), Prof. Chenliang Xu (Computer Science), and Dr. Zhe Lin (Adobe)
Chair: Prof. Zhiyao Duan (Electrical and Computer Engineering)