Tuesday, May 04, 2021
2:00 PM
https://rochester.zoom.us/s/3626732673
Ph.D. Thesis Proposal
Haitian Zheng
University of Rochester
Visual Content Manipulation with Correspondence Modeling and Representation Modeling
Visual content manipulation has drawn increasing attention in recent years. It encompasses a set of tasks that synthesize and manipulate visual content from high-level guidance inputs, including pose-guided person image transfer, example-guided image synthesis, layout-guided semantic image manipulation, text- or scene-graph-guided image manipulation, and semantics-guided inpainting, among other applications. Visual content manipulation provides an effective way of editing and recreating visual content, and it stimulates new applications in virtual reality and creative expression. This thesis studies the following two research directions in the field of guided visual content manipulation.

(1) Correspondence modeling for visual content manipulation. Advances in generative adversarial networks (GANs) have produced a stream of promising methods that synthesize high-fidelity imagery from layout guidance such as a human pose or a semantic label map. However, manipulating the layout of a given image according to such guidance remains challenging, as it is nontrivial to incorporate spatial alignment into GANs. We will devise effective correspondence modeling mechanisms for GANs on tasks including pose-guided person image transfer, example-guided scene image generation, and local semantic layout manipulation.

(2) Representation modeling for visual content manipulation. Beyond designing correspondence mechanisms for the tasks above, we will explore how the internal representations of visual content can facilitate guided manipulation. These representations provide structural and semantic priors on the input content, and leveraging such priors can benefit tasks such as layout-guided semantic manipulation, semantics-guided inpainting, and scene-graph-guided image manipulation. We will study how to model and transfer internal visual representations for visual content manipulation.

In this proposal, we present our preliminary work in both directions. First, for correspondence modeling, we propose several correspondence learning frameworks based on cross-domain flow estimation and efficient attention for guided visual transfer: a pose flow learning scheme that transfers person appearance to a target pose; a decoupled attention scheme for example-guided scene image synthesis; and a sparse attention scheme that transfers visual content for semantic layout manipulation of an image. Next, for representation modeling, we propose a training-free framework for semantic layout editing of images, built on a conditional maximum mean discrepancy (MMD) criterion that matches the internal conditional distributions of the input and target images. At the end of this proposal, we also present new ideas for the remaining thesis work.
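To make the attention-based correspondence idea above concrete, the sketch below shows one generic way such a mechanism can be realized: features of the source image and of the target guidance are compared at every pair of spatial locations, and source content is transported to the target layout as an attention-weighted average. This is a minimal PyTorch illustration, not the proposed architecture; the function name, tensor shapes, and temperature value are illustrative assumptions, and the dense formulation shown here is exactly the quadratic-cost baseline that efficient and sparse attention schemes aim to improve on.

    import torch
    import torch.nn.functional as F

    def warp_by_correspondence(src_feat, tgt_feat, src_img, temperature=0.01):
        # src_feat, tgt_feat: (B, C, H, W) features of the source image and of
        # the target guidance (e.g., a pose or semantic label map); shapes are
        # hypothetical. src_img: (B, 3, H, W) source content to be transported.
        B, C, H, W = src_feat.shape
        # Flatten the spatial grid and L2-normalize over channels so that dot
        # products between locations are cosine similarities.
        s = F.normalize(src_feat.flatten(2), dim=1)    # (B, C, H*W)
        t = F.normalize(tgt_feat.flatten(2), dim=1)    # (B, C, H*W)
        # Dense correspondence: similarity of every target location to every
        # source location; this O((H*W)^2) matrix is what sparse attention avoids.
        corr = torch.bmm(t.transpose(1, 2), s)         # (B, H*W, H*W)
        attn = F.softmax(corr / temperature, dim=-1)   # each row sums to 1
        # Transport source pixels to target positions as a weighted average.
        v = src_img.flatten(2).transpose(1, 2)         # (B, H*W, 3)
        warped = torch.bmm(attn, v).transpose(1, 2).reshape(B, 3, H, W)
        return warped

The warped output is spatially aligned with the target layout and can then serve as a conditioning input to a GAN generator.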
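The conditional matching criterion mentioned in the last paragraph builds on the maximum mean discrepancy. For reference, the standard (unconditional) MMD between distributions P and Q under a kernel k is, in LaTeX notation,

    \mathrm{MMD}^2(P, Q) = \mathbb{E}_{x, x' \sim P}[k(x, x')]
                         + \mathbb{E}_{y, y' \sim Q}[k(y, y')]
                         - 2\, \mathbb{E}_{x \sim P,\, y \sim Q}[k(x, y)]

A conditional variant can, for example, aggregate this quantity over conditions c (such as semantic classes), matching P(. | c) against Q(. | c) for each c; the per-condition decomposition and any weighting are illustrative assumptions here, and the precise criterion is defined in the proposal itself.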

Advisor: Prof. Jiebo Luo (Computer Science)
Committee: Prof. Henry Kautz (Computer Science), Prof. Chenliang Xu (Computer Science), and Dr. Zhe Lin (Adobe)