
Events


May 17, 2022, 01:00 PM
Lele Chen: High-Fidelity Talking Avatar Video Generation

[Tuesday, May 17, 2022 at 1:00 PM in WH 2506; https://rochester.zoom.us/j/9241564885 (hybrid)]

Over the last two years, the COVID-19 pandemic has completely changed our daily lives, especially the way we communicate with each other. Telecommunication with photorealistic avatars in virtual or augmented reality is a promising path toward authentic face-to-face communication in 3D across physical distances, in which each communicating party feels the genuine co-located presence of the others. The recent rise of large-scale video data and high-performance computation has enabled many video avatar technologies. However, most avatars in AR today are cartoon-like (e.g., Apple Memoji, TikTok FaceAnimation, Hyprsense), while photo-realistic avatars (e.g., Siren) require large amounts of high-quality motion-capture data and the aid of computer graphics artists.

In this dissertation, we define and study cross-modal visual generation. Specifically, we systematically investigate cross-modal audio-visual generation and then focus on avatar video generation. Drawing on knowledge from both generative adversarial networks and computer graphics, we work step by step toward high-fidelity avatar generation driven by an audio signal or a monocular video. In our cross-modal audio-visual generation work, we introduce the problem of cross-modal audio-visual generation and make the first attempt to use conditional Generative Adversarial Networks (GANs) for intersensory generation. Based on the audio-visual correlation we have studied, we devise a cascade GAN approach to generate talking-face video in the 2D talking-head avatar generation work. Subsequently, in the 2.5D generation work, we propose a 3D-aware generative network along with a hybrid embedding module and a non-linear composition module. Finally, in the 3D generation work, we propose a learning-based lighting model that, in combination with a high-quality 3D face-tracking algorithm, provides a method for subtle and robust facial motion transfer from a regular video to a 3D photo-realistic avatar.

Aside from the computational methods, we present a survey and benchmark to discuss what makes for good talking-head video generation. We define identity preservation, lip synchronization, high video quality, and natural spontaneous motion as the desired properties of a good talking-head video. By conducting a thorough analysis across several state-of-the-art talking-head generation approaches, we aim to uncover the merits and drawbacks of current methods and point out promising directions for future work.

Advisor: Prof. Chenliang Xu (Computer Science)
Committee: Prof. Ehsan Hoque (Computer Science), Prof. Jiebo Luo (Computer Science), Prof. Zhiyao Duan (Electrical and Computer Engineering), and Prof. Xin Li (Electrical & Computer Engineering, Louisiana State University)
Chair: Prof. Mujdat Cetin (Electrical and Computer Engineering)
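The cross-modal conditioning idea can be made concrete with a short sketch: a generator that maps an audio embedding plus noise to a face frame, and a discriminator that judges (frame, audio) pairs so that it learns audio-visual correspondence rather than image realism alone. This is a minimal illustration of a conditional GAN, not the dissertation's actual cascade architecture; all dimensions, module names, and losses are assumptions.

```python
# Minimal conditional-GAN sketch for audio-to-face generation (illustrative
# shapes and architecture only; not the dissertation's models).
import torch
import torch.nn as nn

AUDIO_DIM, NOISE_DIM, IMG_PIXELS = 128, 64, 64 * 64  # hypothetical sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Condition on audio by concatenating its embedding with the noise.
        self.net = nn.Sequential(
            nn.Linear(AUDIO_DIM + NOISE_DIM, 512), nn.ReLU(),
            nn.Linear(512, IMG_PIXELS), nn.Tanh())

    def forward(self, audio_emb, noise):
        return self.net(torch.cat([audio_emb, noise], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # Score (frame, audio) pairs, so mismatched pairs can be penalized.
        self.net = nn.Sequential(
            nn.Linear(IMG_PIXELS + AUDIO_DIM, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1))

    def forward(self, frame, audio_emb):
        return self.net(torch.cat([frame, audio_emb], dim=1))

G, D = Generator(), Discriminator()
audio = torch.randn(8, AUDIO_DIM)        # stand-in audio embeddings
noise = torch.randn(8, NOISE_DIM)
frames = G(audio, noise)                 # one generated frame per clip
scores = D(frames, audio)                # conditional realism scores
print(frames.shape, scores.shape)        # [8, 4096] and [8, 1]
```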


June 6, 2022, 11:00 AM
Divya Ojha: Redesigning Caches to Resist Side Channel Attacks

[Monday, June 6, 2022 at 11:00 AM in WH 2506; https://rochester.zoom.us/j/99102392332?pwd=ZThLeThmeHlqTXVjaWJhVE5CZmROUT09]

Timing side channels have long been used to extract cryptographic keys and sensitive documents, and recent research has shown the potential harm they can cause when combined with the speculative properties of modern processors. Side channels in caches and cache-like structures are used as disclosure primitives to craft attacks capable of leaking information even in the presence of security guarantees such as memory safety, enclave separation, control-flow integrity, privilege separation, and process isolation.

In this dissertation, we address side channel leaks due to processor state in caches, translation lookaside buffers (TLBs), and coherence directories. The two main types of vulnerabilities in shared state are due to data reuse and contention. Reuse attacks are more precise and rely on shared memory content, whereas contention attacks can leak information even in the absence of shared memory. This dissertation presents low-cost solutions to both types of vulnerabilities.

In TimeCache, a cache designed to defend against reuse-based attacks, we disallow a cache hit on a cache line that has not been touched by the executing context since the line was brought in. We preserve caching behaviour across process contexts without leaking timing information, using a novel low-latency hardware design to compare access times. In RollingCache, a cache designed to defend against contention attacks, we dynamically change the set of addresses contending for cache sets. By using one level of indirection to implement dynamic mapping controlled by whole-cache runtime behaviour, we avoid the need for additional, computationally expensive address encryption. We extend the ideas from RollingCache and TimeCache to address timing side channels in TLBs and coherence directories. Our designs target commonly used disclosure primitives, thereby preventing a wide range of attacks, including those from the speculative domain.

Advisor: Prof. Sandhya Dwarkadas (Computer Science)
Committee: Prof. Michael Scott (Computer Science), Prof. Yuhao Zhu (Computer Science), Prof. Michael Huang (Electrical and Computer Engineering), and Dr. Abhishek Basak (NVIDIA)
Chair: Prof. Selcuk Kose (Electrical and Computer Engineering)
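To make the TimeCache rule concrete: a context's first access to a line that is already resident is still charged miss latency, so an attacker cannot tell from timing whether a victim loaded the line; only a context's own reuse is fast. The sketch below is a toy software model of that stated policy; the latencies and per-line bookkeeping are assumptions, not the dissertation's hardware design.

```python
# Toy model of the TimeCache policy described above (illustrative only).
MISS_LATENCY, HIT_LATENCY = 100, 1  # hypothetical cycle counts

class TimeCacheModel:
    def __init__(self):
        # For each resident address, track which contexts have touched the
        # line since it was brought in (hardware would use per-line bits).
        self.touched_by = {}  # addr -> set of context ids

    def access(self, ctx, addr):
        if addr not in self.touched_by:        # true miss: fetch the line
            self.touched_by[addr] = {ctx}
            return MISS_LATENCY
        if ctx not in self.touched_by[addr]:   # resident, but first touch by
            self.touched_by[addr].add(ctx)     # this context: charge miss
            return MISS_LATENCY                # latency anyway
        return HIT_LATENCY                     # genuine same-context reuse

cache = TimeCacheModel()
print(cache.access("victim", 0x40))    # 100: cold miss
print(cache.access("attacker", 0x40))  # 100: no cross-context timing signal
print(cache.access("attacker", 0x40))  # 1: attacker's own reuse stays fast
```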

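A similarly reduced model captures the RollingCache indirection: addresses reach a physical cache set through a mutable table, so the group of addresses contending for a set changes over time and a previously learned eviction set stops working. The table size and re-mapping trigger below are assumptions; the actual design's mapping and replacement machinery is more involved.

```python
# Toy model of set-mapping indirection in the spirit of RollingCache
# (illustrative only; not the actual design).
import random

NUM_SETS = 8

class RollingIndirection:
    def __init__(self):
        # One level of indirection: logical set index -> physical set index.
        self.table = list(range(NUM_SETS))

    def physical_set(self, addr):
        return self.table[addr % NUM_SETS]

    def roll(self, logical_set):
        # Re-point one logical set at a new physical set, changing which
        # addresses contend there; no address encryption required.
        self.table[logical_set] = random.randrange(NUM_SETS)

m = RollingIndirection()
victim = 0x13
before = m.physical_set(victim)
m.roll(victim % NUM_SETS)          # whole-cache runtime behaviour re-maps
after = m.physical_set(victim)
print(before, after)  # an eviction set primed for `before` no longer
                      # reliably contends with the victim's accesses
```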

June 14, 2022, 10:00 AM
Andrew Read-McFarland: Sampling and Decision Problems with Connectivity Constraints

[Tuesday, June 14, 2022 at 10:00 AM in WH 2506; https://rochester.zoom.us/s/93330546240 (hybrid)]

In this work we categorize various problems on graphs with connectivity constraints, proving when each is easy or hard to solve (whether by deciding, approximately sampling, or exactly counting). While we generally prefer to show that a problem is easy (and give an algorithm to solve it), if a problem is hard it is useful to know that a polynomial-time solution is unlikely to exist. Thus, we seek the hardness thresholds of graph-theoretic problems with some form of connectivity constraint.

We first consider perhaps the most elementary such problem: sampling a connected subgraph. We find that unless NP = RP, there is no FPAS to sample a connected subgraph of a fixed size k. Examining a variant where a subgraph of size k is sampled with probability proportional to λ^k, we find that it is easy to sample for λ < λ_d and hard for λ_d < λ < 1 (where λ_d is a constant depending on the maximum degree of the graph). Moreover, we show that local Markov chains sampling either of these models do not mix rapidly on a family of trees, making it unlikely that such Markov chains are useful for sampling these objects.

We then consider precedence-constrained scheduling, where we have a set of tasks, some of which must be completed before others can start, and a schedule specifying how many tasks can be executed in parallel at a given time. We expand upon the known results for the decision variant (finding an order-preserving mapping from G to H), showing that it is hard to decide if H is a general graph, hard to decide if G is a tree and H is a complete layered graph, and easy to decide if G is a collection of path graphs and H is a complete layered graph. We then consider sampling an order-preserving mapping from G to H, giving an FPAS when G is a collection of path graphs and H is a complete layered graph. Finally, we give a dynamic programming algorithm for exactly counting the number of order-preserving mappings from G to H when G is a collection of path graphs and H is a complete layered graph of bounded width.

Lastly, we examine the problem of fairly partitioning a space. We first do so in a geometric setting with Voronoi partitions, proving that such a Voronoi partition exists in any convex space. We then consider a graphical model, where we find a spanning tree of a graph and then remove an edge so that the resulting components are as balanced in size as possible. We prove that if G is a complete graph, then we sample a tree that gives a close-to-equitable split with high probability, and we give experimental data supporting the conjectures that the same holds when G is an n × n grid or is sampled from the G(n,p) model.

Advisor: Prof. Daniel Stefankovic (Computer Science)
Committee: Prof. Lane Hemaspaandra (Computer Science), Prof. Muthu Venkitasubramaniam (Computer Science, Georgetown University), and Prof. Arjun Krishnan (Mathematics)
Chair: Prof. Sevak Mkrtchyan (Mathematics)
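The weighted model and the local chains it concerns are easy to sketch: a Metropolis-style walk over connected vertex sets that toggles one vertex at a time and accepts moves so that a set S has stationary weight proportional to λ^|S|. The graph, move set, and parameters below are illustrative assumptions; the thesis's slow-mixing result concerns chains of this general kind, not this particular code.

```python
# Metropolis-style local chain over connected subgraphs, weight λ^|S|
# (illustrative sketch only).
import random

def is_connected(vs, adj):
    """BFS check that the vertex set vs induces a connected subgraph."""
    seen, stack = set(), [next(iter(vs))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(u for u in adj[v] if u in vs)
    return seen == set(vs)

def step(state, adj, lam):
    """Toggle one uniformly random vertex; accept with Metropolis odds."""
    v = random.choice(list(adj))
    new = state ^ {v}                    # add or remove v
    if not new or not is_connected(new, adj):
        return state                     # keep the state nonempty, connected
    ratio = lam if v not in state else 1.0 / lam  # λ^|new| / λ^|state|
    return new if random.random() < min(1.0, ratio) else state

# Example: a path on 5 vertices; λ = 0.5 favours small connected subgraphs.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
state = {2}
for _ in range(1000):
    state = step(state, adj, lam=0.5)
print(sorted(state))  # one (approximate) draw from the weighted model
```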

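The graphical partitioning procedure from the last paragraph above also lends itself to a short sketch: draw a uniform spanning tree with Wilson's loop-erased random walks, then delete the tree edge whose removal leaves the two components closest in size. The complete-graph instance mirrors the case the abstract analyses; the code is only an illustration of the procedure, not of the thesis's high-probability guarantee.

```python
# Sample a uniform spanning tree (Wilson's algorithm) and cut it as evenly
# as possible (illustrative sketch).
import random

def wilson_spanning_tree(vertices, adj):
    """Uniform spanning tree via loop-erased random walks."""
    in_tree, parent = {vertices[0]}, {}
    for start in vertices[1:]:
        path, v = [start], start
        while v not in in_tree:            # walk until the tree is hit,
            v = random.choice(adj[v])      # erasing loops along the way
            if v in path:
                path = path[:path.index(v) + 1]
            else:
                path.append(v)
        for a, b in zip(path, path[1:]):
            parent[a] = b
            in_tree.add(a)
    return list(parent.items())            # tree edges as (child, parent)

def part_size(edges, cut, start):
    """Component size containing `start` after deleting edge `cut`."""
    nbrs = {}
    for a, b in edges:
        if (a, b) != cut:
            nbrs.setdefault(a, []).append(b)
            nbrs.setdefault(b, []).append(a)
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(nbrs.get(v, []))
    return len(seen)

n = 20
vertices = list(range(n))
adj = {v: [u for u in vertices if u != v] for v in vertices}  # complete graph
tree = wilson_spanning_tree(vertices, adj)
cut = min(tree, key=lambda e: abs(2 * part_size(tree, e, e[0]) - n))
s = part_size(tree, cut, cut[0])
print(f"removing {cut} splits the tree into parts of size {s} and {n - s}")
```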

June 14, 2022, 01:00 PM
Wei Xiong: Guidance-driven Visual Synthesis with Generative Models

[Tuesday, June 14, 2022 at 1:00 PM in WH 2506; https://rochester.zoom.us/j/3626732673 (hybrid)]

Visual synthesis has drawn increasing attention in recent years. It encompasses a set of tasks that synthesize and manipulate visual content from given inputs, including unconditional object generation, image-to-image translation, image inpainting, video generation, image enhancement, text-to-image synthesis, style transfer, and other applications. Visual synthesis provides an effective way of understanding the inherent nature of data.

In this thesis, we focus on two research directions in the visual synthesis and content creation field. (1) Visually pleasing data synthesis. With the progress of generative models, high-resolution structured objects can be successfully generated. However, for more complicated tasks such as scene generation and editing, synthesizing realistic data is still a big challenge. We devise effective generative models for complicated visual synthesis tasks. (2) Recognition-oriented data synthesis. Generated visual data can serve as new samples for data augmentation and thereby benefit downstream tasks such as long-tailed recognition and few-shot learning. We explore and control the generation process so that the generated visual data indeed benefit visual recognition systems.

To accomplish our goals in these research directions, we need to synthesize visual content lying on the expected data manifold. For visually pleasing data synthesis, the generated data should lie as close to the manifold of the real data as possible; for recognition-oriented synthesis, the generated data should lie in the space specified by the downstream task. It is non-trivial to synthesize such samples by merely learning from the data itself: the generative model is left unsure of exactly what knowledge needs to be learned, and will therefore either generate many artifacts or generate data that are not needed by downstream tasks. To address this issue, unlike conventional data-driven synthesis, we explore guidance-driven visual synthesis: when learning to synthesize content, we leverage reasonable guidance to constrain the learning process of the generative models so that the synthesized visual content lies on the expected manifold.

We apply this idea to both research directions. In the first part, we introduce guidance-driven visually pleasing data synthesis. Specifically, we investigate guided synthesis in image inpainting, unsupervised low-light image enhancement, video prediction, and caricature generation tasks. In the second part, we explore guidance-driven synthesis for visual recognition. Specifically, we derive guidance from downstream tasks to modulate the generation of the visual data and show that the generated data indeed benefit the downstream tasks.

Advisor: Prof. Jiebo Luo (Computer Science)
Committee: Prof. Dan Gildea (Computer Science), Prof. Chenliang Xu (Computer Science), and Dr. Zhe Lin (Adobe)
Chair: Prof. Zhiyao Duan (Electrical and Computer Engineering)
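The guidance mechanism in the second part can be sketched as one extra loss term: alongside an adversarial objective that keeps samples near the real-data manifold, a frozen downstream model scores whether each sample is useful for the target task. All shapes, the classifier head, and the weight lam below are illustrative assumptions rather than the thesis's actual models or losses.

```python
# One-step sketch of guidance-driven training: the adversarial loss keeps
# samples realistic, a frozen task model supplies the guidance signal.
# (Illustrative shapes and models only.)
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM, NUM_CLASSES = 64, 128, 10

gen = nn.Sequential(nn.Linear(NOISE_DIM, 256), nn.ReLU(),
                    nn.Linear(256, DATA_DIM))
disc = nn.Sequential(nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
                     nn.Linear(256, 1))
task_head = nn.Linear(DATA_DIM, NUM_CLASSES)  # hypothetical downstream model
for p in task_head.parameters():
    p.requires_grad_(False)                   # guidance model stays frozen

bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
opt = torch.optim.Adam(gen.parameters(), lr=2e-4)

z = torch.randn(16, NOISE_DIM)
wanted = torch.randint(0, NUM_CLASSES, (16,))  # classes the task needs

fake = gen(z)
adv_loss = bce(disc(fake), torch.ones(16, 1))  # stay near the data manifold
guide_loss = ce(task_head(fake), wanted)       # be useful to the downstream task
lam = 0.5                                      # guidance weight (assumption)
loss = adv_loss + lam * guide_loss
opt.zero_grad(); loss.backward(); opt.step()
print(float(adv_loss), float(guide_loss))
```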