Lele (Vincent) Chen    

I am a Ph.D. candidate advised by Prof. Chenliang Xu at the University of Rochester, where I work on computer vision and machine learning. My research interests are multimodal modeling and video object detection/segmentation.

I received my M.S. degree from the University of Rochester in 2018. I have spent time at the VisualDx Medical Image Lab and the JD.com JDX Autonomous Driving Lab as an R&D intern.

Email  /  CV  /  Biography  /  Google Scholar  /  LinkedIn

@Vienna, May 2015
Research

I'm interested in computer vision, machine learning, optimization, and image processing. Much of my research is about relating other modalities (language, audio, lidar point clouds) to images. I have also worked on autonomous driving.

Lip Movements Generation at a Glance
Lele Chen, Zhiheng Li, Ross K. Maddox, Zhiyao Duan, Chenliang Xu
ECCV, 2018
paper / poster / video / bibtex / news / code

Given arbitrary speech audio and one lip image of an arbitrary target identity, we generate synthesized lip movements of the target identity saying that speech. To perform well, a model needs to consider not only the retention of the target identity, the photo-realism of the synthesized images, and the consistency and smoothness of the lip-image sequence, but, more importantly, it must learn the correlations between the speech audio and the lip movements.
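Concretely, these requirements can be framed as a weighted sum of loss terms. Below is a toy PyTorch sketch of such a multi-term objective; the term weights, tensor shapes, and names are illustrative assumptions, not the paper's implementation.

    # Illustrative only: a toy multi-term objective of the kind described above.
    # Weights, shapes, and names are assumptions, not the paper's code.
    import torch
    import torch.nn.functional as F

    def lip_generation_loss(fake_frames, real_frames,      # (B, T, C, H, W)
                            id_embed_fake, id_embed_real,  # face-identity embeddings
                            av_corr_score,                 # audio-visual match score
                            w_pix=1.0, w_id=0.5, w_smooth=0.1, w_corr=0.5):
        # Photo-realism proxy: pixel reconstruction against ground-truth frames.
        pix = F.l1_loss(fake_frames, real_frames)
        # Identity retention: keep the generated face embedding near the reference.
        ident = F.mse_loss(id_embed_fake, id_embed_real)
        # Temporal smoothness: penalize large jumps between consecutive frames.
        smooth = (fake_frames[:, 1:] - fake_frames[:, :-1]).abs().mean()
        # Audio-visual correlation: reward frames that match the speech.
        corr = -av_corr_score.mean()
        return w_pix * pix + w_id * ident + w_smooth * smooth + w_corr * corr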

Deep cross-modal audio-visual generation
Lele Chen, Sudhanshu Srivastava, Zhiyao Duan, Chenliang Xu
ACM MMW, 2017
paper / poster / bibtex / code

We have developed algorithms for audio-visual source association that can segment corresponding audio-visual data pairs, and we have created deep generative neural networks, trained adversarially, that can generate one modality (audio or visual) from the other. The outputs of cross-modal generation benefit many applications, such as aids for hearing- or visually-impaired people and content creation in virtual reality.
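To make the adversarial-training idea concrete, here is a minimal conditional-GAN training step in PyTorch for the audio-to-image direction. The network sizes, feature dimensions, and module names are assumptions for illustration, not the architecture from the paper.

    # Minimal conditional GAN: generate an image modality from audio features.
    # All dimensions and architectures here are illustrative assumptions.
    import torch
    import torch.nn as nn

    audio_dim, img_dim = 128, 64 * 64  # hypothetical feature sizes

    G = nn.Sequential(nn.Linear(audio_dim, 256), nn.ReLU(),
                      nn.Linear(256, img_dim), nn.Tanh())
    D = nn.Sequential(nn.Linear(audio_dim + img_dim, 256), nn.LeakyReLU(0.2),
                      nn.Linear(256, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def train_step(audio_feat, real_img):
        ones = torch.ones(len(real_img), 1)
        zeros = torch.zeros(len(real_img), 1)
        # Discriminator: tell real audio-image pairs from generated ones.
        fake_img = G(audio_feat).detach()
        d_loss = bce(D(torch.cat([audio_feat, real_img], 1)), ones) + \
                 bce(D(torch.cat([audio_feat, fake_img], 1)), zeros)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator: fool the discriminator with audio-conditioned samples.
        g_loss = bce(D(torch.cat([audio_feat, G(audio_feat)], 1)), ones)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    train_step(torch.randn(8, audio_dim), torch.randn(8, img_dim))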

MRI Tumor Segmentation with Densely Connected 3D CNN
Lele Chen, Yue Wu, Adora M. DSouza, Anas Z. Abidin, Axel Wismuller, Chenliang Xu
SPIE Image Processing, 2017
paper / slides / bibtex / code

In this paper, we introduce a new approach for brain tumor segmentation in MRI scans. DenseNet was initially introduced for the image classification problem; in this work, we explore the potential of densely connected blocks in 3D segmentation tasks. Compared with traditional networks without skip connections, the improved information flow extracts better features and significantly helps optimization. We also take multi-scale receptive fields into account to classify voxels accurately.
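The core building block is easy to sketch. Below is a bare-bones densely connected 3D block in PyTorch, where each layer sees the concatenation of all earlier feature maps; the growth rate, depth, and normalization choices are illustrative assumptions, not the paper's exact configuration.

    # A minimal densely connected 3D block: each conv layer receives the
    # concatenation of all previous feature maps (the dense skip connections).
    import torch
    import torch.nn as nn

    class DenseBlock3D(nn.Module):
        def __init__(self, in_ch, growth=16, n_layers=4):
            super().__init__()
            self.layers = nn.ModuleList(
                nn.Sequential(
                    nn.BatchNorm3d(in_ch + i * growth),
                    nn.ReLU(inplace=True),
                    nn.Conv3d(in_ch + i * growth, growth, kernel_size=3, padding=1),
                )
                for i in range(n_layers)
            )

        def forward(self, x):
            feats = [x]
            for layer in self.layers:
                feats.append(layer(torch.cat(feats, dim=1)))
            return torch.cat(feats, dim=1)

    # e.g. a (batch, modality-channels, depth, height, width) MRI patch:
    out = DenseBlock3D(in_ch=4)(torch.randn(1, 4, 32, 32, 32))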

Toward a visual assistive listening device: Real-time synthesis of a virtual talking face from acoustic speech using deep neural networks
Lele Chen, Emre Eskimez, Zhiheng Li, Zhiyao Duan, Chenliang Xu, Ross K. Maddox
The Journal of the Acoustical Society of America, 2018
paper / bibtex

Speech perception is a crucial function of the human auditory system, but speech is not only an acoustic signal: visual cues from a talker's face and articulators (lips, teeth, and tongue) carry considerable linguistic information. These cues offer substantial and important improvements to speech comprehension when the acoustic signal is degraded by background noise or impaired hearing. However, useful visual cues are not always available, such as when talking on the phone or listening to a podcast. We are developing a system for generating a realistic speaking face from speech audio input. The system uses novel deep neural networks trained on a large audio-visual speech corpus, and it is designed to run in real time so that it can be used as an assistive listening device. Previous systems have shown improvements in speech perception only for the most degraded speech. Our design differs notably from earlier ones in that it does not use a language model; instead, it makes a direct transformation from speech audio to face video. This preserves the temporal coherence between the acoustic and visual modalities, which has been shown to be crucial to cross-modal perceptual binding.

Teaching

CIS442F Big Data - Spring 2018

This class offers an introduction to big data concepts, environments, processes, and tools from the perspective of data analysts and data scientists. The course will set the scene for the emergence of big data as an important trend in the business world and explain the technical architectures that make analyzing data at scale possible. The hands-on portion of the class will focus on the major tools of the Hadoop big data ecosystem such as HDFS, Pig, Hive, Sqoop, Hue, Zeppelin, and Spark. In addition, students will gain a broad understanding of the role of MapReduce, Tez, Impala, YARN, and other big data technologies.
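For a flavor of the hands-on work, here is a short PySpark snippet of the kind the tooling above enables; the file path and column names are hypothetical, not course materials.

    # Illustrative PySpark example: read a dataset from HDFS and aggregate it
    # at scale. The path and columns are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("cis442f-demo").getOrCreate()

    orders = spark.read.csv("hdfs:///data/orders.csv", header=True, inferSchema=True)
    orders.groupBy("region").agg(F.sum("amount").alias("total")).show()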



GBA 464 Programming for Analytics - Fall 2017

This course provides foundations for programming in the R environment. We cover traditional programming concepts such as operators, data structures, control structures, repetition, and user-defined functions. These concepts are taught in the context of marketing and business analytics problems related to data management and visualization. Beyond high-level programming, students will gain a foundational understanding of how data can be stored, organized, and pulled in a given data-analytics context.


Other

CSC400 Graduate Problem Seminar - Fall 2018

You can find my NSF ITRG Miniproposal here.




this guy's website is awesome