I am Qingyang Tan, a Ph.D. candidate at the University of Maryland, College Park, advised by Prof. Dinesh Manocha. I received a B.Eng. degree in Computer Science and Technology from the University of Chinese Academy of Sciences.

In the summer of 2022, I interned at Adobe Research, advised by Dr. Noam Aigerman. In the summer of 2021, I interned at Adobe Research, advised by Dr. Yi Zhou, Dr. Tuanfeng Y. Wang, Dr. Duygu Ceylan, and Dr. Xin Sun. In the summer of 2020, I interned at Facebook Reality Labs, advised by Dr. Takaaki Shiratori and Dr. Breannan Smith.

I conducted my undergraduate thesis in the VIPL Group at the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS), under the supervision of Prof. Xilin Chen and Prof. Xiujuan Chai. Before that, I interned in the Human Motion Research Group at ICT, CAS, supervised by Prof. Lin Gao, Prof. Yu-Kun Lai, and Prof. Shihong Xia. I also have interdisciplinary research experience, including UROP projects at MIT's Institute for Medical Engineering & Science and Sloan School of Management, and a synthetic biology project for the International Genetically Engineered Machine (iGEM) competition.

Here is my resume. Please feel free to contact me.

Interests

  • Computer Graphics
  • Physical Simulation
  • Geometry Processing
  • Computer Vision
  • Machine Learning

Education

  • Ph.D. Candidate in Computer Science, 2018-present

    University of Maryland, College Park

  • B.Eng. in Computer Science and Technology, 2014-2018

    University of Chinese Academy of Sciences

  • Special Student in EECS, 2017

    Massachusetts Institute of Technology

Publications

A Repulsive Force Unit for Garment Collision Handling in Neural Networks

To address interpenetration problems in neural garment prediction, we propose a novel collision handling neural network layer called Repulsive Force Unit (ReFU). Based on the signed distance function (SDF) of the underlying body and the current garment vertex positions, ReFU predicts the per-vertex offsets that push any interpenetrating vertex to a collision-free configuration while preserving the fine geometric details. Our experiments show that ReFU significantly reduces the number of collisions between the body and the garment and better preserves geometric details compared to prior methods based on collision loss or post-processing optimization.
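
The core idea lends itself to a compact sketch. Below is a minimal, illustrative PyTorch layer in the spirit of ReFU, assuming the body SDF value and its gradient have already been sampled at each garment vertex; the layer name, MLP size, and input layout are hypothetical simplifications, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ReFULayer(nn.Module):
    """Illustrative repulsive-force-style layer: given garment vertices and
    the body SDF value/gradient sampled at each vertex, predict per-vertex
    offsets that push penetrating vertices (SDF < 0) out of the body."""

    def __init__(self, hidden=64):
        super().__init__()
        # Small MLP mapping (vertex position, SDF value) to an offset scale.
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, verts, sdf_vals, sdf_grads):
        # verts: (V, 3); sdf_vals: (V, 1); sdf_grads: (V, 3) unit SDF gradients.
        scale = self.mlp(torch.cat([verts, sdf_vals], dim=-1))  # (V, 1)
        penetrating = (sdf_vals < 0).float()   # only inside-body vertices move
        # Push along the SDF gradient, the fastest direction out of the body.
        return verts + penetrating * scale * sdf_grads
```

Because the SDF gradient points away from the body surface, scaling it per vertex gives the network a physically meaningful direction along which to resolve penetrations.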

N-Penetrate: Active Learning of Neural Collision Handler for Complex 3D Mesh Deformations

We present a robust learning algorithm to detect and handle collisions in 3D deforming meshes. We first train a neural network to detect collisions and then use a numerical optimization algorithm, guided by the network, to resolve penetrations. To obtain stable network performance on the large space of unseen deformations, we apply active learning, progressively inserting new collision data based on the network's inferences. We automatically label these new data using an analytical collision detector and progressively fine-tune our detection networks.
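
As a rough illustration of the active-learning loop, here is a hedged Python sketch; `exact_detector` (an analytical collision checker) and `sample_deformations` (a deformation sampler) are hypothetical stand-ins, and the ambiguity-based selection rule is a simplification of the paper's insertion criterion.

```python
import torch
import torch.nn.functional as F

def active_learning_round(net, optimizer, exact_detector, sample_deformations,
                          pool_x, pool_y, batch=256, finetune_steps=100):
    """One round: sample candidates, keep the ones the network is least sure
    about, label them with the analytical detector, and fine-tune."""
    candidates = sample_deformations(batch)                 # (B, D) deformations
    with torch.no_grad():
        probs = torch.sigmoid(net(candidates)).squeeze(-1)  # collision probability
    # Ambiguity-based selection: probabilities closest to 0.5 first.
    hard = candidates[(probs - 0.5).abs().argsort()[: batch // 4]]
    labels = torch.tensor([float(exact_detector(x)) for x in hard])
    pool_x = torch.cat([pool_x, hard])
    pool_y = torch.cat([pool_y, labels])
    for _ in range(finetune_steps):          # fine-tune on the grown pool
        loss = F.binary_cross_entropy_with_logits(net(pool_x).squeeze(-1), pool_y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return pool_x, pool_y
```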

Multiscale Mesh Deformation Component Analysis with Attention-based Autoencoders

We propose a novel method to extract multiscale deformation components automatically with a stacked attention-based autoencoder. The attention mechanism learns to softly weight multiscale deformation components in active deformation regions, and the stacked attention-based autoencoder is trained to represent the deformation components at different scales.
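
One way to picture the attention mechanism is as a learned per-vertex soft mask over a basis of deformation components at a given scale; stacking such layers across mesh resolutions yields the multiscale decomposition. The module below is purely illustrative, with hypothetical names and shapes.

```python
import torch
import torch.nn as nn

class AttentionDeformationBasis(nn.Module):
    """One scale of an attention-weighted deformation basis: K learned
    per-vertex displacement fields, softly masked by a per-vertex attention
    map and combined by latent weights."""

    def __init__(self, n_verts, n_components):
        super().__init__()
        self.components = nn.Parameter(torch.randn(n_components, n_verts, 3) * 0.01)
        self.attn_logits = nn.Parameter(torch.zeros(n_components, n_verts))

    def forward(self, weights):
        # weights: (B, K) latent activations for this scale.
        attn = torch.softmax(self.attn_logits, dim=0).unsqueeze(-1)  # (K, V, 1)
        masked = attn * self.components                              # (K, V, 3)
        return torch.einsum('bk,kvc->bvc', weights, masked)          # (B, V, 3)
```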

Variational Autoencoders for Localized Mesh Deformation Component Analysis

We propose a mesh-based variational autoencoder architecture that is able to cope with meshes with irregular connectivity and nonlinear deformations. To help localize deformations, we introduce sparse regularization in this framework, along with spectral graph convolutional operations. By modifying the regularization formulation and allowing dynamic changes of sparsity ranges, we improve the visual quality and reconstruction ability of the extracted deformation components. As an important application of localized deformation components, and as a novel approach in its own right, we further develop a neural shape editing method, achieving shape editing and deformation component extraction in a unified framework while ensuring the plausibility of edited shapes.
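
The localization idea can be sketched as a regularizer that penalizes a component's displacements outside a learnable support region, which is one way to realize dynamically changing sparsity ranges. The sketch below uses Euclidean distances and hypothetical tensor layouts for brevity; it is not the paper's exact regularizer.

```python
import torch

def localization_penalty(components, centers, radii, vert_pos):
    """Penalize each component's displacements outside a sphere of learnable
    radius around its center, encouraging localized deformations.
    components: (K, V, 3); centers: (K, 3); radii: (K,); vert_pos: (V, 3)."""
    dist = torch.cdist(centers, vert_pos)        # (K, V) center-to-vertex
    # Zero penalty inside the radius, growing linearly outside it.
    w = torch.clamp(dist / radii.unsqueeze(-1) - 1.0, min=0.0)
    return (w.unsqueeze(-1) * components.abs()).mean()
```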

LCollision: Fast Generation of Collision-Free Human Poses using Learned Non-Penetration Constraints

We present LCollision, a learning-based method that synthesizes collision-free 3D human poses. At the crux of our approach is a novel deep architecture that simultaneously decodes new human poses from the latent space and predicts colliding body parts. These two components of our architecture are used as the objective function and surrogate hard constraints in a constrained optimization for collision-free human pose generation. A novel aspect of our approach is the use of a bilevel autoencoder that decomposes whole-body collisions into groups of collisions between localized body parts. By solving the constrained optimizations, we show that a significant number of collision artifacts can be resolved.
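
A simplified view of the generation step: treat the learned collision predictor as a soft surrogate constraint and optimize in the latent space. The penalty-based sketch below stands in for the paper's constrained solver; `decoder` and `collision_net` are hypothetical trained modules.

```python
import torch

def project_to_collision_free(decoder, collision_net, z_init,
                              steps=200, lam=10.0, lr=1e-2):
    """Optimize a latent code so the decoded pose stays near the target while
    the learned collision predictor reports (approximately) no penetration."""
    z = z_init.clone().requires_grad_(True)
    target = decoder(z_init).detach()
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        pose = decoder(z)
        # Penalty on predicted penetration stands in for a hard constraint.
        penetration = torch.relu(collision_net(pose)).sum()
        loss = (pose - target).pow(2).mean() + lam * penetration
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder(z).detach()
```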

DeepMNavigate: Deep Reinforced Multi-Robot Navigation Unifying Local & Global Collision Avoidance

We present a novel algorithm (DeepMNavigate) for global multi-agent navigation in dense scenarios using deep reinforcement learning. Our approach uses local and global information for each robot based on motion information maps. We demonstrate its performance on complex, dense benchmarks with narrow passages in environments with tens of agents, and highlight the algorithm's benefits over prior learning methods and geometric decentralized algorithms in complex scenarios.
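
A motion information map can be thought of as an egocentric occupancy image of nearby robots that a convolutional policy can consume; a global map applies the same idea over the whole workspace. The NumPy sketch below is illustrative; the grid layout, naming, and use of raw counts are assumptions, not the paper's exact encoding.

```python
import numpy as np

def motion_info_map(positions, robot_idx, extent=5.0, resolution=32):
    """Rasterize neighbors' 2D positions into a grid centered on one robot,
    producing an image-like observation for a convolutional policy.
    positions: (N, 2) array of robot locations in world coordinates."""
    grid = np.zeros((resolution, resolution), dtype=np.float32)
    center = positions[robot_idx]
    for j, p in enumerate(positions):
        if j == robot_idx:
            continue
        # World offset -> grid cell; robots outside the window are skipped.
        cell = ((p - center) / extent * (resolution / 2) + resolution / 2).astype(int)
        if 0 <= cell[0] < resolution and 0 <= cell[1] < resolution:
            grid[cell[1], cell[0]] += 1.0
    return grid
```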

Realtime Simulation of Thin-Shell Deformable Materials using CNN-Based Mesh Embedding

We address the problem of accelerating thin-shell deformable object simulations by dimension reduction. We present a new algorithm to embed the high-dimensional configuration space of deformable objects in a low-dimensional feature space, where object configurations and feature points have an approximate one-to-one mapping.
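
In spirit, the embedding is an autoencoder over vertex configurations; the simulator can then advance the low-dimensional feature vector instead of the full state. The sketch below substitutes plain fully connected layers for the paper's CNN over a mesh parameterization, so treat the layer choices as placeholders.

```python
import torch
import torch.nn as nn

class MeshEmbedding(nn.Module):
    """Embed a thin-shell configuration (V x 3 vertex positions) into a
    low-dimensional feature vector and decode back; layer choices here are
    placeholders for the paper's CNN over a mesh parameterization."""

    def __init__(self, n_verts, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Linear(n_verts * 3, 512), nn.ELU(),
            nn.Linear(512, latent_dim),
        )
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ELU(),
            nn.Linear(512, n_verts * 3),
        )

    def forward(self, verts):                  # verts: (B, V, 3)
        z = self.enc(verts.flatten(1))         # (B, latent_dim) embedding
        return self.dec(z).view_as(verts), z   # reconstruction + features
```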

Variational Autoencoders for Deforming 3D Mesh Models

We propose a novel framework, which we call mesh variational autoencoders, to explore the probabilistic latent space of 3D surfaces. The framework is easy to train and requires very few training examples. We also propose an extended model that allows flexibly adjusting the significance of different latent variables by altering the prior distribution.
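
The extended model's prior adjustment has a simple closed form: replace the standard-normal prior in the KL term with a diagonal Gaussian whose per-dimension standard deviations you choose. The function below is a generic sketch of that term, not code from the paper.

```python
import torch

def kl_to_scaled_prior(mu, logvar, prior_std):
    """KL(N(mu, exp(logvar)) || N(0, diag(prior_std^2))) in closed form,
    so individual latent dimensions can be given more or less freedom.
    mu, logvar: (B, D); prior_std: (D,)."""
    prior_var = prior_std.pow(2)
    var = logvar.exp()
    kl = 0.5 * (var / prior_var + mu.pow(2) / prior_var
                - 1.0 - logvar + prior_var.log())
    return kl.sum(dim=-1).mean()
```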

Attention-based Isolated Gesture Recognition with Multi-task Learning

We propose to combine the identification of key temporal and spatial regions with sign language classification in a single deep network architecture, using attention mechanisms to focus on effective features and realizing end-to-end automatic sign language recognition. Our proposed architecture improves the efficiency of sign language recognition while maintaining recognition accuracy comparable to state-of-the-art work.
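
The temporal-attention ingredient can be sketched as a learned soft selection over frame features before classification; the module below is a generic illustration with hypothetical names, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TemporalAttentionPool(nn.Module):
    """Score each frame feature, softmax the scores over time, and pool the
    sequence as a weighted sum so the classifier focuses on key frames."""

    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, feats):                           # feats: (B, T, F)
        attn = torch.softmax(self.score(feats), dim=1)  # (B, T, 1) frame weights
        return (attn * feats).sum(dim=1), attn          # pooled (B, F) + weights
```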

Mesh-based Autoencoders for Localized Deformation Component Analysis

We propose a novel mesh-based autoencoder architecture that is able to cope with meshes with irregular topology. We introduce sparse regularization in this framework, which, along with convolutional operations, helps localize mesh deformations. Our framework is capable of extracting localized deformation components from mesh datasets with large-scale deformations and is robust to noise.
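
The sparse regularization can be illustrated with a group-sparsity (L2,1-style) penalty on the learned deformation basis: it drives whole per-vertex displacements to zero, so each component ends up affecting only a local region. A minimal sketch, with a hypothetical tensor layout:

```python
import torch

def group_sparse_penalty(components):
    """L2,1-style penalty: the L2 norm of each per-vertex 3D displacement,
    summed over vertices and components, drives whole vertex contributions
    to zero and localizes each component.  components: (K, V, 3)."""
    return components.norm(dim=-1).sum()

def training_loss(recon, target, components, lam=1e-3):
    # Reconstruction plus group sparsity on the learned deformation basis.
    return (recon - target).pow(2).mean() + lam * group_sparse_penalty(components)
```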

Contact

  • qytan@outlook.com
  • qytan@umd.edu