I am Qingyang Tan, a Research Scientist at Meta Reality Labs. I obtained my Ph.D. degree from the University of Maryland, College Park, advised by Prof. Dinesh Manocha. I received a B.Eng. degree in Computer Science and Technology from the University of Chinese Academy of Sciences.
In the summer of 2022, I interned at Adobe Research, advised by Dr. Noam Aigerman. In the summer of 2021, I interned at Adobe Research, advised by Dr. Yi Zhou, Dr. Tuanfeng Y. Wang, Dr. Duygu Ceylan, and Dr. Xin Sun. In the summer of 2020, I interned at Facebook Reality Lab, advised by Dr. Takaaki Shiratori and Dr. Breannan Smith.
I conducted my undergraduate thesis in the VIPL Group at the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS), under the supervision of Prof. Xilin Chen and Prof. Xiujuan Chai. Before that, I interned in the Human Motion Research Group at ICT, CAS, supervised by Prof. Lin Gao, Prof. Yu-Kun Lai, and Prof. Shihong Xia. I also have research experience in interdisciplinary science, including UROP projects at MIT's Institute for Medical Engineering & Science and Sloan School of Management, and a synthetic biology project for the International Genetically Engineered Machine (iGEM) competition.
Here is my resume. Please feel free to contact me.
Ph.D. in Computer Science, 2018-2023
University of Maryland, College Park
B.Eng. in Computer Science and Technology, 2014-2018
University of Chinese Academy of Sciences
Special Student in EECS, 2017
Massachusetts Institute of Technology
To address interpenetration problems in neural garment prediction, we propose a novel collision handling neural network layer called Repulsive Force Unit (ReFU). Based on the signed distance function (SDF) of the underlying body and the current garment vertex positions, ReFU predicts the per-vertex offsets that push any interpenetrating vertex to a collision-free configuration while preserving the fine geometric details. Our experiments show that ReFU significantly reduces the number of collisions between the body and the garment and better preserves geometric details compared to prior methods based on collision loss or post-processing optimization.
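A minimal PyTorch sketch of this idea is shown below. The module name, input shapes, and layer sizes are illustrative assumptions rather than the paper's implementation: the layer predicts a non-negative offset magnitude per vertex and pushes only penetrating vertices (negative SDF) along the SDF gradient direction.

```python
import torch
import torch.nn as nn

class ReFUSketch(nn.Module):
    """Hedged sketch of a repulsive-force layer (hypothetical shapes/names).

    Given per-vertex garment features and the body SDF value and gradient
    at each vertex, predict an offset that pushes penetrating vertices
    (sdf < 0) outside the body while leaving other vertices untouched.
    """
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Softplus(),  # non-negative offset magnitude
        )

    def forward(self, verts, feats, sdf_vals, sdf_grads):
        # verts: (B, V, 3); feats: (B, V, F); sdf_vals: (B, V, 1); sdf_grads: (B, V, 3)
        magnitude = self.mlp(torch.cat([feats, sdf_vals], dim=-1))  # (B, V, 1)
        push = (sdf_vals < 0).float() * magnitude   # move only penetrating vertices
        direction = torch.nn.functional.normalize(sdf_grads, dim=-1)
        return verts + push * direction
```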
We present a robust learning algorithm to detect and handle collisions in 3D deforming meshes. We first train a neural network to detect collisions and then use a numerical optimization algorithm to resolve penetrations guided by the network. To obtain stable network performance in large, unseen configuration spaces, we apply active learning by progressively inserting new collision data based on the network's inferences. We automatically label these new data using an analytical collision detector and progressively fine-tune our detection networks.
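The active-learning loop can be summarized with the hedged sketch below; `sampler`, `analytic_cd`, the uncertainty band, and the dataset layout are illustrative placeholders, not the paper's exact procedure.

```python
import torch

def active_learning_round(net, optimizer, dataset, sampler, analytic_cd, n_new=1024):
    """One sketch of an active-learning round: sample unseen configurations,
    keep those the network is uncertain about, label them with an exact
    collision detector, and fine-tune on the grown training set."""
    candidates = sampler(n_new)                         # (N, D) mesh configurations
    with torch.no_grad():
        p = torch.sigmoid(net(candidates)).squeeze(-1)  # predicted collision prob.
    hard = candidates[(p - 0.5).abs() < 0.2]            # near the decision boundary
    if len(hard) > 0:
        # analytic_cd is assumed to return 0/1 for collision-free / colliding
        labels = torch.tensor([analytic_cd(x) for x in hard], dtype=torch.float32)
        dataset.append((hard, labels))                  # progressively insert new data
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for x, y in dataset:                                # fine-tune on old + new data
        optimizer.zero_grad()
        loss_fn(net(x).squeeze(-1), y).backward()
        optimizer.step()
```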
We propose a novel method to extract multiscale deformation components automatically with a stacked attention-based autoencoder. The attention mechanism is designed to learn to softly weight multi-scale deformation components in active deformation regions, and the stacked attention-based autoencoder is trained to represent the deformation components at different scales.
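One level of such a decoder might look like the sketch below; the basis parameterization, dimensions, and names are assumptions chosen to illustrate how per-vertex attention can localize each deformation component.

```python
import torch
import torch.nn as nn

class AttentionDeformLayer(nn.Module):
    """Sketch of one level of an attention-weighted deformation decoder."""
    def __init__(self, n_verts, n_components, latent_dim):
        super().__init__()
        # learnable deformation basis: K components, each a (V, 3) offset field
        self.components = nn.Parameter(torch.randn(n_components, n_verts, 3) * 0.01)
        self.to_weights = nn.Linear(latent_dim, n_components)
        # per-vertex, per-component attention logits localize each component
        self.attn_logits = nn.Parameter(torch.zeros(n_components, n_verts))

    def forward(self, z):
        w = self.to_weights(z)                          # (B, K) component activations
        attn = torch.softmax(self.attn_logits, dim=0)   # soft weights over components
        basis = attn.unsqueeze(-1) * self.components    # (K, V, 3) localized basis
        return torch.einsum('bk,kvc->bvc', w, basis)    # (B, V, 3) deformation
```

Stacking several such layers, each operating at a different scale, gives the multiscale decomposition described above.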
We propose a mesh-based variational autoencoder architecture that is able to cope with meshes with irregular connectivity and nonlinear deformations. To help localize deformations, we introduce sparse regularization in this framework, along with spectral graph convolutional operations. Through modifying the regularization formulation and allowing dynamic change of sparsity ranges, we improve the visual quality and reconstruction ability of the extracted deformation components. As an important application of localized deformation components and a novel approach on its own, we further develop a neural shape editing method, achieving shape editing and deformation component extraction in a unified framework, and ensuring plausibility of the edited shapes.
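The locality-promoting regularization can be sketched as follows; the distance-based ramp and the `r_min`/`r_max` bounds are assumptions standing in for the paper's formulation with dynamically changing sparsity ranges.

```python
import torch

def sparsity_loss(components, vert_dists, r_min=0.1, r_max=0.3):
    """Hedged sketch of a locality-promoting sparsity term.

    components: (K, V, 3) deformation components; vert_dists: (K, V) geodesic
    distance from each vertex to component k's center. Vertices far from the
    center are penalized more, driving each component to zero outside a local
    support region; r_min/r_max bound the sparsity range.
    """
    # penalty ramps from 0 inside r_min to 1 beyond r_max
    penalty = ((vert_dists - r_min) / (r_max - r_min)).clamp(0.0, 1.0)  # (K, V)
    per_vertex_norm = components.norm(dim=-1)                            # (K, V)
    return (penalty * per_vertex_norm).sum()
```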
We present LCollision, a learning-based method that synthesizes collision-free 3D human poses. At the crux of our approach is a novel deep architecture that simultaneously decodes new human poses from the latent space and predicts colliding body parts. These two components of our architecture are used as the objective function and surrogate hard constraints in a constrained optimization for collision-free human pose generation. A novel aspect of our approach is the use of a bilevel autoencoder that decomposes whole-body collisions into groups of collisions between localized body parts. By solving the constrained optimizations, we show that a significant amount of collision artifacts can be resolved.
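The constrained optimization can be approximated with a simple penalty method, as in the sketch below; `decoder`, `collision_net`, and the penalty weight are illustrative stand-ins, not the paper's solver.

```python
import torch

def collision_free_decode(decoder, collision_net, z_init, steps=100, lam=10.0):
    """Sketch: adjust a latent code until the predicted part-pair collision
    scores vanish, while staying close to the initial code."""
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        pose = decoder(z)
        # collision_net is assumed to output per part-pair penetration
        # scores, where values > 0 indicate a collision
        scores = collision_net(pose)
        loss = (z - z_init).pow(2).sum() + lam * torch.relu(scores).sum()
        loss.backward()
        opt.step()
    return decoder(z.detach())
```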
We present a novel algorithm (DeepMNavigate) for global multi-agent navigation in dense scenarios using deep reinforcement learning. Our approach uses local and global information for each robot based on motion information maps. We demonstrate the performance on complex, dense benchmarks with narrow passages in environments with tens of agents. We highlight the algorithm’s benefits over prior learning methods and geometric decentralized algorithms in complex scenarios.
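A motion information map can be built by rasterizing agent states onto a grid, as in the hedged sketch below; the grid resolution, channel layout, and coordinate conventions are assumptions for illustration.

```python
import numpy as np

def motion_info_map(positions, velocities, grid_size=64, world_extent=10.0):
    """Sketch: rasterize all agents' positions and velocities onto a grid
    that a policy network can consume as global context. Assumes world
    coordinates lie in [0, world_extent] x [0, world_extent]."""
    grid = np.zeros((3, grid_size, grid_size), dtype=np.float32)
    scale = grid_size / world_extent
    for (x, y), (vx, vy) in zip(positions, velocities):
        i = int(np.clip(x * scale, 0, grid_size - 1))
        j = int(np.clip(y * scale, 0, grid_size - 1))
        grid[0, i, j] += 1.0   # occupancy count
        grid[1, i, j] += vx    # accumulated velocity x
        grid[2, i, j] += vy    # accumulated velocity y
    return grid
```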
We address the problem of accelerating thin-shell deformable object simulations by dimension reduction. We present a new algorithm to embed a high-dimensional configuration space of deformable objects in a low-dimensional feature space, where the configurations of objects and feature points have an approximate one-to-one mapping.
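The embedding can be realized with an encoder-decoder pair, as in the sketch below; the layer sizes and feature dimension are assumptions, and a reconstruction loss on the decoder output enforces the near one-to-one mapping.

```python
import torch
import torch.nn as nn

class ShellEmbedding(nn.Module):
    """Sketch: embed a high-dimensional shell configuration (V*3 flattened
    vertex coordinates) into a low-dimensional feature space with an
    approximately invertible encoder/decoder pair."""
    def __init__(self, n_verts, feat_dim=32):
        super().__init__()
        d = n_verts * 3
        self.enc = nn.Sequential(nn.Linear(d, 256), nn.Tanh(), nn.Linear(256, feat_dim))
        self.dec = nn.Sequential(nn.Linear(feat_dim, 256), nn.Tanh(), nn.Linear(256, d))

    def forward(self, x):                 # x: (B, V*3) flattened configurations
        z = self.enc(x)                   # low-dimensional feature point
        x_rec = self.dec(z)               # reconstructed configuration
        return z, x_rec
```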
We propose a novel framework, which we call mesh variational autoencoders, to explore the probabilistic latent space of 3D surfaces. The framework is easy to train and requires very few training examples. We also propose an extended model that allows flexibly adjusting the significance of different latent variables by altering the prior distribution.
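The adjustable prior changes only the KL term of the VAE objective. The sketch below computes the KL divergence between the encoder posterior and a per-dimension Gaussian prior; using a per-dimension `prior_std` is the illustrative mechanism here, not necessarily the paper's exact formulation.

```python
import torch

def kl_to_adjustable_prior(mu, logvar, prior_std):
    """KL( N(mu, sigma^2) || N(0, prior_std^2) ), computed per latent
    dimension. Enlarging prior_std[i] lets latent variable i vary more,
    increasing its significance in the learned representation."""
    var = logvar.exp()
    prior_var = prior_std.pow(2)
    kl = 0.5 * (var / prior_var + mu.pow(2) / prior_var - 1.0
                + prior_var.log() - logvar)
    return kl.sum(dim=-1).mean()
```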
We propose to combine the identification of key temporal and spatial regions with sign language classification in one deep learning network architecture, using attention mechanisms to focus on effective features and realizing end-to-end automatic sign language recognition. Our proposed architecture improves the efficiency of sign language recognition while maintaining recognition accuracy comparable to state-of-the-art work.
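The attention idea can be sketched as two soft-weighting stages, one over spatial regions and one over frames; the feature layout and module names below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SpatioTemporalAttention(nn.Module):
    """Sketch: softly weight per-frame features over space and time
    before classification."""
    def __init__(self, feat_dim, n_classes):
        super().__init__()
        self.spatial_score = nn.Linear(feat_dim, 1)
        self.temporal_score = nn.Linear(feat_dim, 1)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, feats):               # feats: (B, T, R, F) frames x regions
        a_s = torch.softmax(self.spatial_score(feats), dim=2)   # attend over regions
        frame = (a_s * feats).sum(dim=2)                        # (B, T, F)
        a_t = torch.softmax(self.temporal_score(frame), dim=1)  # attend over frames
        clip = (a_t * frame).sum(dim=1)                         # (B, F)
        return self.classifier(clip)
```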
We propose a novel mesh-based autoencoder architecture that is able to cope with meshes with irregular topology. We introduce sparse regularization in this framework, which along with convolutional operations, helps localize mesh deformations. Our framework is capable of extracting localized deformation components from mesh data sets with large-scale deformations and is robust to noise.
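The convolutional operation on irregular connectivity can be illustrated with a generic graph convolution over each vertex's 1-ring neighborhood; this is a stand-in sketch, not the paper's exact operator.

```python
import torch
import torch.nn as nn

class MeshConv(nn.Module):
    """Sketch: each vertex aggregates its neighbors' features through a
    shared linear map, so the layer works for any mesh connectivity."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)
        self.w_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (V, F) vertex features; adj: (V, V) row-normalized adjacency
        return torch.relu(self.w_self(x) + self.w_neigh(adj @ x))
```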