Self-Supervised Collision Handling via Generative 3D Garment Models for Virtual Try-On

Igor Santesteban, Nils Thuerey, Miguel A. Otaduy and Dan Casas
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021



Abstract

We propose a new generative model for 3D garment deformations that enables us to learn, for the first time, a data-driven method for virtual try-on that effectively addresses garment-body collisions. In contrast to existing methods that require an undesirable postprocessing step to fix garment-body interpenetrations at test time, our approach directly outputs 3D garment configurations that do not collide with the underlying body. Key to our success is a new canonical space for garments that removes pose-and-shape deformations already captured by a new diffused human body model, which extrapolates body surface properties such as skinning weights and blendshapes to any 3D point. We leverage this representation to train a generative model with a novel self-supervised collision term that learns to reliably solve garment-body interpenetrations. We extensively evaluate and compare our results with recently proposed data-driven methods, and show that our method is the first to successfully address garment-body contact in unseen body shapes and motions, without compromising realism and detail.


Citation

@inproceedings{santesteban2021garmentcollisions,
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    title = {{Self-Supervised Collision Handling via Generative 3D Garment Models for Virtual Try-On}},
    author = {Santesteban, Igor and Thuerey, Nils and Otaduy, Miguel A. and Casas, Dan},
    year = {2021}
}

Description and Results

We propose a new data-driven method for virtual try-on that effectively addresses garment-body collisions. Our generative model predicts natural deformations and fine wrinkles that do not collide with the underlying body, even for complex garments such as the asymmetrical dress shown below.

These results are possible thanks to several novel components: 1) a diffused human model that extends body properties (e.g., skinning weights, blendshapes) to any point in 3D space, 2) an unposed and deshaped canonical space of garment deformations, and 3) a self-supervised training strategy that effectively resolves garment-body collisions.
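To give a concrete idea of the third component, below is a minimal PyTorch sketch of a self-supervised collision penalty of this kind: each garment vertex is matched to its closest body vertex and penalized if it lies inside, or closer than a small margin to, the body surface. The function name, tensor shapes, and margin value are illustrative assumptions; this is not the code used in the paper.

import torch

def collision_loss(garment_verts, body_verts, body_normals, eps=2e-3):
    # Self-supervised collision penalty (sketch, not the authors' code).
    # garment_verts: (B, Ng, 3) predicted garment vertices
    # body_verts:    (B, Nb, 3) underlying body vertices
    # body_normals:  (B, Nb, 3) per-vertex outward body normals
    # eps:           minimum garment-body separation (meters)

    # Closest body vertex for every garment vertex.
    dists = torch.cdist(garment_verts, body_verts)                        # (B, Ng, Nb)
    idx = dists.argmin(dim=-1)                                            # (B, Ng)
    closest = torch.gather(body_verts, 1, idx.unsqueeze(-1).expand(-1, -1, 3))
    normals = torch.gather(body_normals, 1, idx.unsqueeze(-1).expand(-1, -1, 3))

    # Signed distance along the body normal: negative means penetration.
    sdf = ((garment_verts - closest) * normals).sum(dim=-1)               # (B, Ng)

    # Penalize garment vertices inside, or too close to, the body.
    return torch.relu(eps - sdf).pow(2).mean()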

Thanks to these contributions, we are able to train a network that is robust to collisions even for poses and body shapes far from the training data, while previous state-of-the-art methods rely on a postprocessing step to fix the predicted garments.

Below we compare our canonical space with other approaches. Unposing the ground-truth data with constant skinning weights introduces collisions in the unposed state, while using dynamic weights computed with a nearest-vertex search at each frame suffers from very noticeable artifacts. In contrast, our optimization-based approach avoids those artifacts and offers much better temporal stability. By representing the ground-truth data in this space, our model becomes easier to train, and collisions can be handled very efficiently since the body mesh is constant.
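For reference, the snippet below sketches the simplest of these baselines: unposing garment vertices by inverting linear blend skinning with a fixed set of per-vertex weights (the constant-weights case above). The diffused, optimization-based weights of our approach are not shown; names and tensor shapes are illustrative assumptions.

import torch

def unpose_lbs(garment_verts, weights, joint_transforms):
    # Baseline unposing by inverting linear blend skinning with fixed weights
    # (sketch of the 'constant weights' comparison, not the paper's method).
    # garment_verts:    (N, 3)    posed garment vertices
    # weights:          (N, J)    skinning weights per garment vertex
    # joint_transforms: (J, 4, 4) rigid transforms of the posed skeleton

    # Blend the joint transforms per vertex, then invert the blended transform.
    blended = torch.einsum('nj,jab->nab', weights, joint_transforms)      # (N, 4, 4)
    verts_h = torch.cat([garment_verts,
                         torch.ones(garment_verts.shape[0], 1)], dim=-1)  # (N, 4)
    unposed_h = torch.einsum('nab,nb->na', torch.inverse(blended), verts_h)
    return unposed_h[..., :3]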

In the following comparison we interpolate between two real shapes from the AMASS dataset, unseen at training time. Our approach successfully predicts garment deformations that do not collide with the underlying body shape, even for extreme shapes far from the training set.
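Such a shape sweep can be produced by linearly interpolating the body shape coefficients of the two subjects and feeding each intermediate shape to the network. A minimal, illustrative sketch (not the demo code; function name and coefficient count are assumptions):

import numpy as np

def interpolate_shapes(beta_a, beta_b, steps=60):
    # Linearly blend the body shape coefficients of two subjects.
    # beta_a, beta_b: (S,) shape vectors; returns (steps, S) intermediate shapes.
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - t) * beta_a[None, :] + t * beta_b[None, :]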

Our method runs at interactive frame rates. Below we show a live recording of our demo, using test sequences from the AMASS dataset unseen at training time, without any postprocessing step. Notice how our approach enables us to interactively manipulate the shape parameters of the subject, while producing highly realistic garment deformations.


Acknowledgments

Igor Santesteban was supported by the Predoctoral Training Programme of the Basque Government (PRE_2020_2_0133). The work was also funded in part by the European Research Council (ERC Consolidator Grant no. 772738 TouchDesign) and the Spanish Ministry of Science (RTI2018-098694-B-I00 VizLearning).

Contact

Dan Casas – dan.casas@urjc.es
Igor Santesteban – igor.santesteban@urjc.es