Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On

Raquel Vidaurre, Igor Santesteban, Elena Garces, and Dan Casas
Computer Graphics Forum (Proc. of ACM SIGGRAPH Symposium on Computer Animation), 2020



Abstract

We present a learning-based approach for virtual try-on applications based on a fully convolutional graph neural network. In contrast to existing data-driven models, which are trained for a specific garment or mesh topology, our fully convolutional model can cope with a large family of garments, represented as parametric predefined 2D panels with arbitrary mesh topology, including long dresses, shirts, and tight tops. Under the hood, our novel geometric deep learning approach learns to drape 3D garments by decoupling the three different sources of deformations that condition the fit of clothing: garment type, target body shape, and material. Specifically, we first learn a regressor that predicts the 3D drape of the input parametric garment when worn by a mean body shape. Then, after a mesh topology optimization step where we generate a sufficient level of detail for the input garment type, we further deform the mesh to reproduce deformations caused by the target body shape. Finally, we predict fine-scale details such as wrinkles that depend mostly on the garment material. We qualitatively and quantitatively demonstrate that our fully convolutional approach outperforms existing methods in terms of generalization capabilities and memory requirements, and therefore it opens the door to more general learning-based models for virtual try-on applications.


Citation

@article {vidaurre2020virtualtryon,
    journal = {Computer Graphics Forum (Proc. SCA)},
    title = {{Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On}},
    author = {Vidaurre, Raquel and Santesteban, Igor and Garces, Elena and Casas, Dan},
    year = {2020}
}

Description and Results

Our goal is to predict the accurate 3D draping of garments, worn by any body shape, for virtual try-on purposes. We put special emphasis on the ability to cope with a large variety of garments, a feature mostly ignored by existing works since it requires a model that can handle inputs with varying mesh topology. To this end, we propose a fully convolutional graph neural network approach that is able to predict the nonrigid deformations of parametric garments with arbitrary mesh topology.
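To illustrate why such a model can handle arbitrary mesh topology, the sketch below implements a minimal graph convolution in PyTorch that is driven only by the mesh edge list. The layer name GarmentGraphConv and the neighborhood-averaging operator are our own illustrative assumptions, not necessarily the exact convolution used in the paper.

    import torch
    import torch.nn as nn

    class GarmentGraphConv(nn.Module):
        """Minimal graph convolution on a garment mesh of arbitrary topology.

        Each vertex is updated from its own features and the mean of its
        1-ring neighbors, so the same weights apply to any garment mesh
        regardless of vertex count or connectivity. Illustrative only.
        """
        def __init__(self, in_features, out_features):
            super().__init__()
            self.linear_self = nn.Linear(in_features, out_features)
            self.linear_neigh = nn.Linear(in_features, out_features)

        def forward(self, x, edges):
            # x:     (num_vertices, in_features) per-vertex features
            # edges: (num_edges, 2) undirected edge list of the garment mesh
            num_vertices, num_edges = x.shape[0], edges.shape[0]
            src, dst = edges[:, 0], edges[:, 1]

            # Accumulate neighbor features in both directions of each edge.
            neigh_sum = torch.zeros(num_vertices, x.shape[1], dtype=x.dtype)
            neigh_sum.index_add_(0, dst, x[src])
            neigh_sum.index_add_(0, src, x[dst])

            # Normalize by vertex degree to obtain the 1-ring mean.
            degree = torch.zeros(num_vertices, dtype=x.dtype)
            ones = torch.ones(num_edges, dtype=x.dtype)
            degree.index_add_(0, dst, ones)
            degree.index_add_(0, src, ones)
            neigh_mean = neigh_sum / degree.clamp(min=1).unsqueeze(1)

            # Per-vertex linear layers; activation is left to the caller.
            return self.linear_self(x) + self.linear_neigh(neigh_mean)

Because the layer only reads the edge list, the same trained weights can be applied to a long dress, a shirt, or a tight top with different vertex counts.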

Here we show an example of four garments fitted onto a wide range of body shapes, deformed with our approach. Notice how the wrinkles naturally match the expected behavior of each garment and change for each shape-garment pair.

Under the hood, our novel geometric deep learning approach learns to drape 3D garments by decoupling the three different sources of deformations that condition the fit of clothing: garment type, target body shape, and material.
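As a rough sketch of how this decoupling could be composed at inference time, consider the pseudocode below; the function names and argument lists are placeholders we introduce for illustration and do not correspond to the actual implementation.

    def drape_garment(garment_params, body_shape, material,
                      r_mean, refine_topology, r_smooth, r_fine):
        """Illustrative composition of the three decoupled deformation sources."""
        # 1) Garment type: drape the parametric 2D panels on the mean body shape.
        coarse_mesh = r_mean(garment_params)

        # 2) Add sufficient mesh resolution for this garment type.
        detailed_mesh = refine_topology(coarse_mesh)

        # 3) Target body shape: smooth, shape-dependent deformation.
        shaped_mesh = r_smooth(detailed_mesh, body_shape)

        # 4) Material: fine-scale details such as wrinkles.
        return r_fine(shaped_mesh, material)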

The regressor Rmean estimates the 3D mesh of the designed garment fitted onto a mean body shape. Then, after a mesh topology optimization step to generate the optimal topology for the designed garment, regressors Rsmooth and Rfine deform the mesh to reproduce deformations caused by the target body shape and material. Importantly, these regressors are implemented in a novel fully convolutional graph neural network (FCGNN) that is able to cope with any combination of garment, topology, and target body. Below we depict the architecture of Rsmooth and Rfine, based on a U-Net and graph convolutions.
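The following is a minimal sketch of how a U-Net over a garment graph could be assembled, reusing the GarmentGraphConv layer from the sketch above. The pooling scheme (precomputed vertex clusterings), the number of levels, and the layer sizes are our own assumptions and are not taken from the paper.

    class GraphUNet(nn.Module):
        """Two-level encoder-decoder over a garment graph (illustrative layout only)."""
        def __init__(self, in_features, hidden, out_features):
            super().__init__()
            self.enc_fine = GarmentGraphConv(in_features, hidden)
            self.enc_coarse = GarmentGraphConv(hidden, hidden)
            self.dec_coarse = GarmentGraphConv(hidden, hidden)
            self.dec_fine = GarmentGraphConv(2 * hidden, out_features)

        def forward(self, x, edges_fine, edges_coarse, cluster):
            # cluster: (num_fine_vertices,) coarse-vertex index assigned to each fine vertex
            h_fine = torch.relu(self.enc_fine(x, edges_fine))

            # Pool: average fine-vertex features into their coarse clusters.
            num_coarse = int(cluster.max()) + 1
            pooled = torch.zeros(num_coarse, h_fine.shape[1], dtype=h_fine.dtype)
            counts = torch.zeros(num_coarse, dtype=h_fine.dtype)
            pooled.index_add_(0, cluster, h_fine)
            counts.index_add_(0, cluster, torch.ones(h_fine.shape[0], dtype=h_fine.dtype))
            pooled = pooled / counts.clamp(min=1).unsqueeze(1)

            h_coarse = torch.relu(self.enc_coarse(pooled, edges_coarse))
            h_coarse = torch.relu(self.dec_coarse(h_coarse, edges_coarse))

            # Unpool: copy coarse features back to fine vertices and add a skip connection.
            unpooled = h_coarse[cluster]
            return self.dec_fine(torch.cat([unpooled, h_fine], dim=1), edges_fine)

In this arrangement, the final layer's output can be interpreted as per-vertex 3D displacements added on top of the previous prediction, one common way to realize a corrective regressor; whether Rsmooth and Rfine use exactly this scheme is not shown here.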

Below we show a variety of our results visualized from an orbital camera.


Our method is highly efficient and can be used in design applications where fast feedback is required. Below we show our tool, which allows users to interactively manipulate the design parameters of the garment and quickly visualize the fit on arbitrary target bodies.



Acknowledgments

Igor Santesteban was supported by the Predoctoral Training Programme of the Department of Education of the Basque Government (PRE_2019_2_0104), and Elena Garces was supported by a Torres Quevedo Fellowship (PTQ2018-009868). The work was also funded in part by the Spanish Ministry of Science (project RTI2018-098694-B-I00 VizLearning).

Contact

Dan Casas – dan.casas@urjc.es