This is the official PyTorch implementation of "Content-Style Decoupling for Unsupervised Makeup Transfer without Generating Pseudo Ground Truth", accepted by CVPR 2024. The training code, testing code, and pre-trained model are all open-sourced.
Zhaoyang Sun; Shengwu Xiong; Yaxiong Chen; Yi Rong
- Our paper SHMT was accepted by NeurIPS 2024. Paper link and code link.
- Our paper CSD-MT was accepted by CVPR 2024. Paper link and code link.
- Our paper SSAT++ was accepted by TNNLS 2023. Paper link and code link.
- Our paper SSAT was accepted by AAAI 2022. Paper link and code link.
If you just want to get results quickly, go to the "quick_start" folder and follow the readme.md inside to download the pre-trained model and generate results. We also provide an interactive Gradio interface for easy use.
The environment needed to run our model is very simple, so we recommend simply using your own PyTorch environment; if you do, please skip the environment creation steps below.
A suitable conda environment named CSDMT can be created and activated with:
conda env create -f environment.yaml
conda activate CSDMT
- The MT dataset can be downloaded from BeautyGAN. Extract the downloaded file and place it at the root of this folder.
- Prepare face parsing results. Face parsing is used in this code; in our experiments, it is generated by https://github.com/zllrunning/face-parsing.PyTorch.
- Put the face parsing results in ./MT-Dataset/seg1/makeup and ./MT-Dataset/seg1/non-makeup.
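Before training, it can help to confirm the parsing folders sit where the steps above expect them. The following is a minimal sketch using only the seg1 paths named in this README; the helper `check_dataset_layout` is purely illustrative and is not part of the repository.

```python
from pathlib import Path

def check_dataset_layout(root="."):
    """Return the expected face-parsing folders that are missing under `root`.

    The seg1 paths come from this README; this helper itself is an
    illustrative assumption, not repository code.
    """
    expected = [
        Path(root) / "MT-Dataset" / "seg1" / "makeup",
        Path(root) / "MT-Dataset" / "seg1" / "non-makeup",
    ]
    # Report every expected directory that does not exist yet.
    return [str(p) for p in expected if not p.is_dir()]

if __name__ == "__main__":
    missing = check_dataset_layout()
    if missing:
        print("Missing folders:", ", ".join(missing))
    else:
        print("Dataset layout looks good.")
```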
Default hyperparameters are set in the options.py file; please modify them as needed.
Tip: if you find that the model transfers the reference image's shadows too strongly or generates disharmonious shadows, you can increase "weight_identity" in options.py to 0.2-0.5.
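The tip above can be read as follows: a weight like "weight_identity" typically scales an identity-preservation term inside the total training loss. This is a hedged sketch of that idea only; the loss names and the linear combination below are illustrative assumptions, not the repository's actual objective.

```python
# Illustrative sketch (assumed names): `makeup_loss` pulls the output toward
# the reference makeup, `identity_loss` pulls it back toward the source face.
def total_loss(makeup_loss, identity_loss, weight_identity=0.1):
    """Combine a makeup-transfer term with an identity-preservation term.

    Raising `weight_identity` (e.g. to 0.2-0.5, as the tip suggests) keeps
    the output closer to the source face, which suppresses shadows copied
    over from the reference image.
    """
    return makeup_loss + weight_identity * identity_loss
```

With a fixed identity_loss, a larger weight simply makes the identity term dominate, trading some makeup fidelity for fewer transferred shadows.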
To train the model, run:
python train.py
To test the model, run:
python inference.py
If this work is helpful for your research, please consider citing the following BibTeX entry.
@inproceedings{sun2024content,
title={Content-Style Decoupling for Unsupervised Makeup Transfer without Generating Pseudo Ground Truth},
author={Sun, Zhaoyang and Xiong, Shengwu and Chen, Yaxiong and Rong, Yi},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2024}
}
Some of the code is built upon PSGAN, Face Parsing, and aster.Pytorch.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.