
Content-Style Decoupling for Unsupervised Makeup Transfer without Generating Pseudo Ground Truth

This is the official PyTorch code for "Content-Style Decoupling for Unsupervised Makeup Transfer without Generating Pseudo Ground Truth", which has been accepted by CVPR 2024.

The training code, testing code, and pre-trained model have all been open-sourced.

Authors

Zhaoyang Sun; Shengwu Xiong; Yaxiong Chen; Yi Rong

News

The framework of CSD-MT

Quick Start

If you only want to generate results quickly, go to the "quick_start" folder and follow the readme.md inside to download the pre-trained model and produce results.

We also provide an interactive Gradio interface for easy use.

(Screenshot of the Gradio interface.)
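
As a rough illustration of how such a demo is typically wired up (the real app lives in the quick_start folder; the transfer_makeup function below is a hypothetical placeholder, not this repository's API), a minimal Gradio sketch looks like this:

# Minimal Gradio sketch (illustrative only; see the quick_start folder for the actual demo).
# transfer_makeup is a hypothetical placeholder for the real CSD-MT inference call.
import gradio as gr
import numpy as np

def transfer_makeup(source_img: np.ndarray, reference_img: np.ndarray) -> np.ndarray:
    # Placeholder: the real demo would run the pre-trained CSD-MT model here.
    return source_img

demo = gr.Interface(
    fn=transfer_makeup,
    inputs=[gr.Image(label="Source (non-makeup)"), gr.Image(label="Reference (makeup)")],
    outputs=gr.Image(label="Transfer result"),
    title="CSD-MT Makeup Transfer",
)

if __name__ == "__main__":
    demo.launch()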

Requirements

The environment needed to run our model is very simple, so we recommend simply using your own PyTorch environment; if you do, you can skip the environment creation below.

A suitable conda environment named CSDMT can be created and activated with:

conda env create -f environment.yaml
conda activate CSDMT
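
After activating the environment (or when using your own), a quick sanity check that PyTorch is importable and that a GPU is visible can be done with a generic snippet like the following (not specific to this repository):

# Generic environment check: confirm PyTorch imports and report GPU availability.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())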

Download MT dataset

  1. The MT dataset can be downloaded from BeautyGAN. Extract the downloaded file and place it at the top level of this folder.
  2. Prepare face parsing results. Face parsing is used in this code; in our experiments, it is generated by https://github.com/zllrunning/face-parsing.PyTorch.
  3. Put the face parsing results in .\MT-Dataset\seg1\makeup and .\MT-Dataset\seg1\non-makeup (a directory check is sketched below).
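
To catch path mistakes early, the short sketch below checks that the expected folders exist and are non-empty. Only the seg1 paths come from the steps above; the images/makeup and images/non-makeup sub-folders are assumed from the standard MT dataset layout.

# Sanity-check the expected MT-Dataset layout before training.
# The images/* folders are assumed from the standard MT dataset release;
# the seg1/* folders must contain the face parsing results from step 3.
import os

expected_dirs = [
    "MT-Dataset/images/makeup",      # assumed: makeup images from the MT dataset
    "MT-Dataset/images/non-makeup",  # assumed: non-makeup images from the MT dataset
    "MT-Dataset/seg1/makeup",        # face parsing results for the makeup images
    "MT-Dataset/seg1/non-makeup",    # face parsing results for the non-makeup images
]

for d in expected_dirs:
    n = len(os.listdir(d)) if os.path.isdir(d) else 0
    status = "OK" if n > 0 else "MISSING OR EMPTY"
    print(f"{d}: {status} ({n} files)")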

Training code

The default hyperparameters are set in the options.py file; please modify them yourself if necessary.

Tip: If the model transfers the shadows of the reference image too strongly or produces disharmonious shadows, you can increase weight_identity in options.py to around 0.2-0.5.

To train the model, run the following command:

python train.py
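
As a concrete reference for the tip above, weight_identity is one of the hyperparameters defined in options.py. Assuming the options are registered through argparse (the actual definition and default value in the repository may differ), the relevant entry would look roughly like this and could then be raised toward 0.2-0.5:

# Hypothetical excerpt of options.py (illustrative; the real file and its default may differ).
# A larger --weight_identity keeps the output closer to the source face, which
# reduces transferred or disharmonious shadows from the reference image.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--weight_identity', type=float, default=0.1,   # placeholder default
                    help='weight of the identity-preservation term')
opts, _ = parser.parse_known_args()
print('weight_identity =', opts.weight_identity)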

Inference code

python inference.py

Our results

Citation

If this work is helpful for your research, please consider citing the following BibTeX entry.

@inproceedings{sun2024content,
  title={Content-Style Decoupling for Unsupervised Makeup Transfer without Generating Pseudo Ground Truth},
  author={Sun, Zhaoyang and Xiong, Shengwu and Chen, Yaxiong and Rong, Yi},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}

Acknowledgement

Some of the code is built upon PSGAN, Face Parsing and aster.Pytorch.

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

CC BY-NC-SA 4.0
