# Manifest

**Repository Path**: A930637378/Manifest

## Basic Information

- **Project Name**: Manifest
- **Description**: No description available
- **Primary Language**: Python
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2023-10-24
- **Last Updated**: 2023-10-24

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# ManiFest: Manifold Deformation for Few-shot Image Translation

[ManiFest: Manifold Deformation for Few-shot Image Translation](https://arxiv.org/abs/2111.13681)
Fabio Pizzati, Jean-François Lalonde, Raoul de Charette
ECCV 2022

## Preview

![teaser](teaser.png)

## Citation

To cite our paper, please use

```
@inproceedings{pizzati2021manifest,
  title={{ManiFest: Manifold Deformation for Few-shot Image Translation}},
  author={Pizzati, Fabio and Lalonde, Jean-François and de Charette, Raoul},
  booktitle={ECCV},
  year={2022}
}
```

## Prerequisites

Please create an environment using the `requirements.yml` file provided:

```
conda env create -f requirements.yml
```

Download the pretrained models and the pretrained VGG used for the style alignment loss from the following link:

```
https://www.rocq.inria.fr/rits_files/computer-vision/manifest/manifest_checkpoints.tar.gz
```

Move the VGG network weights into the `res` folder and the checkpoints into the `checkpoints` folder.

## Inference

We provide pretrained models for the day2night, day2twilight and clear2fog tasks, as described in the paper. To perform `general` inference using a pretrained model, run:

```
python inference_general.py --input_dir --output_dir --checkpoint
```

To perform `exemplar` inference, use:

```
python inference_exemplar.py --input_dir --output_dir --checkpoint --exemplar_image
```

## Training

We provide training code for all three tasks.
Download the [ACDC](https://acdc.vision.ee.ethz.ch/), [VIPER](https://playing-for-benchmarks.org/) and [Dark Zurich](https://www.trace.ethz.ch/publications/2019/GCMA_UIoU/) datasets. Then, run the script provided in the `datasets` directory to create symbolic links:

```
python create_dataset.py --root_acdc --root_viper --root_dz
```

To start training, modify the `data/anchor_dataset.py` file and choose among `day2night`, `day2twilight` or `clear2fog` in the `root` option. Finally, start the training with

```
python train.py --comment "review training" --model fsmunit --dataset anchor
```

If you don't have a WANDB API key, please run

```
WANDB_MODE=offline python train.py --comment "review training" --model fsmunit --dataset anchor
```

## Code structure

When extending the code, please consider the following structure. The `train.py` file initializes the logging utilities and sets up callbacks for model saving and debugging. The main training logic is in `networks/fsmunit_model.py`, and the architectural components can be found in `networks/backbones/fsmunit.py`.
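As a footnote to the dataset setup above: the symbolic-link step performed by `create_dataset.py` can be sketched in a few lines of Python. This is a hypothetical reimplementation for illustration only, not the repository's actual code; it assumes the script simply links each `--root_*` path into a local `datasets/` directory, and the function name `link_dataset` is invented here.

```python
from pathlib import Path


def link_dataset(root, name, dest_dir="datasets"):
    """Create a symlink datasets/<name> -> <root>.

    Hypothetical sketch of what create_dataset.py might do for each
    --root_* argument; the real script may differ.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    link = dest / name
    # Replace a stale link if one is already present.
    if link.is_symlink() or link.exists():
        link.unlink()
    link.symlink_to(Path(root).resolve())
    return link
```

Under that assumption, you would call it once per dataset root, e.g. `link_dataset("/data/ACDC", "acdc")` and `link_dataset("/data/VIPER", "viper")`.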