# transunet2

**Repository Path**: loxs/transunet2

## Basic Information

- **Project Name**: transunet2
- **Description**: Uses the Indian_pines dataset with TransUNet
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2022-12-02
- **Last Updated**: 2022-12-02

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# TransUNet

This repo holds code for [TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation](https://arxiv.org/pdf/2102.04306.pdf).

## Usage

### 1. Download Google pre-trained ViT models

* [Get models in this link](https://console.cloud.google.com/storage/vit_models/): R50-ViT-B_16, ViT-B_16, ViT-L_16...

```bash
wget https://storage.googleapis.com/vit_models/imagenet21k/{MODEL_NAME}.npz && mkdir ../model/vit_checkpoint/imagenet21k && mv {MODEL_NAME}.npz ../model/vit_checkpoint/imagenet21k/{MODEL_NAME}.npz
```

### 2. Prepare data

Please go to ["./datasets/README.md"](datasets/README.md) for details, or send an email to jienengchen01 AT gmail.com to request the preprocessed data. If you use the preprocessed data, please use it for research purposes only and do not redistribute it.

### 3. Environment

Please prepare an environment with python=3.7, and then run "pip install -r requirements.txt" to install the dependencies. A minimal setup sketch is given at the end of this README.

### 4. Train/Test

- Run the training script on the Synapse dataset. The batch size can be reduced to 12 or 6 to save memory (please also decrease the base_lr linearly); both settings reach similar performance.

```bash
CUDA_VISIBLE_DEVICES=0 python train.py --dataset Synapse --vit_name R50-ViT-B_16
```

- Run the test script on the Synapse dataset. It supports testing on both 2D images and 3D volumes.

```bash
python test.py --dataset Synapse --vit_name R50-ViT-B_16
```

## Reference

* [Google ViT](https://github.com/google-research/vision_transformer)
* [ViT-pytorch](https://github.com/jeonsworld/ViT-pytorch)
* [segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch)

## Citations

```bibtex
@article{chen2021transunet,
  title={TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation},
  author={Chen, Jieneng and Lu, Yongyi and Yu, Qihang and Luo, Xiangde and Adeli, Ehsan and Wang, Yan and Lu, Le and Yuille, Alan L. and Zhou, Yuyin},
  journal={arXiv preprint arXiv:2102.04306},
  year={2021}
}
```
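## Appendix: Environment Setup Sketch

As a concrete illustration of step 3 above, here is a minimal sketch for preparing the environment. It assumes conda is available; the environment name `transunet` is an arbitrary choice for this example and is not prescribed by the original instructions.

```bash
# Minimal sketch, assuming conda is installed; the environment name is arbitrary.
conda create -n transunet python=3.7 -y
conda activate transunet

# Install the project dependencies listed in requirements.txt (run from the repo root).
pip install -r requirements.txt
```

Any other tool that provides a Python 3.7 interpreter (e.g. virtualenv) should work equally well, as long as the dependencies from requirements.txt are installed.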