diff --git a/imgs/inference.png b/imgs/inference.png
new file mode 100644
index 0000000..6bec014
Binary files /dev/null and b/imgs/inference.png differ
diff --git a/imgs/training.png b/imgs/training.png
new file mode 100644
index 0000000..09573b9
Binary files /dev/null and b/imgs/training.png differ
diff --git a/readme.md b/readme.md
index fdae486..dd7c1e8 100644
--- a/readme.md
+++ b/readme.md
@@ -1,20 +1,82 @@
+
# Slide-SAM: Medical SAM meets sliding window
-We upload the SlideSAM-H checkpoint recently!
-Please download by
-Slide-SAM-B: https://pan.baidu.com/s/1jvJ2W4MK24JdpZLwPqMIfA [code:7be9]
-SlideSAM-H: https://pan.baidu.com/s/1jnOwyWd-M1fBIauNi3IA4w [code: 05dy]
-## Before Training
-### install tutils
+
+Quan Quan<sup>1,2*</sup>, Fenghe Tang<sup>3*</sup>, Zikang Xu<sup>3</sup>, Heqin Zhu<sup>3</sup>, S. Kevin Zhou<sup>1,2,3</sup>
+
+[arXiv Paper](https://arxiv.org/pdf/2311.10121.pdf)
+[Code](https://github.com/Curli-quan/Slide-SAM)
+
+
+
+
+## TODOs
+
+- [x] Paper released
+- [x] Code released
+- [x] Slide-SAM-B weights
+- [x] Slide-SAM-H weights
+
+
+
+## Models
+
+### Large-scale Medical Image Pretrained Weights
+
+| name | resolution | Prompt | Weights |
+| :---------: | :--------: | :---------: | :----------------------------------------------------------: |
+| Slide-SAM-B | 1024x1024 | box & point | [ckpt](https://pan.baidu.com/s/1jvJ2W4MK24JdpZLwPqMIfA) [code:7be9] |
+| Slide-SAM-H | 1024x1024  | box & point | [ckpt](https://pan.baidu.com/s/1jnOwyWd-M1fBIauNi3IA4w) [code:05dy] |
+
+
+
+## Getting Started
+
+### Install the tutils package
+
```
pip install trans-utils
```
-### prepare datasets
+### Prepare datasets
+
We recommend converting your dataset into the nnUNet format.
+
```
00_custom_dataset
imagesTr
@@ -24,39 +86,74 @@ We recommend you to convert the dataset into the nnUNet format.
xxx.nii.gz
...
```
-try to use the function ```organize_in_nnunet_style``` or ```organize_by_names``` to prepare your custom datasets.
-Then run
-```
+Try to use the function ```organize_in_nnunet_style``` (see the [nnU-Net dataset format](https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/dataset_format.md)) or ```organize_by_names``` to prepare your custom datasets.
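+
+If you prefer to script this step yourself, here is a minimal sketch of copying paired ```.nii.gz``` files into the layout above. It is illustrative only, not the repo's ```organize_in_nnunet_style``` or ```organize_by_names```, and all paths are placeholders:
+
+```python
+import shutil
+from pathlib import Path
+
+def organize_pairs(image_paths, label_paths, out_root="00_custom_dataset"):
+    """Copy paired image/label volumes into imagesTr/labelsTr."""
+    img_dir = Path(out_root) / "imagesTr"
+    lab_dir = Path(out_root) / "labelsTr"
+    img_dir.mkdir(parents=True, exist_ok=True)
+    lab_dir.mkdir(parents=True, exist_ok=True)
+    for img, lab in zip(image_paths, label_paths):
+        shutil.copy(img, img_dir / Path(img).name)   # xxx.nii.gz
+        shutil.copy(lab, lab_dir / Path(lab).name)
+```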
+
+Then run:
+
+```
python -m datasets.generate_txt
```
A ```[example]_train.txt``` file will be generated in ```./datasets/dataset_list/```.
Its content should look like the following:
+
```
01_BCV-Abdomen/Training/img/img0001.nii.gz 01_BCV-Abdomen/Training/label/label0001.nii.gz
01_BCV-Abdomen/Training/img/img0002.nii.gz 01_BCV-Abdomen/Training/label/label0002.nii.gz
01_BCV-Abdomen/Training/img/img0003.nii.gz 01_BCV-Abdomen/Training/label/label0003.nii.gz
```
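+
+If ```datasets.generate_txt``` does not cover your layout, a hypothetical helper like the one below writes the same format. The folder names follow the BCV example above and the ```/data``` root is a placeholder:
+
+```python
+from pathlib import Path
+
+def write_pair_list(data_root, img_dir, label_dir, out_txt):
+    """Write one '<image> <label>' pair per line, relative to data_root."""
+    root = Path(data_root)
+    pairs = []
+    for img in sorted((root / img_dir).glob("*.nii.gz")):
+        lab = root / label_dir / img.name.replace("img", "label")
+        if lab.exists():  # keep only complete image/label pairs
+            pairs.append(f"{img.relative_to(root)} {lab.relative_to(root)}")
+    Path(out_txt).write_text("\n".join(pairs) + "\n")
+
+write_pair_list("/data", "01_BCV-Abdomen/Training/img",
+                "01_BCV-Abdomen/Training/label",
+                "datasets/dataset_list/example_train.txt")
+```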
-### cache 3d data into slices
+### Cache 3D volumes into slices
+
After generating the ```[example]_train.txt``` file, check the config file ```configs/vit_b.yaml```.
Update the params in ```dataset``` with your own values; ```dataset_list``` should be the name ```[example]``` of the generated txt file.
Then run:
+
```
python -m datasets.cache_dataset3d
```
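+
+The repo's ```datasets.cache_dataset3d``` handles this step for you. Purely as an illustration of the idea, caching a volume slice-wise might look like the sketch below (```nibabel``` and the output naming are assumptions, not the repo's implementation):
+
+```python
+import os
+import numpy as np
+import nibabel as nib  # assumed reader for .nii.gz volumes
+
+def cache_volume(nii_path, out_dir):
+    """Save each axial slice of a 3D volume as its own .npy file."""
+    vol = nib.load(nii_path).get_fdata()  # (H, W, D) array
+    os.makedirs(out_dir, exist_ok=True)
+    for z in range(vol.shape[-1]):
+        np.save(os.path.join(out_dir, f"slice_{z:04d}.npy"), vol[..., z])
+```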
-## Training
-run training
+
+
+## Start Training
+
+Run multi-GPU training:
+
```
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m core.ddp --tag debug
```
-## Testing
+
+
+## Sliding-Window Inference and Testing
+
```
python -m core.volume_predictor
```
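+
+Per the paper, the predictor takes three adjacent slices with a prompt on the central slice and slides this window through the volume, turning each new prediction into the prompt for the next window. A conceptual sketch of that loop follows; it is not the actual ```core.volume_predictor``` API, and ```predict_window``` is a stand-in for the model call:
+
+```python
+import numpy as np
+
+def slide_up(volume, predict_window, z0, prompt):
+    """volume: (D, H, W); predict_window(window, prompt) -> (3, H, W) masks.
+    Sliding downward through the volume is symmetric."""
+    masks = np.zeros(volume.shape, dtype=bool)
+    z = z0
+    while 1 <= z < volume.shape[0] - 1:
+        window = volume[z - 1:z + 2]           # three adjacent slices
+        pred = predict_window(window, prompt)  # masks for all three slices
+        masks[z - 1:z + 2] |= pred.astype(bool)
+        if not pred[2].any():                  # object ends: stop sliding
+            break
+        prompt = pred[2]                       # border mask seeds next window
+        z += 1
+    return masks
+```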
+
+![training](imgs/training.png)
+
+![inference](imgs/inference.png)
+
+## Citation
+
+If the code, paper, or weights help your research, please cite:
+
+```
+@inproceedings{quan2024slide,
+ title={Slide-SAM: Medical SAM Meets Sliding Window},
+ author={Quan, Quan and Tang, Fenghe and Xu, Zikang and Zhu, Heqin and Zhou, S Kevin},
+ booktitle={Medical Imaging with Deep Learning},
+ year={2024}
+}
+```
+
+## License
+
+This project is released under the Apache 2.0 license. Please see the [LICENSE](LICENSE) file for more information.