sankim2 committed on
Commit 77f7a16 · verified · 1 Parent(s): 241bfc5

Update README.md

Files changed (1)
  1. README.md +4 -1
README.md CHANGED
@@ -11,5 +11,8 @@ tags:
 
 Authors: [Sanghwan Kim](https://kim-sanghwan.github.io/), [Rui Xiao](https://www.eml-munich.de/people/rui-xiao), [Mariana-Iuliana Georgescu](https://lilygeorgescu.github.io/), [Stephan Alaniz](https://www.eml-munich.de/people/stephan-alaniz), [Zeynep Akata](https://www.eml-munich.de/people/zeynep-akata)
 
- COSMOS is introduced in the paper [COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training](https://arxiv.org/abs/2412.01814). COSMOS is trained in a self-supervised learning framework with multi-modal augmentation and a cross-attention module. It outperforms CLIP-based models trained on larger datasets on visual perception and contextual understanding tasks. COSMOS achieves strong performance in downstream tasks including zero-shot image-text retrieval, classification, and semantic segmentation.
+ COSMOS is introduced in the paper [COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training](https://arxiv.org/abs/2412.01814). COSMOS is trained in a self-supervised learning framework with multi-modal augmentation and a cross-attention module. It outperforms CLIP-based models trained on larger datasets on visual perception and contextual understanding tasks. COSMOS also achieves strong performance in downstream tasks including zero-shot image-text retrieval, classification, and semantic segmentation.
+
+ **Usage**
+
+ Please refer to our [GitHub repo](https://github.com/ExplainableML/cosmos) for detailed usage.
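For quick orientation, below is a minimal sketch of the CLIP-style zero-shot classification workflow that a dual-encoder model like COSMOS supports. The `cosmos` module, `load_model` helper, and checkpoint name are hypothetical placeholders, not the repo's actual API; see the [GitHub repo](https://github.com/ExplainableML/cosmos) for the real entry points.

```python
# Hedged sketch only: `cosmos`, `load_model`, and the checkpoint name below are
# hypothetical placeholders; consult https://github.com/ExplainableML/cosmos
# for the actual loading API.
import torch
from PIL import Image

import cosmos  # hypothetical package name

model, preprocess, tokenizer = cosmos.load_model("cosmos-vit-b-16")  # hypothetical helper
model.eval()

# One image, two candidate captions: the standard CLIP-style zero-shot setup.
image = preprocess(Image.open("example.jpg")).unsqueeze(0)
texts = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(texts)

    # L2-normalize so the dot product equals cosine similarity.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

    # Softmax over scaled similarities yields per-caption probabilities.
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)
```

The same normalized embeddings can be reused for zero-shot image-text retrieval by ranking similarities across a gallery instead of applying a softmax over captions.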