Collections including paper arXiv:2502.14786

- EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
  Paper • 2402.04252 • Published • 28
- Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
  Paper • 2402.03749 • Published • 14
- ScreenAI: A Vision-Language Model for UI and Infographics Understanding
  Paper • 2402.04615 • Published • 44
- EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
  Paper • 2402.05008 • Published • 23
- seanghay/khmer_mpwt_speech
  Dataset Viewer • Updated • 2.06k • 140 • 8
- DeepSeek LLM: Scaling Open-Source Language Models with Longtermism
  Paper • 2401.02954 • Published • 48
- openai/whisper-large-v3-turbo
  Automatic Speech Recognition • 0.8B • Updated • 4.01M • 2.67k (transcription sketch after this list)
- The Ultra-Scale Playbook
  Space • 3.45k • 🌌 The ultimate guide to training LLMs on large GPU clusters
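This collection pairs an ASR checkpoint with a speech dataset, so a minimal transcription sketch may help. It assumes transformers and torch are installed; the path "sample.wav" is a placeholder, and one could equally feed the pipeline clips from the seanghay/khmer_mpwt_speech dataset listed above.

```python
# Minimal sketch: transcribing speech with whisper-large-v3-turbo via the
# transformers pipeline API. "sample.wav" is a placeholder path, not a file
# from the source page.
from transformers import pipeline

asr = pipeline(
    task="automatic-speech-recognition",
    model="openai/whisper-large-v3-turbo",
)

# The pipeline accepts a local path, a URL, or a raw numpy array;
# return_timestamps=True also yields per-segment timestamps.
result = asr("sample.wav", return_timestamps=True)
print(result["text"])
```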
- SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features
  Paper • 2502.14786 • Published • 154
- Scaling Text-Rich Image Understanding via Code-Guided Synthetic Multimodal Data Generation
  Paper • 2502.14846 • Published • 14
- RelaCtrl: Relevance-Guided Efficient Control for Diffusion Transformers
  Paper • 2502.14377 • Published • 12
- Compare Siglip1 Siglip2
  Space • 53 • 🚀 Compare SigLIP 1 and SigLIP 2 on zero-shot classification
- SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features
  Paper • 2502.14786 • Published • 154
- google/siglip2-base-patch16-224
  Zero-Shot Image Classification • 0.4B • Updated • 477k • 75 (usage sketch after this list)
- google/siglip2-base-patch16-256
  Zero-Shot Image Classification • 0.4B • Updated • 52.4k • 6
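A minimal sketch of what the space above compares: zero-shot classification with one of the SigLIP 2 checkpoints listed. This assumes a transformers release recent enough to include SigLIP 2 support; the image URL and candidate labels are illustrative, not from the source page.

```python
# Minimal sketch: zero-shot image classification with a SigLIP 2 checkpoint
# via the transformers pipeline API (requires a 2025-era transformers
# release with SigLIP 2 support).
import requests
from PIL import Image
from transformers import pipeline

classifier = pipeline(
    task="zero-shot-image-classification",
    model="google/siglip2-base-patch16-224",
)

# Illustrative test image (two cats on a couch, from the COCO val set).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# SigLIP-style models score each candidate label independently with a
# sigmoid, so the returned scores need not sum to 1.
outputs = classifier(image, candidate_labels=["two cats", "a dog", "an airplane"])
print(outputs)
```

The same call with google/siglip2-base-patch16-256 differs only in the input resolution the checkpoint was trained at.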
- SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features
  Paper • 2502.14786 • Published • 154
- LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in Vision-Language Models
  Paper • 2502.14834 • Published • 24
- Qwen2.5-VL Technical Report
  Paper • 2502.13923 • Published • 208
- DICEPTION: A Generalist Diffusion Model for Visual Perceptual Tasks
  Paper • 2502.17157 • Published • 52
- QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation
  Paper • 2502.05178 • Published • 10
- Scaling Text-Rich Image Understanding via Code-Guided Synthetic Multimodal Data Generation
  Paper • 2502.14846 • Published • 14
- SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features
  Paper • 2502.14786 • Published • 154
- Efficient LLaMA-3.2-Vision by Trimming Cross-attended Visual Features
  Paper • 2504.00557 • Published • 15