kimyoungjune committed
Commit 28ed419 · verified · 1 Parent(s): 8647152

Update README.md

Files changed (1):
  1. README.md +184 -25
README.md CHANGED
@@ -1,40 +1,199 @@
  ---
- base_model: []
  library_name: transformers
  tags:
- - mergekit
- - merge
-
  ---
- # vv21_llava_qwen3_linear_250711_15
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * /home/work/.varco_mllm/checkpoints-v2d1/training/vv2d1-llava-qwen3-14b-st4-250708/checkpoint-1400_hf
- * /home/work/.varco_mllm/checkpoints-v2d1/training/vv2d1-llava-qwen3-14b-st4-250708/checkpoint-1548_hf
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- models:
-   - model: /home/work/.varco_mllm/checkpoints-v2d1/training/vv2d1-llava-qwen3-14b-st4-250708/checkpoint-1400_hf
-     parameters:
-       weight: 1.0
-   - model: /home/work/.varco_mllm/checkpoints-v2d1/training/vv2d1-llava-qwen3-14b-st4-250708/checkpoint-1548_hf
-     parameters:
-       weight: 4.0
- merge_method: linear
- dtype: float16
  ```
  ---
+ license: cc-by-nc-4.0
+ base_model:
+ - Qwen/Qwen3-14B
+ - google/siglip2-so400m-patch16-384
  library_name: transformers
  tags:
+ - multimodal
+ - conversational
+ - ncsoft
+ - ncai
+ - varco
+ pipeline_tag: image-text-to-text
+ language:
+ - en
+ - ko
  ---
 
+ # VARCO-VISION-2.0-14B
+
+ ## Introduction
+ **VARCO-VISION-2.0** is a multimodal AI model that understands both images and text to answer user queries. It supports multi-image inputs, enabling effective processing of complex content such as documents, tables, and charts. The model demonstrates strong comprehension in both Korean and English, with significantly improved text generation and a deeper understanding of Korean cultural context. Compared to its predecessor, it performs notably better across a range of benchmarks, and its usability in real-world scenarios, such as everyday Q&A and information summarization, has also improved.
+
+ In addition to the full-scale 14B model, a lightweight 1.7B version is available for on-device use, making it accessible on personal devices such as smartphones and PCs. VARCO-VISION-2.0 is a powerful open-source AI model built for Korean users and is freely available for a wide range of applications.
+
+ ## 🚨 News 🎙️
+ - 👀 We will release VARCO-VISION-2.0-1.7B-OCR soon!
+ - 👀 We will release VARCO-VISION-2.0-1.7B soon!
+ - 📰 2025-07-16: We released VARCO-VISION-2.0-14B at [this link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-14B).
+ - 📰 2025-07-16: We released GME-VARCO-VISION-Embedding at [this link](https://huggingface.co/NCSOFT/GME-VARCO-VISION-Embedding).
+
+ ## Key Features
+ - **Multi-image Understanding**: Newly added support for multi-image inputs enables the model to analyze multiple images simultaneously and make more holistic, context-aware decisions.
+ - **Korean Language Specialization**: The model is further specialized for Korean, with a deeper understanding of the Korean language, context, and culture. Korean text generation has been significantly improved, resulting in more natural, fluent, and accurate responses.
+ - **OCR with Text Localization**: Unlike typical models that only recognize and generate text from images, VARCO-VISION-2.0 can also identify where the text appears and provide bounding boxes around it. This makes it especially useful for document understanding, signage interpretation, and extracting structured visual data.
+ - **Enhanced Safety**: Improved robustness and filtering ensure safer handling of harmful or sexually explicit content.
+
+ <div align="center">
+ <img src="./Gimbap_Example-1-20250709-032708.png" width="100%" />
+ </div>
+
+ ## VARCO-VISION-2.0 Family
+ | Model Name | Base Models (Vision / Language) | HF Link |
+ | :------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------: |
+ | VARCO-VISION-2.0-1.7B | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B) |
+ | VARCO-VISION-2.0-14B | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-14B) |
+ | VARCO-VISION-2.0-1.7B-OCR | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B-OCR) |
+ | GME-VARCO-VISION-Embedding | [Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) | [link](https://huggingface.co/NCSOFT/GME-VARCO-VISION-Embedding) |
+
+ ## Model Architecture
+ VARCO-VISION-2.0 follows the architecture of [LLaVA-OneVision](https://arxiv.org/abs/2408.03326): a SigLIP2 vision encoder and a Qwen3 language model connected through a projector, as sketched below.
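As a quick orientation, a minimal sketch like the following can inspect how the checkpoint is composed without downloading the weights; the config field names assume the standard `LlavaOnevisionConfig` layout in `transformers`.

```python
from transformers import AutoConfig

# Minimal sketch: load only the configuration and print the two components
# (field names assume the LlavaOnevisionConfig layout used by transformers).
config = AutoConfig.from_pretrained("NCSOFT/VARCO-VISION-2.0-14B")
print(type(config).__name__)            # top-level multimodal config class
print(config.vision_config.model_type)  # vision tower (SigLIP2-based)
print(config.text_config.model_type)    # language model (Qwen3-based)
```
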
+ ## Evaluation
+ We adopted benchmark scores directly from the [OpenVLM Leaderboard](https://huggingface.co/spaces/opencompass/open_vlm_leaderboard) where available, and ran our own evaluations for benchmarks it does not cover, comparing against various open-source models to provide a fair and comprehensive evaluation. In each table, the best score in a row is shown in **bold** and the second-best in *italics*.
+ Please note that for benchmarks involving LLM-based judging (e.g., LLaVABench), results may not be exactly reproducible due to variations in the underlying judge LLM's behavior.
+
+ ### English Benchmark
+ | Benchmark | InternVL3-14B | Ovis2-16B | Qwen2.5-VL-7B | VARCO-VISION-2.0-14B |
+ | :-----------: | :-----------: | :-------: | :-----------: | :------------------: |
+ | MMStar | **68.9** | *67.2* | 64.1 | 64.8 |
+ | SEEDBench_IMG | 77.5 | *77.7* | 77.0 | **78.3** |
+ | LLaVABench | 84.4 | **93.0** | *91.0* | 90.0 |
+ | OCRBench | 877 | *879* | **888** | 863 |
+
+ ### Korean Benchmark
+ | Benchmark | InternVL3-14B | Ovis2-16B | Qwen2.5-VL-7B | VARCO-VISION-2.0-14B |
+ | :----------: | :-----------: | :-------: | :-----------: | :------------------: |
+ | K-MMStar | **64.9** | 29.7 | 49.3 | *63.3* |
+ | K-SEED | **78.2** | 73.2 | 75.7 | *77.4* |
+ | K-LLaVABench | 80.9 | 86.3 | *94.1* | **95.1** |
+ | K-DTCBench | **87.9** | 81.7 | *82.1* | 79.6 |
+
+ ### Korean Cultural Benchmark
+ | Benchmark | InternVL3-14B | Ovis2-16B | Qwen2.5-VL-7B | VARCO-VISION-2.0-14B |
+ | :--------------: | :-----------: | :-------: | :-----------: | :------------------: |
+ | K-Viscuit | 71.7 | **77.0** | 70.9 | *72.9* |
+ | PangeaBench (ko) | **77.2** | *76.9* | 76.6 | 75.2 |
+
+ ### Text-only Benchmark
+ | Benchmark | InternVL3-14B | Ovis2-16B | Qwen2.5-VL-7B | VARCO-VISION-2.0-14B |
+ | :--------: | :-----------: | :-------: | :-----------: | :------------------: |
+ | MMLU | **78.5** | *78.4* | 4.6 | 77.7 |
+ | MT-Bench | **8.93** | 8.59 | 8.07 | *8.88* |
+ | KMMLU | *51.4* | 49.3 | 39.6 | **57.4** |
+ | KoMT-Bench | 7.01 | *7.91* | 6.84 | **7.95** |
+ | LogicKor | 7.00 | **7.94** | 6.55 | *7.86* |
+
+ **Note**: Some models score unusually low on MMLU, primarily because they fail to follow the expected output format when only few-shot exemplars are provided in the prompt. Please keep this in mind when interpreting the results.
+
+ ### OCR Benchmark
+ | Benchmark | PaddleOCR | VARCO-VISION-2.0-14B |
+ | :-------: | :-------: | :------------------: |
+ | CORD | *91.4* | **93.3** |
+ | ICDAR2013 | *92.0* | **93.2** |
+ | ICDAR2015 | *73.7* | **82.7** |
+
+ ## Usage
+ To use this model, we recommend `transformers` version **4.53.1 or higher**. It may work with earlier versions, but **4.53.1 or above** is strongly recommended, especially for reliable **multi-image** support. A quick way to check the installed version is shown below.
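A minimal sketch for verifying the version floor at runtime (it relies on `packaging`, which ships as a `transformers` dependency):

```python
from importlib.metadata import version
from packaging.version import Version

# Minimal sketch: confirm the installed transformers meets the recommended 4.53.1 floor.
if Version(version("transformers")) < Version("4.53.1"):
    raise RuntimeError("Please upgrade: pip install -U 'transformers>=4.53.1'")
```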
+
+ The basic usage is **identical to** [LLaVA-OneVision](https://huggingface.co/docs/transformers/main/en/model_doc/llava_onevision#usage-example):
+
+ ```python
+ import requests
+ from PIL import Image
+ import torch
+ from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration
+
+ # Load the model and its processor.
+ model_name = "NCSOFT/VARCO-VISION-2.0-14B"
+ model = LlavaOnevisionForConditionalGeneration.from_pretrained(
+     model_name,
+     torch_dtype=torch.float16,
+     attn_implementation="sdpa",
+     device_map="auto",
+ )
+ processor = AutoProcessor.from_pretrained(model_name)
+
+ # A multi-turn conversation with two images and English questions.
+ conversation_1 = [
+     {
+         "role": "user",
+         "content": [
+             {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
+             {"type": "text", "text": "What is shown in this image?"},
+         ],
+     },
+     {
+         "role": "assistant",
+         "content": [
+             {"type": "text", "text": "There is a red stop sign in the image."},
+         ],
+     },
+     {
+         "role": "user",
+         "content": [
+             {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
+             {"type": "text", "text": "What about this image? How many cats do you see?"},
+         ],
+     },
+ ]
+ # A single-turn conversation with a Korean question.
+ conversation_2 = [
+     {
+         "role": "user",
+         "content": [
+             {"type": "image", "url": "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg"},
+             {"type": "text", "text": "이 이미지에는 무엇이 보이나요?"},  # "What do you see in this image?"
+         ],
+     },
+ ]
+
+ # Tokenize both conversations as a padded batch.
+ inputs = processor.apply_chat_template(
+     [conversation_1, conversation_2],
+     add_generation_prompt=True,
+     tokenize=True,
+     return_dict=True,
+     padding=True,
+     return_tensors="pt",
+ ).to(model.device, torch.float16)
+
+ generate_ids = model.generate(**inputs, max_new_tokens=1024, do_sample=False)
+ outputs = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
+ print(outputs)
+ ```
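To print only the assistant's replies rather than the full decoded sequences, you can slice off the prompt tokens before decoding; this small variant reuses `inputs` and `generate_ids` from the snippet above.

```python
# Decode only the newly generated tokens from the batched generation above.
new_tokens = generate_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True))
```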
+ The following shows the input used for OCR with text localization, along with the corresponding output:
+ ```python
+ # INPUT
+ image_file = "./assets/ocr.jpg"
+ raw_image = Image.open(image_file)
+ conversation = [
+     {
+         "role": "user",
+         "content": [
+             {"type": "text", "text": "<ocr>"},
+             {"type": "image"},
+         ],
+     },
+ ]
+
+ # Run the query (model and processor are loaded as in the example above).
+ prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
+ inputs = processor(images=raw_image, text=prompt, return_tensors="pt").to(model.device, torch.float16)
+ generate_ids = model.generate(**inputs, max_new_tokens=1024, do_sample=False)
+ output_ids = generate_ids[:, inputs["input_ids"].shape[1]:]
+ print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
+
+ # OUTPUT
+ """
+ <char>백범로</char><bbox>0.172, 0.266, 0.328, 0.341</bbox>
+ <char>124번길</char><bbox>0.347, 0.266, 0.512, 0.341</bbox>
+ <char>Baekbeom-ro</char><bbox>0.171, 0.337, 0.433, 0.392</bbox>
+ <char>124</char><bbox>0.444, 0.341, 0.508, 0.392</bbox>
+ <char>만수주공아파트</char><bbox>0.109, 0.531, 0.335, 0.601</bbox>
+ <char>시흥</char><bbox>0.443, 0.518, 0.522, 0.581</bbox>
+ <char>시청</char><bbox>0.711, 0.521, 0.811, 0.594</bbox>
+ <char>Mansu</char><bbox>0.102, 0.601, 0.181, 0.648</bbox>
+ <char>Jugong</char><bbox>0.186, 0.601, 0.273, 0.658</bbox>
+ <char>Apt</char><bbox>0.28, 0.601, 0.327, 0.651</bbox>
+ <char>42</char><bbox>0.377, 0.601, 0.416, 0.648</bbox>
+ <char>Shieung</char><bbox>0.445, 0.578, 0.53, 0.625</bbox>
+ <char>인천대공원</char><bbox>0.43, 0.621, 0.609, 0.684</bbox>
+ <char>모래내시장역</char><bbox>0.651, 0.59, 0.873, 0.665</bbox>
+ <char>IncheonGrand</char><bbox>0.432, 0.681, 0.561, 0.723</bbox>
+ <char>Park</char><bbox>0.564, 0.681, 0.611, 0.723</bbox>
+ """
  ```
+ <div align="center">
+ <img src="./ocr.jpg" width="100%" />
+ </div>
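If pixel coordinates are needed, the `<char>`/`<bbox>` items can be post-processed with a small helper such as the sketch below. It assumes each item follows the `<char>TEXT</char><bbox>x1, y1, x2, y2</bbox>` pattern shown above and that the four values are corner coordinates normalized to the image width and height.

```python
import re
from PIL import Image

# One OCR item: the recognized text and its (assumed normalized) corner coordinates.
OCR_ITEM = re.compile(r"<char>(.*?)</char><bbox>(.*?)</bbox>")

def parse_ocr_output(output: str, image: Image.Image) -> list[tuple[str, tuple[float, float, float, float]]]:
    """Convert <char>/<bbox> items into (text, pixel bounding box) pairs."""
    width, height = image.size
    results = []
    for text, box in OCR_ITEM.findall(output):
        x1, y1, x2, y2 = (float(v) for v in box.split(","))
        results.append((text, (x1 * width, y1 * height, x2 * width, y2 * height)))
    return results

# Example usage with the variables from the OCR snippet above:
# boxes = parse_ocr_output(ocr_text, raw_image)   # ocr_text is the decoded model output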