Upload LLaVA-OneVision LoRA fine-tuned model on GuruShots dataset
- README.md +106 -3
- adapter_config.json +26 -0
- adapter_model.safetensors +3 -0
- added_tokens.json +6 -0
- config.json +220 -0
- generation_config.json +14 -0
- merges.txt +0 -0
- non_lora_trainables.bin +3 -0
- special_tokens_map.json +20 -0
- tokenizer.json +0 -0
- tokenizer_config.json +53 -0
- vocab.json +0 -0
README.md CHANGED
@@ -1,3 +1,106 @@
- ---
-
-
+ ---
+ library_name: peft
+ base_model: lmms-lab/llava-onevision-qwen2-0.5b-ov
+ tags:
+ - vision-language
+ - multimodal
+ - llava
+ - lora
+ - fine-tuned
+ - gurushots
+ - photography
+ license: apache-2.0
+ datasets:
+ - Dataseeds/GuruShots-Sample-Dataset-GSD
+ language:
+ - en
+ pipeline_tag: image-text-to-text
+ ---
+
+ # LLaVA-OneVision-Qwen2-0.5b-ov Fine-tuned on GuruShots Dataset
+
+ This model is a LoRA (Low-Rank Adaptation) fine-tuned version of [lmms-lab/llava-onevision-qwen2-0.5b-ov](https://huggingface.co/lmms-lab/llava-onevision-qwen2-0.5b-ov) on the GuruShots photography dataset.
+
+ ## Model Description
+
+ - **Base Model**: LLaVA-OneVision-Qwen2-0.5b-ov
+ - **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
+ - **LoRA Rank**: 32
+ - **LoRA Alpha**: 32
+ - **LoRA Dropout**: 0.1
+ - **Target Modules**: v_proj, k_proj, q_proj, up_proj, gate_proj, down_proj, o_proj
+ - **Dataset**: GuruShots Sample Dataset (GSD) [Dataseeds/GuruShots-Sample-Dataset-GSD](https://huggingface.co/datasets/Dataseeds/GuruShots-Sample-Dataset-GSD)
+ - **Task**: Vision-Language Understanding for Photography
+
+ ## Training Details
+
+ This model was fine-tuned using LoRA on a curated dataset from GuruShots, focusing on photography-related vision-language tasks. The training was performed with:
+
+ - LoRA rank: 32
+ - LoRA alpha: 32
+ - Dropout: 0.1
+ - Target modules: All attention and MLP projection layers
+
+ ## Usage
+
+ ```python
+ from peft import PeftModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch
+
+ # Load base model
+ base_model = AutoModelForCausalLM.from_pretrained(
+     "lmms-lab/llava-onevision-qwen2-0.5b-ov",
+     torch_dtype=torch.bfloat16,
+     device_map="auto"
+ )
+
+ # Load LoRA adapter
+ model = PeftModel.from_pretrained(
+     base_model,
+     "Dataseeds/LLaVA-OneVision-Qwen2-0.5b-ov-GSD-FineTune"
+ )
+
+ # Load tokenizer
+ tokenizer = AutoTokenizer.from_pretrained("lmms-lab/llava-onevision-qwen2-0.5b-ov")
+
+ # Use the model for inference
+ # (Add your inference code here)
+ ```
+
+ ## Model Architecture
+
+ The model maintains the same architecture as the base LLaVA-OneVision model:
+ - Vision Encoder: SigLIP-SO400M
+ - Language Model: Qwen2-0.5B
+ - Multimodal Projector: MLP with GELU activation
+
+ ## Training Data
+
+ The model was fine-tuned on the GuruShots Sample Dataset, which contains photography images with associated metadata and descriptions.
+
+ ## Limitations and Biases
+
+ This model inherits the limitations of the base LLaVA-OneVision model and may have additional biases introduced through the fine-tuning dataset. The model is specifically optimized for photography-related tasks and may not perform as well on general vision-language tasks.
+
+ ## Citation
+
+ If you use this model, please cite:
+
+ ```bibtex
+ @misc{llava-onevision-gsd-finetune,
+   title={LLaVA-OneVision Fine-tuned on GuruShots Dataset},
+   author={Dataseeds},
+   year={2024},
+   publisher={Hugging Face},
+   url={https://huggingface.co/Dataseeds/LLaVA-OneVision-Qwen2-0.5b-ov-GSD-FineTune}
+ }
+
+ @article{li2024llavaonevision,
+   title={LLaVA-OneVision: Easy Visual Task Transfer},
+   author={Li, Bo and Zhang, Yuanhan and Guo, Dong and Zhang, Renrui and Li, Feng and Zhang, Hao and Zhang, Kaichen and Li, Yanwei and Liu, Ziwei and Li, Chunyuan},
+   journal={arXiv preprint arXiv:2408.03326},
+   year={2024}
+ }
+ ```
+
+ ## License
+
+ This model is released under the Apache 2.0 license, following the base model's licensing.
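Note: the README's Usage snippet stops at model loading ("Add your inference code here"). The sketch below is one possible continuation, assuming the lmms-lab base checkpoint actually loads through `AutoModelForCausalLM` as shown in the card (the official release normally relies on the LLaVA-OneVision codebase for full image support). The prompt text and generation parameters are illustrative assumptions, and the example is text-only; it does not demonstrate image preprocessing.

```python
# Hypothetical continuation of the README's Usage snippet: text-only generation
# through the LoRA-adapted model. Image inputs are NOT handled here; that requires
# the LLaVA-OneVision preprocessing pipeline from the lmms-lab codebase.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained(
    "lmms-lab/llava-onevision-qwen2-0.5b-ov",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    base_model, "Dataseeds/LLaVA-OneVision-Qwen2-0.5b-ov-GSD-FineTune"
)
tokenizer = AutoTokenizer.from_pretrained("lmms-lab/llava-onevision-qwen2-0.5b-ov")

# Build a ChatML prompt with the tokenizer's chat template (see tokenizer_config.json below).
messages = [{"role": "user", "content": "Describe the composition of a good landscape photo."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```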
adapter_config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "auto_mapping": null,
+   "base_model_name_or_path": "lmms-lab/llava-onevision-qwen2-0.5b-ov",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "lora_alpha": 32,
+   "lora_dropout": 0.1,
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 32,
+   "revision": null,
+   "target_modules": [
+     "v_proj",
+     "k_proj",
+     "q_proj",
+     "up_proj",
+     "gate_proj",
+     "down_proj",
+     "o_proj"
+   ],
+   "task_type": "CAUSAL_LM"
+ },
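The adapter config above can be read programmatically instead of eyeballing the JSON. A minimal sketch, assuming the repo id from the model card:

```python
# Sketch: read the LoRA hyperparameters straight from the published adapter_config.json.
from peft import PeftConfig

cfg = PeftConfig.from_pretrained("Dataseeds/LLaVA-OneVision-Qwen2-0.5b-ov-GSD-FineTune")
print(cfg.base_model_name_or_path)              # lmms-lab/llava-onevision-qwen2-0.5b-ov
print(cfg.r, cfg.lora_alpha, cfg.lora_dropout)  # 32, 32, 0.1
print(sorted(cfg.target_modules))               # attention + MLP projection layers
```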
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc055a3dbfc3e782b959393f04304c1cd15b95a422d6ff1512c06eae70976094
+ size 81960088
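The three lines above are a Git LFS pointer, not the weights themselves; the actual ~82 MB safetensors file is fetched when the repo is resolved. A small sketch of pulling and inspecting it, assuming `huggingface_hub` and `safetensors` are installed:

```python
# Sketch: resolve the LFS pointer to the real adapter_model.safetensors and peek inside.
from huggingface_hub import hf_hub_download
from safetensors import safe_open

path = hf_hub_download(
    repo_id="Dataseeds/LLaVA-OneVision-Qwen2-0.5b-ov-GSD-FineTune",
    filename="adapter_model.safetensors",
)
with safe_open(path, framework="pt") as f:
    print(len(f.keys()), "LoRA tensors")  # lora_A / lora_B matrices for the target modules
```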
added_tokens.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "<image>": 151646,
+   "<|endoftext|>": 151643,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644
+ }
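These ids can be sanity-checked against the tokenizer shipped in this repo. A quick sketch, assuming `AutoTokenizer` can load the tokenizer files uploaded alongside the adapter:

```python
# Sketch: confirm the special-token ids in added_tokens.json against the shipped tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Dataseeds/LLaVA-OneVision-Qwen2-0.5b-ov-GSD-FineTune")
for token in ("<image>", "<|endoftext|>", "<|im_start|>", "<|im_end|>"):
    print(token, tokenizer.convert_tokens_to_ids(token))
# Expected: 151646, 151643, 151644, 151645, matching the JSON above.
```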
config.json ADDED
@@ -0,0 +1,220 @@
+ {
+   "_name_or_path": "lmms-lab/llava-onevision-qwen2-0.5b-ov",
+   "add_faster_video": false,
+   "add_time_instruction": false,
+   "architectures": ["LlavaQwenForCausalLM"],
+   "attention_dropout": 0.0,
+   "bos_token_id": 151643,
+   "eos_token_id": 151645,
+   "faster_token_stride": 10,
+   "force_sample": false,
+   "hidden_act": "silu",
+   "hidden_size": 896,
+   "ignore_index": -100,
+   "image_aspect_ratio": "anyres_max_9",
+   "image_crop_resolution": null,
+   "image_grid_pinpoints": [
+     [384, 384], [384, 768], [384, 1152], [384, 1536], [384, 1920], [384, 2304],
+     [768, 384], [768, 768], [768, 1152], [768, 1536], [768, 1920], [768, 2304],
+     [1152, 384], [1152, 768], [1152, 1152], [1152, 1536], [1152, 1920], [1152, 2304],
+     [1536, 384], [1536, 768], [1536, 1152], [1536, 1536], [1536, 1920], [1536, 2304],
+     [1920, 384], [1920, 768], [1920, 1152], [1920, 1536], [1920, 1920], [1920, 2304],
+     [2304, 384], [2304, 768], [2304, 1152], [2304, 1536], [2304, 1920], [2304, 2304]
+   ],
+   "image_split_resolution": null,
+   "image_token_index": 151646,
+   "initializer_range": 0.02,
+   "intermediate_size": 4864,
+   "max_position_embeddings": 32768,
+   "max_window_layers": 24,
+   "mm_hidden_size": 1152,
+   "mm_newline_position": "grid",
+   "mm_patch_merge_type": "spatial_unpad",
+   "mm_projector_lr": null,
+   "mm_projector_type": "mlp2x_gelu",
+   "mm_resampler_type": null,
+   "mm_spatial_pool_mode": "bilinear",
+   "mm_spatial_pool_stride": null,
+   "mm_tunable_parts": "mm_mlp_adapter,mm_language_model",
+   "mm_use_im_patch_token": false,
+   "mm_use_im_start_end": false,
+   "mm_vision_select_feature": "patch",
+   "mm_vision_select_layer": -2,
+   "mm_vision_tower": "google/siglip-so400m-patch14-384",
+   "mm_vision_tower_lr": null,
+   "model_type": "llava",
+   "num_attention_heads": 14,
+   "num_hidden_layers": 24,
+   "num_key_value_heads": 2,
+   "pos_skipping_range": 4096,
+   "projector_hidden_act": "gelu",
+   "rms_norm_eps": 1e-06,
+   "rope_scaling": null,
+   "rope_theta": 1000000.0,
+   "sliding_window": 32768,
+   "text_config": {
+     "model_type": "llama"
+   },
+   "tokenizer_model_max_length": 8192,
+   "tokenizer_padding_side": "right",
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.40.0.dev0",
+   "use_cache": true,
+   "use_mm_proj": true,
+   "use_pos_skipping": false,
+   "use_sliding_window": false,
+   "vision_config": {
+     "hidden_size": 1024,
+     "image_size": 336,
+     "intermediate_size": 4096,
+     "model_type": "clip_vision_model",
+     "num_attention_heads": 16,
+     "num_hidden_layers": 24,
+     "patch_size": 14,
+     "projection_dim": 768,
+     "vocab_size": 32000
+   },
+   "vision_feature_layer": -2,
+   "vision_feature_select_strategy": "default",
+   "vision_tower_pretrained": null
+ }
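config.json records the full multimodal setup (SigLIP vision tower, mlp2x_gelu projector, tuned parts). A sketch of inspecting the fields most relevant to this fine-tune without instantiating the model; it assumes only `huggingface_hub` and `json`, since the `"llava"` / `LlavaQwenForCausalLM` combination is defined in the LLaVA-OneVision codebase rather than guaranteed to map cleanly onto vanilla `transformers` auto classes:

```python
# Sketch: inspect the multimodal settings recorded in config.json without loading the model.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Dataseeds/LLaVA-OneVision-Qwen2-0.5b-ov-GSD-FineTune",
    filename="config.json",
)
with open(path) as f:
    cfg = json.load(f)

for key in ("mm_vision_tower", "mm_projector_type", "mm_tunable_parts",
            "image_aspect_ratio", "hidden_size", "num_hidden_layers"):
    print(f"{key}: {cfg[key]}")
```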
generation_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "bos_token_id": 151643,
+   "do_sample": true,
+   "eos_token_id": [
+     151645,
+     151643
+   ],
+   "pad_token_id": 151643,
+   "repetition_penalty": 1.1,
+   "temperature": 0.7,
+   "top_k": 20,
+   "top_p": 0.8,
+   "transformers_version": "4.40.0.dev0"
+ }
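These sampling defaults (nucleus sampling with temperature 0.7, top-p 0.8, top-k 20, repetition penalty 1.1) can be picked up as a `transformers` `GenerationConfig` and passed to `generate()`. A minimal sketch, assuming the repo id from the card:

```python
# Sketch: load the sampling defaults above and hand them to generate().
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("Dataseeds/LLaVA-OneVision-Qwen2-0.5b-ov-GSD-FineTune")
print(gen_cfg.do_sample, gen_cfg.temperature, gen_cfg.top_p, gen_cfg.top_k, gen_cfg.repetition_penalty)
# e.g. model.generate(**inputs, generation_config=gen_cfg)
```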
merges.txt ADDED
(Diff too large to render; see the raw file.)
non_lora_trainables.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bfc7ec0fa71c49298d2e3eb49bd89edc0abfe2a4c9e636591ed48ed6dc1759fc
+ size 991842600
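non_lora_trainables.bin holds the parameters trained outside the LoRA adapter; given `mm_tunable_parts` in config.json, that plausibly includes the multimodal projector. The sketch below follows the usual LLaVA-style convention for merging such a file back into the base model; the `"base_model.model."` key prefix and the reuse of `base_model` from the README's Usage snippet are assumptions to verify against the actual state dict.

```python
# Sketch: merge the non-LoRA trainables (e.g. the mm projector) back into the base model.
# Key naming follows the common LLaVA convention; check it against the real file first.
import torch
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Dataseeds/LLaVA-OneVision-Qwen2-0.5b-ov-GSD-FineTune",
    filename="non_lora_trainables.bin",
)
state = torch.load(path, map_location="cpu")
state = {k.removeprefix("base_model.model."): v for k, v in state.items()}

# base_model is the LLaVA-OneVision model loaded in the README's Usage snippet.
missing, unexpected = base_model.load_state_dict(state, strict=False)
print(f"loaded {len(state)} tensors, {len(unexpected)} unexpected keys")
```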
special_tokens_map.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>"
+   ],
+   "eos_token": {
+     "content": "<|im_end|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
(Diff too large to render; see the raw file.)
tokenizer_config.json ADDED
@@ -0,0 +1,53 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151645": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151646": {
+       "content": "<image>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>"
+   ],
+   "bos_token": null,
+   "chat_template": "{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n' }}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "errors": "replace",
+   "model_max_length": 8192,
+   "pad_token": "<|endoftext|>",
+   "padding_side": "right",
+   "processor_class": "LlavaProcessor",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
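The `chat_template` entry above is the Qwen2 ChatML format, including an injected default system prompt when none is supplied. A short sketch of what the rendered prompt looks like; the example user message is made up, and loading the tokenizer from this repo is assumed to work with the files listed here:

```python
# Sketch: render the ChatML prompt produced by the chat_template above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Dataseeds/LLaVA-OneVision-Qwen2-0.5b-ov-GSD-FineTune")
messages = [{"role": "user", "content": "<image>\nWhat makes this photo stand out?"}]
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
# -> <|im_start|>system\nYou are a helpful assistant.<|im_end|>\n
#    <|im_start|>user\n<image>\nWhat makes this photo stand out?<|im_end|>\n
#    <|im_start|>assistant\n
```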
vocab.json ADDED
(Diff too large to render; see the raw file.)