---
base_model:
- tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5
base_model_relation: quantized
---
# Swallow-8B-it-v05-gguf-q6_k-mixed-v1

* **Quantization Type**: Mixed Precision (`q5_K`, `q6_K`, `q8_0`)
* **Bits Per Weight (BPW)**: `7.13`

# Swallow-8B-it-v05-gguf-q6_k-mixed-v2

* **Quantization Type**: Mixed Precision (`q6_K`, `q8_0`)
* **Bits Per Weight (BPW)**: `7.50`

# Swallow-8B-it-v05-gguf-q8_0-mixed-v1

* **Quantization Type**: Mixed Precision (`bf16`, `q4_K`, `q5_K`, `q6_K`, `q8_0`)
* **Bits Per Weight (BPW)**: `8.01`
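The BPW figures above let you estimate a file's download size before fetching it: file size ≈ parameters × BPW / 8 bytes. A minimal sketch of that arithmetic, assuming a rough 8.0B parameter count for the base model and ignoring the (small) GGUF metadata overhead:

```python
GIB = 2**30  # bytes per GiB

def expected_file_size_gib(bpw: float, n_params: float) -> float:
    """Approximate GGUF file size from average bits per weight.

    BPW already folds in quantization block overhead (per-block scales
    and mins); GGUF header/metadata is ignored here, which is small.
    """
    return n_params * bpw / 8 / GIB

# Approximate parameter count for an 8B-class model (assumption, for illustration).
n_params = 8.0e9

for name, bpw in [
    ("q6_k-mixed-v1", 7.13),
    ("q6_k-mixed-v2", 7.50),
    ("q8_0-mixed-v1", 8.01),
]:
    print(f"{name}: ~{expected_file_size_gib(bpw, n_params):.2f} GiB")
```

A lower BPW trades some precision for a smaller file and lower memory use at inference time, which is the point of the three mixes listed above.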