Update README.md
#2
by lly0571 · opened

README.md CHANGED
```diff
@@ -20,7 +20,7 @@ Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we rele
 - **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
 - **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
 
-**This repo contains the AWQ-quantized 4-bit instruction-tuned
+**This repo contains the AWQ-quantized 4-bit instruction-tuned 14B Qwen2.5 model**, which has the following features:
 - Type: Causal Language Models
 - Training Stage: Pretraining & Post-training
 - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
```
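For reference, the 4-bit AWQ checkpoint described in the updated line loads through the standard transformers causal-LM API. The sketch below is illustrative only: the repo id `Qwen/Qwen2.5-14B-Instruct-AWQ` and the example prompt are assumptions, and it presumes `autoawq` and a CUDA-capable GPU are available.

```python
# Minimal sketch: loading the AWQ-quantized instruct model with transformers.
# Assumptions: repo id "Qwen/Qwen2.5-14B-Instruct-AWQ", autoawq installed, GPU available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-14B-Instruct-AWQ"  # assumed repo id

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # place layers on the available GPU(s)
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a chat prompt with the model's chat template, then generate.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
# Strip the prompt tokens before decoding the reply.
response = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```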