---
license: creativeml-openrail-m
datasets:
- prithivMLmods/Math-IIO-68K-Mini
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- safetensors
- qwen2.5
- 7B
- Instruct
- Math
- CoT
- one-shot
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Math-IIO-7B-Instruct-GGUF

This is a quantized version of [prithivMLmods/Math-IIO-7B-Instruct](https://huggingface.co/prithivMLmods/Math-IIO-7B-Instruct) created using llama.cpp.
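A quant from this repo can be run locally with `llama-cpp-python`, the Python bindings for llama.cpp. The sketch below assumes you have installed the package (`pip install llama-cpp-python`) and downloaded one of the GGUF files; the filename shown is hypothetical, so substitute whichever quant you actually fetched.

```python
# Sketch: running one of the GGUF quants via llama-cpp-python.
# The filename below is hypothetical -- substitute the .gguf file
# you downloaded from this repo.
GGUF_PATH = "Math-IIO-7B-Instruct.Q4_K_M.gguf"  # hypothetical quant filename
RUN_INFERENCE = False  # set True once the file above exists locally

if RUN_INFERENCE:
    from llama_cpp import Llama

    # Load the quantized model with a 4k-token context window.
    llm = Llama(model_path=GGUF_PATH, n_ctx=4096)
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Solve 12 * 17 step by step."}],
        max_tokens=256,
    )
    print(out["choices"][0]["message"]["content"])
```

Smaller quants trade answer quality for memory; for step-by-step math, a mid-range quant (e.g. Q4 or Q5 variants, if present in the repo) is a common starting point.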
# Original Model Card

![aaa.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/faLfR-doaWP_BLUkOQrbq.png)

### **Math IIO 7B Instruct**

**Math IIO 7B Instruct** is a fine-tuned language model based on the **Qwen2.5-7B-Instruct** architecture. It has been trained specifically for single-shot mathematical reasoning and instruction-following tasks, making it a reliable choice for educational, analytical, and problem-solving applications.
### **Key Features:**

1. **Math-Optimized Capabilities:**
   Designed to handle complex mathematical problems, step-by-step calculations, and reasoning tasks.

2. **Instruction-Tuned:**
   Fine-tuned for close adherence to structured queries and task-oriented prompts, producing clear and concise outputs.

3. **Large Vocabulary:**
   Ships with an extensive tokenizer configuration and custom added tokens for precise mathematical notation support.
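Since the model inherits Qwen2.5's chat tokenizer, prompts follow the ChatML layout with `<|im_start|>`/`<|im_end|>` markers. In practice you would let `tokenizer.apply_chat_template` build this string for you; the sketch below just makes the format visible.

```python
# Illustration of the ChatML prompt layout used by Qwen2.5-style chat
# templates. Normally tokenizer.apply_chat_template produces this string;
# building it by hand here only demonstrates the structure.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # generation continues from here
    )

prompt = chatml_prompt("You are a careful math tutor.", "What is 7 * 8?")
print(prompt)
```

The trailing `<|im_start|>assistant\n` is what cues the model to begin its reply; the `<|im_end|>` token doubles as the stop token during generation.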
### **Model Files (original repository):**

| File Name | Size | Description | Upload Status |
|------------------------------------|------------|-----------------------------------------------|----------------|
| `.gitattributes` | 1.57 kB | Git attributes configuration file | Uploaded |
| `README.md` | 263 Bytes | README file with minimal details | Updated |
| `added_tokens.json` | 657 Bytes | Custom added tokens for tokenizer | Uploaded |
| `config.json` | 861 Bytes | Model configuration file | Uploaded |
| `generation_config.json` | 281 Bytes | Configuration for text generation settings | Uploaded |
| `merges.txt` | 1.82 MB | Merge rules for byte-pair encoding tokenizer | Uploaded |
| `pytorch_model-00001-of-00004.bin` | 4.88 GB | First part of model weights (PyTorch) | Uploaded (LFS) |
| `pytorch_model-00002-of-00004.bin` | 4.93 GB | Second part of model weights (PyTorch) | Uploaded (LFS) |
| `pytorch_model-00003-of-00004.bin` | 4.33 GB | Third part of model weights (PyTorch) | Uploaded (LFS) |
| `pytorch_model-00004-of-00004.bin` | 1.09 GB | Fourth part of model weights (PyTorch) | Uploaded (LFS) |
| `pytorch_model.bin.index.json` | 28.1 kB | Index JSON file for model weights | Uploaded |
| `special_tokens_map.json` | 644 Bytes | Map of special tokens used by the tokenizer | Uploaded |
| `tokenizer.json` | 11.4 MB | Tokenizer settings and vocabulary | Uploaded (LFS) |
| `tokenizer_config.json` | 7.73 kB | Tokenizer configuration | Uploaded |
| `vocab.json` | 2.78 MB | Vocabulary for tokenizer | Uploaded |
### **Training Details:**
- **Base Model:** [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
- **Dataset:** Trained on [prithivMLmods/Math-IIO-68K-Mini](https://huggingface.co/datasets/prithivMLmods/Math-IIO-68K-Mini), a curated dataset of 68.8k high-quality examples focusing on mathematical instructions, equations, and logic-based queries.
### **Capabilities:**
- **Problem-Solving:** Solves mathematical problems ranging from basic arithmetic to advanced calculus and linear algebra.
- **Educational Use:** Explains solutions step by step, making it a valuable teaching assistant.
- **Analysis & Reasoning:** Handles logical reasoning tasks and computational queries effectively.
### **How to Use:**
1. Download all model files, making sure the model weights and tokenizer configurations are included.
2. Load the model in your Python environment with PyTorch or Hugging Face Transformers.
3. Use the provided configurations (`config.json` and `generation_config.json`) for inference.
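The steps above can be sketched with Hugging Face Transformers. This loads the full-precision weights (the GGUF files in this repo are for llama.cpp instead); the prompt text is an arbitrary example, and the guard flag only keeps the sketch importable without downloading ~15 GB of weights.

```python
# Minimal loading sketch for the full-precision model with Transformers.
# Flip RUN_INFERENCE to True locally to actually download and generate.
MODEL_ID = "prithivMLmods/Math-IIO-7B-Instruct"
RUN_INFERENCE = False

if RUN_INFERENCE:
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )

    messages = [
        {"role": "system", "content": "You are a helpful math assistant."},
        {"role": "user", "content": "Differentiate x**3 + 2*x step by step."},
    ]
    # apply_chat_template builds the ChatML prompt and tokenizes it.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

`generation_config.json` from the repo is picked up automatically by `from_pretrained`, so sampling defaults match what the authors shipped.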