Added links to full context YaRN-enabled GGUFs
README.md CHANGED
@@ -32,6 +32,8 @@ Quantization was done with an importance matrix that was trained for ~1M tokens
Fill-in-Middle token metadata has been added, see [example](#simple-llama-cpp-python-example-fill-in-middle-code).

+**Update January 6th 2025**: Added links to full context YaRN-enabled GGUFs (using [GGUF Editor](https://huggingface.co/spaces/CISCai/gguf-editor)).
+
<!-- description end -->

@@ -85,17 +87,17 @@ Refer to the Provided Files table below to see what files use which methods, and
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
-| [Qwen2.5-Coder-7B-Instruct.IQ1_S.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ1_S.gguf) | IQ1_S | 1 | 1.8 GB| 2.0 GB | smallest, significant quality loss |
-| [Qwen2.5-Coder-7B-Instruct.IQ1_M.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ1_M.gguf) | IQ1_M | 1 | 1.9 GB| 2.1 GB | very small, significant quality loss |
-| [Qwen2.5-Coder-7B-Instruct.IQ2_XXS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ2_XXS.gguf) | IQ2_XXS | 2 | 2.1 GB| 2.3 GB | very small, high quality loss |
-| [Qwen2.5-Coder-7B-Instruct.IQ2_XS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ2_XS.gguf) | IQ2_XS | 2 | 2.3 GB| 2.5 GB | very small, high quality loss |
-| [Qwen2.5-Coder-7B-Instruct.IQ2_S.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ2_S.gguf) | IQ2_S | 2 | 2.4 GB| 2.6 GB | small, substantial quality loss |
-| [Qwen2.5-Coder-7B-Instruct.IQ2_M.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ2_M.gguf) | IQ2_M | 2 | 2.6 GB| 2.8 GB | small, greater quality loss |
-| [Qwen2.5-Coder-7B-Instruct.IQ3_XXS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ3_XXS.gguf) | IQ3_XXS | 3 | 2.9 GB| 3.1 GB | very small, high quality loss |
-| [Qwen2.5-Coder-7B-Instruct.IQ3_XS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ3_XS.gguf) | IQ3_XS | 3 | 3.1 GB| 3.3 GB | small, substantial quality loss |
-| [Qwen2.5-Coder-7B-Instruct.IQ3_S.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ3_S.gguf) | IQ3_S | 3 | 3.3 GB| 3.4 GB | small, greater quality loss |
-| [Qwen2.5-Coder-7B-Instruct.IQ3_M.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ3_M.gguf) | IQ3_M | 3 | 3.3 GB| 3.5 GB | medium, balanced quality - recommended |
-| [Qwen2.5-Coder-7B-Instruct.IQ4_XS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4 | 3.9 GB| 4.1 GB | small, substantial quality loss |
+| [Qwen2.5-Coder-7B-Instruct.IQ1_S.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ1_S.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/Qwen2.5-Coder-7B-Instruct.IQ1_S.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ1_S | 1 | 1.8 GB| 2.0 GB | smallest, significant quality loss |
+| [Qwen2.5-Coder-7B-Instruct.IQ1_M.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ1_M.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/Qwen2.5-Coder-7B-Instruct.IQ1_M.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ1_M | 1 | 1.9 GB| 2.1 GB | very small, significant quality loss |
+| [Qwen2.5-Coder-7B-Instruct.IQ2_XXS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ2_XXS.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/Qwen2.5-Coder-7B-Instruct.IQ2_XXS.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ2_XXS | 2 | 2.1 GB| 2.3 GB | very small, high quality loss |
+| [Qwen2.5-Coder-7B-Instruct.IQ2_XS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ2_XS.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/Qwen2.5-Coder-7B-Instruct.IQ2_XS.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ2_XS | 2 | 2.3 GB| 2.5 GB | very small, high quality loss |
+| [Qwen2.5-Coder-7B-Instruct.IQ2_S.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ2_S.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/Qwen2.5-Coder-7B-Instruct.IQ2_S.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ2_S | 2 | 2.4 GB| 2.6 GB | small, substantial quality loss |
+| [Qwen2.5-Coder-7B-Instruct.IQ2_M.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ2_M.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/Qwen2.5-Coder-7B-Instruct.IQ2_M.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ2_M | 2 | 2.6 GB| 2.8 GB | small, greater quality loss |
+| [Qwen2.5-Coder-7B-Instruct.IQ3_XXS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ3_XXS.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/Qwen2.5-Coder-7B-Instruct.IQ3_XXS.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ3_XXS | 3 | 2.9 GB| 3.1 GB | very small, high quality loss |
+| [Qwen2.5-Coder-7B-Instruct.IQ3_XS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ3_XS.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/Qwen2.5-Coder-7B-Instruct.IQ3_XS.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ3_XS | 3 | 3.1 GB| 3.3 GB | small, substantial quality loss |
+| [Qwen2.5-Coder-7B-Instruct.IQ3_S.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ3_S.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/Qwen2.5-Coder-7B-Instruct.IQ3_S.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ3_S | 3 | 3.3 GB| 3.4 GB | small, greater quality loss |
+| [Qwen2.5-Coder-7B-Instruct.IQ3_M.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ3_M.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/Qwen2.5-Coder-7B-Instruct.IQ3_M.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ3_M | 3 | 3.3 GB| 3.5 GB | medium, balanced quality - recommended |
+| [Qwen2.5-Coder-7B-Instruct.IQ4_XS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.IQ4_XS.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/Qwen2.5-Coder-7B-Instruct.IQ4_XS.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ4_XS | 4 | 3.9 GB| 4.1 GB | small, substantial quality loss |
Generated importance matrix file: [Qwen2.5-Coder-7B-Instruct.imatrix.dat](https://huggingface.co/CISCai/Qwen2.5-Coder-7B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct.imatrix.dat)
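Note on the "with YaRN" links: judging by their query parameters, they serve the same quantized files with four extra GGUF metadata entries (`qwen2.context_length = 131072`, `qwen2.rope.scaling.type = "yarn"`, `qwen2.rope.scaling.factor = 4`, `qwen2.rope.scaling.original_context_length = 32768`); the weights themselves are unchanged. If you already have one of the plain GGUFs, a similar full-context setup can likely be achieved by overriding RoPE scaling at load time instead. The sketch below uses llama-cpp-python; the file name and parameter values are assumptions based on the metadata in those links and a recent llama-cpp-python release, not something taken from this card.

```python
import llama_cpp

# Minimal sketch: enable YaRN at runtime on one of the plain GGUFs above instead of
# downloading the metadata-edited "with YaRN" variant. The values mirror the metadata
# the YaRN links add: native context 32768, scaling factor 4 (rope_freq_scale = 1/4).
# n_ctx is kept below the full 131072 here to limit KV-cache memory; raise it as RAM allows.
llm = llama_cpp.Llama(
    model_path="Qwen2.5-Coder-7B-Instruct.IQ4_XS.gguf",  # any of the quants above
    n_ctx=65536,
    rope_scaling_type=llama_cpp.LLAMA_ROPE_SCALING_TYPE_YARN,
    rope_freq_scale=0.25,  # 1 / qwen2.rope.scaling.factor
    yarn_orig_ctx=32768,   # qwen2.rope.scaling.original_context_length
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that merges two sorted lists."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

With the llama.cpp CLI, the roughly equivalent overrides would be `--rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768` together with a larger `-c`/`--ctx-size`, at the cost of a correspondingly larger KV cache.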