Triangle104 committed · verified · Commit f607000 · 1 parent: 5ce7fa5

Update README.md

Files changed (1): README.md (+10 −0)
This model was converted to GGUF format from [`nvidia/OpenCodeReasoning-Nemotron-7B`](https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-7B) for more details on the model.
---
OpenCodeReasoning-Nemotron-7B is a large language model (LLM) derived from Qwen2.5-7B-Instruct (the reference model). It is a reasoning model post-trained for code generation and supports a context length of 32K tokens.

This model is ready for commercial and non-commercial use.

---
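The description above notes a 32K-token context window. As a minimal sketch of loading the GGUF locally, assuming the `llama-cpp-python` bindings are installed and a quantized file has already been downloaded (the filename below is illustrative, not a file shipped with this repo):

```python
# Sketch only: requires `pip install llama-cpp-python` and a local GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="opencodereasoning-nemotron-7b-q4_k_m.gguf",  # assumed local quant file
    n_ctx=32768,  # request the full 32K-token context the model card advertises
)

output = llm.create_completion(
    "Write a Python function that checks whether a string is a palindrome.",
    max_tokens=512,
)
print(output["choices"][0]["text"])
```

Note that `n_ctx` defaults to a much smaller value in llama.cpp, so it must be set explicitly to use the full context.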
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
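GGUF-my-repo READMEs typically continue with the brew install and a `llama-cli` invocation. As a sketch, assuming the usual GGUF-my-repo repo/file naming for this conversion (the repo and filename below are assumptions taken from that convention, not verified from this diff):

```shell
brew install llama.cpp

# Run the model via the CLI, fetching the GGUF from the Hugging Face Hub.
# Repo and filename are assumed; adjust them to the actual quant you downloaded.
llama-cli --hf-repo Triangle104/OpenCodeReasoning-Nemotron-7B-Q4_K_M-GGUF \
  --hf-file opencodereasoning-nemotron-7b-q4_k_m.gguf \
  -p "Write a function that merges two sorted lists."
```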