leokeba committed on
Commit f6d4c16 · verified · 1 Parent(s): ed3e0e5

Add model card

Files changed (1)
  1. README.md +63 -0
README.md ADDED
@@ -0,0 +1,63 @@
---
license: apache-2.0
base_model: unknown
tags:
- model-garden
- fine-tuned
- language-model
---

# qwen-7b-arval-blocks-masked-01

A model fine-tuned with Model Garden. Base model: unknown.

## Model Details

- **Base Model**: unknown
- **Fine-tuned with**: [Model Garden](https://github.com/leokeba/model-garden)
- **Training Date**: unknown
- **Model Type**: unknown

## Usage

### With Model Garden

```bash
# Serve the model
uv run model-garden serve-model --model-path terra-cognita-ai/qwen-7b-arval-blocks-masked-01

# Generate text
uv run model-garden inference-generate \
  --model-path terra-cognita-ai/qwen-7b-arval-blocks-masked-01 \
  --prompt "Your prompt here"
```
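
The `serve-model` command presumably exposes the model over HTTP. The sketch below assumes an OpenAI-compatible completions route on `localhost:8000`; the route, port, and payload shape are assumptions not confirmed by this card, so check the Model Garden documentation for the actual interface.

```python
import requests

# Hypothetical endpoint: assumes serve-model exposes an OpenAI-compatible
# /v1/completions route on localhost:8000 (not confirmed by this card).
response = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "terra-cognita-ai/qwen-7b-arval-blocks-masked-01",
        "prompt": "Your prompt here",
        "max_tokens": 128,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```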

### With Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("terra-cognita-ai/qwen-7b-arval-blocks-masked-01")
tokenizer = AutoTokenizer.from_pretrained("terra-cognita-ai/qwen-7b-arval-blocks-masked-01")

# Generate text (max_new_tokens is illustrative; the library default is very short)
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
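
For a 7B-class checkpoint it is usually worth loading in half precision on a GPU. The sketch below is a minimal variant assuming a bf16-capable GPU and that the tokenizer ships a chat template (typical for Qwen-family instruct models, but not confirmed by this card); adjust the dtype and prompting to match the actual base model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "terra-cognita-ai/qwen-7b-arval-blocks-masked-01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a bf16-capable GPU
    device_map="auto",
)

# Assumption: the tokenizer defines a chat template; if not, pass the raw
# prompt string to the tokenizer as in the example above.
messages = [{"role": "user", "content": "Your prompt here"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```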

## Training Details

This model was fine-tuned using Model Garden with the following configuration:

- **Dataset**: custom
- **Training Steps**: unknown
- **LoRA Rank**: unknown (see the adapter-loading sketch below)

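Since the card reports a LoRA rank, the fine-tune may have been trained with LoRA. If the repository ships a LoRA adapter rather than merged weights (not confirmed here), it could be loaded with PEFT roughly as follows; the base model ID is a placeholder because the card lists the base model as unknown.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: the card does not name the base model, so substitute the real ID.
base_id = "BASE_MODEL_ID"
adapter_id = "terra-cognita-ai/qwen-7b-arval-blocks-masked-01"

base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Optionally merge the adapter into the base weights for faster inference
model = model.merge_and_unload()
```
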
## Carbon Footprint

Training emissions: unknown gCO2eq

---

*Generated with [Model Garden](https://github.com/leokeba/model-garden)*