Model save
Files changed:
- README.md +12 -5
- model.safetensors +1 -1
- tokenizer_config.json +2 -2
- training_args.bin +2 -2
README.md CHANGED

@@ -1,5 +1,8 @@
 ---
+base_model: cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr
 tags:
+- trl
+- dpo
 - generated_from_trainer
 model-index:
 - name: EleutherAI_pythia-1b
@@ -9,10 +12,10 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/edbeeching/huggingface/runs/
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/edbeeching/huggingface/runs/y5wg35ok)
 # EleutherAI_pythia-1b
 
-This model
+This model is a fine-tuned version of [cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr](https://huggingface.co/cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr) on an unknown dataset.
 
 ## Model description
 
@@ -32,17 +35,21 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 3e-06
-- train_batch_size:
+- train_batch_size: 4
 - eval_batch_size: 8
 - seed: 42
 - distributed_type: multi-GPU
 - num_devices: 8
-- gradient_accumulation_steps:
+- gradient_accumulation_steps: 16
 - total_train_batch_size: 512
 - total_eval_batch_size: 64
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs:
+- num_epochs: 1.0
+
+### Training results
+
+
 
 ### Framework versions
 
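The batch-size entries in the updated card are mutually consistent: 4 samples per device × 8 devices × 16 gradient-accumulation steps = 512. A minimal sketch of that arithmetic, using only the values reported in the card:

```python
# Effective batch sizes implied by the hyperparameters in the card.
per_device_train_batch_size = 4
num_devices = 8
gradient_accumulation_steps = 16

# Training accumulates gradients across steps and devices: 4 * 8 * 16 = 512.
total_train_batch_size = (
    per_device_train_batch_size * num_devices * gradient_accumulation_steps
)
assert total_train_batch_size == 512

# Evaluation does not accumulate gradients: 8 per device * 8 devices = 64.
per_device_eval_batch_size = 8
assert per_device_eval_batch_size * num_devices == 64
```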
model.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:7034c7913a51c91b12085dc1e274f24714cc3b08f47f44a5bb63615aa80e519e
 size 2023586384
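Both binary files in this commit are Git LFS pointers: a spec version line, a `sha256` oid, and the payload size in bytes. A minimal sketch, assuming the weights have already been downloaded to a local `model.safetensors` (the path is hypothetical), of checking the download against the pointer:

```python
import hashlib

# Values taken from the new LFS pointer above.
EXPECTED_OID = "7034c7913a51c91b12085dc1e274f24714cc3b08f47f44a5bb63615aa80e519e"
EXPECTED_SIZE = 2023586384

# Hash in 1 MiB chunks to avoid holding the ~2 GB file in memory.
digest = hashlib.sha256()
size = 0
with open("model.safetensors", "rb") as f:  # hypothetical local copy
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)
        size += len(chunk)

assert size == EXPECTED_SIZE, f"size mismatch: {size} != {EXPECTED_SIZE}"
assert digest.hexdigest() == EXPECTED_OID, "sha256 mismatch"
```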
tokenizer_config.json CHANGED

@@ -213,12 +213,12 @@
     }
   },
   "bos_token": "<|endoftext|>",
-  "chat_template": "{% for message in messages %}{{' ' + message['content']}}{% endfor %}",
+  "chat_template": "{% for message in messages %}{{message['role'] + ': ' + message['content'] + '\n\n'}}{% endfor %}{{ eos_token }}",
   "clean_up_tokenization_spaces": true,
   "eos_token": "<|endoftext|>",
   "model_max_length": 1000000000000000019884624838656,
   "pad_token": "[PAD]",
-  "padding_side": "
+  "padding_side": "right",
   "tokenizer_class": "GPTNeoXTokenizer",
   "unk_token": "<|endoftext|>"
 }
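The new `chat_template` prefixes each message with its role and ends the transcript with the EOS token (`<|endoftext|>` for this GPT-NeoX tokenizer). A minimal sketch of what it renders, reproducing the template with plain jinja2 rather than loading the tokenizer; the example messages are invented:

```python
from jinja2 import Template

# The new chat template from tokenizer_config.json.
template = Template(
    "{% for message in messages %}"
    "{{message['role'] + ': ' + message['content'] + '\n\n'}}"
    "{% endfor %}{{ eos_token }}"
)

messages = [
    {"role": "user", "content": "TL;DR this post for me."},
    {"role": "assistant", "content": "The post argues X, briefly."},
]

print(template.render(messages=messages, eos_token="<|endoftext|>"))
# user: TL;DR this post for me.
#
# assistant: The post argues X, briefly.
#
# <|endoftext|>
```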
training_args.bin CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:f2cc9c5ea667fe384f1b50e0b7d9b1b2b11958bc59c4ae95f1930d0abf5a92c6
+size 7096
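`training_args.bin` is the pickled `TrainingArguments` object that the `Trainer` writes next to its checkpoints, which is why it is only 7096 bytes. A minimal sketch of inspecting a local copy; `weights_only=False` is required on recent PyTorch because the file is an ordinary pickle, so only load copies you trust:

```python
import torch

# Unpickle the saved transformers.TrainingArguments object.
args = torch.load("training_args.bin", weights_only=False)

# These should match the hyperparameters reported in the model card.
print(args.learning_rate)                # expected: 3e-06
print(args.per_device_train_batch_size)  # expected: 4
print(args.gradient_accumulation_steps)  # expected: 16
```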