Training in progress, step 100
Files changed:
- README.md (+4 -3)
- adapter_model.safetensors (+1 -1)
README.md CHANGED

```diff
@@ -74,7 +74,7 @@ hub_model_id: EmilRyd/gpt-oss-20b-olympiads-sonnet-45-malign-prompt-benign-answe
 
 gradient_accumulation_steps: 2
 micro_batch_size: 5
-num_epochs:
+num_epochs: 100
 
 optimizer: adamw_torch_8bit
 lr_scheduler: constant_with_warmup
@@ -102,7 +102,7 @@ eot_tokens:
 
 </details><br>
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/mats-low-stakes/low-stakes-control-sft/runs/
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/mats-low-stakes/low-stakes-control-sft/runs/edfzbi74)
 # gpt-oss-20b-olympiads-sonnet-45-malign-prompt-benign-answer-reasoning-10
 
 This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on an unknown dataset.
@@ -132,7 +132,8 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 10
 - optimizer: Use adamw_torch_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: constant_with_warmup
--
+- lr_scheduler_warmup_steps: 2
+- training_steps: 100
 
 ### Framework versions
 
```
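The first hunk sets `num_epochs: 100`, and the third hunk's `total_train_batch_size: 10` is simply the product of the two batching knobs in the config. A minimal sketch of that arithmetic, assuming a single device (the diff does not state the device count):

```python
# Minimal sketch (not from the training repo): how the README's
# total_train_batch_size of 10 follows from the config hunk above.
micro_batch_size = 5             # from the config
gradient_accumulation_steps = 2  # from the config
world_size = 1                   # assumption: single device; not stated in the diff

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * world_size
print(total_train_batch_size)    # 10, matching the hyperparameter list
```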
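The badge edit fills in the run id (`edfzbi74`) for the Weights & Biases link; the URL encodes the entity, project, and run. For illustration, attaching to that run with the `wandb` client might look like the sketch below; `resume="allow"` is an assumption, not something the commit specifies:

```python
import wandb

run = wandb.init(
    entity="mats-low-stakes",          # from the badge URL
    project="low-stakes-control-sft",  # from the badge URL
    id="edfzbi74",                     # run id added in this commit
    resume="allow",                    # assumption: attach to the existing run
)
```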
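The two added hyperparameter lines pin down the schedule: a constant learning rate after 2 warmup steps, over 100 training steps, consistent with the commit message "Training in progress, step 100". A hedged sketch of that optimizer/scheduler pairing follows; plain `AdamW` stands in for the 8-bit variant the trainer resolves from `adamw_torch_8bit`, and the learning rate is a placeholder since the diff does not show it:

```python
import torch
from transformers import get_constant_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in for the PEFT-wrapped gpt-oss-20b
learning_rate = 2e-4           # placeholder: the configured LR is not in the diff

# betas and epsilon as listed in the README's hyperparameter section
optimizer = torch.optim.AdamW(
    model.parameters(), lr=learning_rate, betas=(0.9, 0.999), eps=1e-8
)
# constant_with_warmup: LR ramps for 2 steps, then stays flat for the
# remaining 98 of the 100 training steps
scheduler = get_constant_schedule_with_warmup(optimizer, num_warmup_steps=2)
```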
adapter_model.safetensors CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:99e8ca2c14010ffec0018624c5a084be7c11897bac0e81e8d10e8a30deeaf757
 size 63726760
```
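This change is a Git LFS pointer update: the pointer records the SHA-256 (`oid`) and byte size of the real file rather than its contents, and here only the oid changed while the adapter stayed 63726760 bytes. A sketch of verifying a downloaded copy against this commit's pointer, assuming the file sits in the working directory:

```python
import hashlib
import os

def lfs_oid(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 of the file contents, as recorded in a Git LFS pointer."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

path = "adapter_model.safetensors"  # assumes a local download of the file
assert os.path.getsize(path) == 63726760
assert lfs_oid(path) == "99e8ca2c14010ffec0018624c5a084be7c11897bac0e81e8d10e8a30deeaf757"
```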
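An `adapter_model.safetensors` of this size alongside a reference to a base model is the usual PEFT adapter layout, so loading the checkpoint would plausibly look like the sketch below. The repo id is assembled from the `hub_model_id` prefix and the README title, and the PEFT layout itself is an assumption (the commit only shows the adapter file):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("openai/gpt-oss-20b")
model = PeftModel.from_pretrained(
    base,
    "EmilRyd/gpt-oss-20b-olympiads-sonnet-45-malign-prompt-benign-answer-reasoning-10",
)
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
```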