Version_concise_ASAP_FineTuningBERT_AugV12_k3_task1_organization_k3_k3_fold3

This model is a fine-tuned version of bert-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8049
  • QWK (quadratic weighted kappa): 0.4890
  • MSE (mean squared error): 0.8059
  • RMSE (root mean squared error): 0.8977
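
For reference, these metrics can be computed along the following lines. This is a minimal sketch using scikit-learn and NumPy with made-up integer scores; it is not the exact evaluation code behind the numbers above.

```python
# Minimal sketch of the reported metrics (hypothetical integer essay scores);
# not the exact evaluation code used to produce the numbers above.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([2, 3, 1, 4, 2])  # hypothetical gold scores
y_pred = np.array([2, 2, 1, 3, 3])  # hypothetical (rounded) model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # quadratic weighted kappa
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
print(f"QWK={qwk:.4f}  MSE={mse:.4f}  RMSE={rmse:.4f}")
```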

Model description

More information needed

Intended uses & limitations

More information needed
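
No usage instructions are provided. As a starting point, the checkpoint can presumably be loaded with the standard transformers classes; the snippet below is a hedged sketch that assumes a sequence-classification/regression head on top of bert-base-uncased and uses the repository id shown on the model page. The label scale and post-processing are not documented here.

```python
# Hedged sketch for loading the checkpoint; the head type and label scale
# are assumptions, since the card does not document them.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "genki10/Version_concise_ASAP_FineTuningBERT_AugV12_k3_task1_organization_k3_k3_fold3"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "Example essay text to score."  # hypothetical input
inputs = tokenizer(text, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # raw outputs; interpretation depends on how the head was trained
```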

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an approximate TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 100
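
The list above maps roughly onto the following transformers TrainingArguments. This is a hedged reconstruction, not the original training script; anything the card does not state (output directory, warmup, weight decay, evaluation and early-stopping policy) is either omitted or marked as an assumption.

```python
# Hedged reconstruction of the listed hyperparameters; the actual training
# script is not provided with this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",              # hypothetical; not stated in the card
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    eval_strategy="epoch",         # assumption: the results table reports per-epoch evaluation
)
```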

Training results

| Training Loss | Epoch | Step | Validation Loss | QWK    | MSE     | RMSE   |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:------:|
| No log        | 1.0   | 4    | 10.4084         | 0.0    | 10.4062 | 3.2259 |
| No log        | 2.0   | 8    | 7.2285          | 0.0    | 7.2270  | 2.6883 |
| No log        | 3.0   | 12   | 3.7588          | 0.0137 | 3.7578  | 1.9385 |
| No log        | 4.0   | 16   | 2.2177          | 0.1520 | 2.2170  | 1.4890 |
| No log        | 5.0   | 20   | 1.3261          | 0.0266 | 1.3254  | 1.1512 |
| No log        | 6.0   | 24   | 0.9468          | 0.0535 | 0.9465  | 0.9729 |
| No log        | 7.0   | 28   | 2.2473          | 0.1576 | 2.2468  | 1.4989 |
| No log        | 8.0   | 32   | 1.7879          | 0.1676 | 1.7875  | 1.3370 |
| No log        | 9.0   | 36   | 0.8554          | 0.2787 | 0.8553  | 0.9248 |
| No log        | 10.0  | 40   | 1.1573          | 0.2254 | 1.1573  | 1.0758 |
| No log        | 11.0  | 44   | 0.7502          | 0.4458 | 0.7503  | 0.8662 |
| No log        | 12.0  | 48   | 1.6908          | 0.2336 | 1.6912  | 1.3005 |
| No log        | 13.0  | 52   | 1.5579          | 0.2761 | 1.5583  | 1.2483 |
| No log        | 14.0  | 56   | 0.5757          | 0.4869 | 0.5762  | 0.7591 |
| No log        | 15.0  | 60   | 0.9721          | 0.3730 | 0.9728  | 0.9863 |
| No log        | 16.0  | 64   | 0.5872          | 0.5466 | 0.5882  | 0.7669 |
| No log        | 17.0  | 68   | 0.6144          | 0.5580 | 0.6156  | 0.7846 |
| No log        | 18.0  | 72   | 0.8037          | 0.5061 | 0.8048  | 0.8971 |
| No log        | 19.0  | 76   | 0.7493          | 0.5220 | 0.7502  | 0.8662 |
| No log        | 20.0  | 80   | 0.8208          | 0.4970 | 0.8218  | 0.9066 |
| No log        | 21.0  | 84   | 0.7606          | 0.5427 | 0.7618  | 0.8728 |
| No log        | 22.0  | 88   | 0.9268          | 0.4634 | 0.9279  | 0.9633 |
| No log        | 23.0  | 92   | 0.7481          | 0.5301 | 0.7492  | 0.8656 |
| No log        | 24.0  | 96   | 1.1137          | 0.4053 | 1.1149  | 1.0559 |
| No log        | 25.0  | 100  | 1.0607          | 0.4350 | 1.0619  | 1.0305 |
| No log        | 26.0  | 104  | 0.9132          | 0.4943 | 0.9144  | 0.9562 |
| No log        | 27.0  | 108  | 1.0272          | 0.4531 | 1.0283  | 1.0140 |
| No log        | 28.0  | 112  | 1.0483          | 0.4520 | 1.0495  | 1.0245 |
| No log        | 29.0  | 116  | 0.7777          | 0.5350 | 0.7788  | 0.8825 |
| No log        | 30.0  | 120  | 1.0714          | 0.4099 | 1.0723  | 1.0355 |
| No log        | 31.0  | 124  | 0.7234          | 0.5320 | 0.7244  | 0.8511 |
| No log        | 32.0  | 128  | 1.0526          | 0.3898 | 1.0536  | 1.0264 |
| No log        | 33.0  | 132  | 0.7696          | 0.5145 | 0.7705  | 0.8778 |
| No log        | 34.0  | 136  | 0.9328          | 0.4471 | 0.9338  | 0.9663 |
| No log        | 35.0  | 140  | 0.7858          | 0.5028 | 0.7868  | 0.8870 |
| No log        | 36.0  | 144  | 0.8130          | 0.4934 | 0.8139  | 0.9022 |
| No log        | 37.0  | 148  | 0.8049          | 0.4890 | 0.8059  | 0.8977 |

Framework versions

  • Transformers 4.47.0
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0
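
The versions above can be checked against a local environment with a quick sanity script; this is a convenience sketch, not part of the original card.

```python
# Compare installed versions against those listed in this card.
import transformers, torch, datasets, tokenizers

expected = {
    "transformers": "4.47.0",
    "torch": "2.5.1+cu121",
    "datasets": "3.2.0",
    "tokenizers": "0.21.0",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    print(f"{name}: card lists {want}, installed {installed[name]}")
```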