Update README and add additional benchmarking logs
- README.md +184 -18
- logs_modchembert_classification_ModChemBERT-MLM/modchembert_deepchem_splits_run_antimalarial_epochs100_batch_size32_20250925_224116.log +345 -0
- logs_modchembert_classification_ModChemBERT-MLM/modchembert_deepchem_splits_run_cocrystal_epochs100_batch_size32_20250926_032625.log +343 -0
- logs_modchembert_classification_ModChemBERT-MLM/modchembert_deepchem_splits_run_covid19_epochs100_batch_size32_20250926_005642.log +333 -0
- logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_adme_microsom_stab_h_epochs100_batch_size32_20250926_053743.log +369 -0
- logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_adme_microsom_stab_r_epochs100_batch_size32_20250926_075155.log +327 -0
- logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_adme_permeability_epochs100_batch_size32_20250926_091023.log +419 -0
- logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_adme_ppb_h_epochs100_batch_size32_20250926_103732.log +333 -0
- logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_adme_ppb_r_epochs100_batch_size32_20250926_104945.log +389 -0
- logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_adme_solubility_epochs100_batch_size32_20250926_110234.log +357 -0
- logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_astrazeneca_cl_epochs100_batch_size32_20250926_121719.log +327 -0
- logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_astrazeneca_logd74_epochs100_batch_size32_20250926_131932.log +391 -0
- logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_astrazeneca_ppb_epochs100_batch_size32_20250926_152940.log +337 -0
- logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_astrazeneca_solubility_epochs100_batch_size32_20250926_155557.log +357 -0
README.md
CHANGED
  metrics:
  - type: rmse
    value: 0.7147
+ - task:
+     type: text-classification
+     name: Classification (ROC AUC)
+   dataset:
+     name: Antimalarial
+     type: Antimalarial
+   metrics:
+   - type: roc_auc
+     value: 0.8707
+ - task:
+     type: text-classification
+     name: Classification (ROC AUC)
+   dataset:
+     name: Cocrystal
+     type: Cocrystal
+   metrics:
+   - type: roc_auc
+     value: 0.7967
+ - task:
+     type: text-classification
+     name: Classification (ROC AUC)
+   dataset:
+     name: COVID19
+     type: COVID19
+   metrics:
+   - type: roc_auc
+     value: 0.8106
+ - task:
+     type: regression
+     name: Regression (RMSE)
+   dataset:
+     name: ADME microsom stab human
+     type: ADME
+   metrics:
+   - type: rmse
+     value: 0.4489
+ - task:
+     type: regression
+     name: Regression (RMSE)
+   dataset:
+     name: ADME microsom stab rat
+     type: ADME
+   metrics:
+   - type: rmse
+     value: 0.4685
+ - task:
+     type: regression
+     name: Regression (RMSE)
+   dataset:
+     name: ADME permeability
+     type: ADME
+   metrics:
+   - type: rmse
+     value: 0.5423
+ - task:
+     type: regression
+     name: Regression (RMSE)
+   dataset:
+     name: ADME ppb human
+     type: ADME
+   metrics:
+   - type: rmse
+     value: 0.8041
+ - task:
+     type: regression
+     name: Regression (RMSE)
+   dataset:
+     name: ADME ppb rat
+     type: ADME
+   metrics:
+   - type: rmse
+     value: 0.7849
+ - task:
+     type: regression
+     name: Regression (RMSE)
+   dataset:
+     name: ADME solubility
+     type: ADME
+   metrics:
+   - type: rmse
+     value: 0.5191
+ - task:
+     type: regression
+     name: Regression (RMSE)
+   dataset:
+     name: AstraZeneca CL
+     type: AstraZeneca
+   metrics:
+   - type: rmse
+     value: 0.4812
+ - task:
+     type: regression
+     name: Regression (RMSE)
+   dataset:
+     name: AstraZeneca LogD74
+     type: AstraZeneca
+   metrics:
+   - type: rmse
+     value: 0.8204
+ - task:
+     type: regression
+     name: Regression (RMSE)
+   dataset:
+     name: AstraZeneca PPB
+     type: AstraZeneca
+   metrics:
+   - type: rmse
+     value: 0.1365
+ - task:
+     type: regression
+     name: Regression (RMSE)
+   dataset:
+     name: AstraZeneca Solubility
+     type: AstraZeneca
+   metrics:
+   - type: rmse
+     value: 0.9614
---

# ModChemBERT: ModernBERT as a Chemical Language Model

- Encoder Layers: 22
- Attention heads: 12
- Max sequence length: 256 tokens (MLM primarily trained with 128-token sequences)
+ - Tokenizer: BPE tokenizer using [MolFormer's vocab](https://github.com/emapco/ModChemBERT/blob/main/modchembert/tokenizers/molformer/vocab.json) (2362 tokens)

## Pooling (Classifier / Regressor Head)
+ Kallergis et al. [1] demonstrated that the CLM embedding method prior to the prediction head was the strongest contributor to downstream performance among evaluated hyperparameters.

Behrendt et al. [2] noted that the last few layers contain task-specific information and that pooling methods leveraging information from multiple layers can enhance model performance. Their results further demonstrated that the `max_seq_mha` pooling method was particularly effective in low-data regimes, which is often the case for molecular property prediction tasks.
- `mean_sum`: Mean over all layers then sum tokens
- `max_seq_mean`: Max over last k layers then mean tokens

+ Note: ModChemBERT's `max_seq_mha` differs from MaxPoolBERT [2]. MaxPoolBERT uses PyTorch `nn.MultiheadAttention`, whereas ModChemBERT's `ModChemBertPoolingAttention` adapts ModernBERT's `ModernBertAttention`.
+ On ChemBERTa-3 benchmarks this variant produced stronger validation metrics and avoided the training instabilities (sporadic zero / NaN losses and gradient norms) seen with `nn.MultiheadAttention`. Training instability with ModernBERT has been reported in the past ([discussion 1](https://huggingface.co/answerdotai/ModernBERT-base/discussions/59) and [discussion 2](https://huggingface.co/answerdotai/ModernBERT-base/discussions/63)).
+
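As a rough illustration of the layer-and-token pooling idea described above, here is a minimal sketch of a `max_seq_mean`-style pooler (max over the last k layers, then mean over unmasked tokens). This is illustrative NumPy code, not ModChemBERT's actual module:

```python
import numpy as np

def max_seq_mean(hidden_states, attention_mask, k=3):
    """Max over the last k encoder layers, then mean over unmasked tokens.

    hidden_states: list of [batch, seq, hidden] arrays, one per layer.
    attention_mask: [batch, seq] array with 1 for real tokens, 0 for padding.
    """
    stacked = np.stack(hidden_states[-k:])                 # [k, batch, seq, hidden]
    maxed = stacked.max(axis=0)                            # elementwise max over layers
    mask = attention_mask[..., None].astype(maxed.dtype)   # [batch, seq, 1]
    # Mean over valid (unmasked) token positions only
    return (maxed * mask).sum(axis=1) / np.maximum(mask.sum(axis=1), 1)

# Toy example: 4 layers of hidden states for a batch of 2, seq len 5, hidden 8
rng = np.random.default_rng(0)
layers = [rng.normal(size=(2, 5, 8)) for _ in range(4)]
mask = np.ones((2, 5))
pooled = max_seq_mean(layers, mask, k=3)
print(pooled.shape)  # (2, 8)
```

The other strategies differ only in which reduction (mean/max/sum) is applied over the layer and token axes.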
## Training Pipeline
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/656892962693fa22e18b5331/bxNbpgMkU8m60ypyEJoWQ.png" alt="ModChemBERT Training Pipeline" width="650"/>
The merging step is inspired by ModernBERT [4], JaColBERTv2.5 [5], and Llama 3.1 [6], whose results show that model merging can enhance generalization or performance while mitigating overfitting to any single fine-tune or annealing checkpoint.
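The simplest form of such checkpoint merging is uniform weight averaging. A minimal sketch under that assumption (the exact merge recipe used for the ModChemBERT checkpoints is not specified here; toy lists stand in for parameter tensors):

```python
def average_checkpoints(state_dicts):
    """Uniformly average parameter values across several checkpoints."""
    keys = state_dicts[0].keys()
    return {
        k: [sum(vals) / len(vals) for vals in zip(*(sd[k] for sd in state_dicts))]
        for k in keys
    }

# Toy "checkpoints": flat lists standing in for parameter tensors
sd1 = {"w": [1.0, 3.0]}
sd2 = {"w": [3.0, 5.0]}
merged = average_checkpoints([sd1, sd2])
print(merged["w"])  # [2.0, 4.0]
```

In practice the same elementwise average would be applied to each tensor of a model `state_dict`, optionally with non-uniform weights per checkpoint.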

## Datasets
+ - Pretraining: [Derify/augmented_canonical_druglike_QED_Pfizer_15M](https://huggingface.co/datasets/Derify/augmented_canonical_druglike_QED_Pfizer_15M) (canonical_smiles column)
+ - Domain Adaptive Pretraining (DAPT) & Task Adaptive Fine-tuning (TAFT): ADME (6 tasks) + AstraZeneca (4 tasks) datasets, split with DA4MT's [3] Bemis-Murcko scaffold splitter (see [domain-adaptation-molecular-transformers](https://github.com/emapco/ModChemBERT/blob/main/domain-adaptation-molecular-transformers/da4mt/splitting.py))
+ - Benchmarking:
+   - ChemBERTa-3 [7] datasets:
+     - classification: BACE, BBBP, TOX21, HIV, SIDER, CLINTOX
+     - regression: ESOL, FREESOLV, LIPO, BACE, CLEARANCE
+   - Mswahili et al. [8] proposed additional datasets for benchmarking chemical language models:
+     - classification: Antimalarial [9], Cocrystal [10], COVID19 [11]
+   - DAPT/TAFT stage regression datasets:
+     - ADME [12]: adme_microsom_stab_h, adme_microsom_stab_r, adme_permeability, adme_ppb_h, adme_ppb_r, adme_solubility
+     - AstraZeneca: astrazeneca_CL, astrazeneca_LogD74, astrazeneca_PPB, astrazeneca_Solubility

## Benchmarking
+ Benchmarks were conducted with the ChemBERTa-3 framework. DeepChem scaffold splits were used for all datasets except Antimalarial, which used a random split. Each task was trained for 100 epochs, and results were averaged across 3 random seeds.
+
+ The complete hyperparameter configurations for these benchmarks are available in the repository: [ChemBERTa3 configs](https://github.com/emapco/ModChemBERT/tree/main/conf/chemberta3)
### Evaluation Methodology
+ - Classification Metric: ROC AUC
+ - Regression Metric: RMSE
- Aggregation: Mean ± standard deviation of the triplicate results.
+ - Input Constraints: SMILES truncated / filtered to ≤200 tokens, following ChemBERTa-3's recommendation.
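The aggregation described above amounts to a mean ± standard deviation over the three seeded runs of each task. A minimal sketch with hypothetical scores (whether the sample or population standard deviation is used is not stated; sample standard deviation is assumed here):

```python
import statistics

# Hypothetical ROC AUC scores from three seeded runs of one task
scores = [0.8065, 0.7962, 0.8168]

mean = statistics.mean(scores)
std = statistics.stdev(scores)  # sample standard deviation (n-1 denominator)
print(f"{mean:.4f} ± {std:.4f}")  # 0.8065 ± 0.0103
```

Each table cell below is one such mean ± std pair; the AVG columns then average the per-task means across a row.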

### Results
<details><summary>Click to expand</summary>
+
#### ChemBERTa-3 Classification Datasets (ROC AUC - Higher is better)
|
| 337 |
|
| 338 |
| Model | BACE↑ | BBBP↑ | CLINTOX↑ | HIV↑ | SIDER↑ | TOX21↑ | AVG† |
|
| 339 |
| ---------------------------------------------------------------------------- | ----------------- | ----------------- | --------------------- | --------------------- | --------------------- | ----------------- | ------ |
|
|
|
|
| 341 |
| [ChemBERTa-100M-MLM](https://huggingface.co/DeepChem/ChemBERTa-100M-MLM)* | 0.781 ± 0.019 | 0.700 ± 0.027 | 0.979 ± 0.022 | 0.740 ± 0.013 | 0.611 ± 0.002 | 0.718 ± 0.011 | 0.7548 |
|
| 342 |
| [c3-MoLFormer-1.1B](https://huggingface.co/DeepChem/MoLFormer-c3-1.1B)* | 0.819 ± 0.019 | 0.735 ± 0.019 | 0.839 ± 0.013 | 0.762 ± 0.005 | 0.618 ± 0.005 | 0.723 ± 0.012 | 0.7493 |
|
| 343 |
| MoLFormer-LHPC* | **0.887 ± 0.004** | **0.908 ± 0.013** | 0.993 ± 0.004 | 0.750 ± 0.003 | 0.622 ± 0.007 | **0.791 ± 0.014** | 0.8252 |
|
| 344 |
+
| | | | | | | | |
|
| 345 |
| [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 0.8065 ± 0.0103 | 0.7222 ± 0.0150 | 0.9709 ± 0.0227 | ***0.7800 ± 0.0133*** | 0.6419 ± 0.0113 | 0.7400 ± 0.0044 | 0.7769 |
|
| 346 |
| [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.8224 ± 0.0156 | 0.7402 ± 0.0095 | 0.9820 ± 0.0138 | 0.7702 ± 0.0020 | 0.6303 ± 0.0039 | 0.7360 ± 0.0036 | 0.7802 |
|
| 347 |
| [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 0.7924 ± 0.0155 | 0.7282 ± 0.0058 | 0.9725 ± 0.0213 | 0.7770 ± 0.0047 | 0.6542 ± 0.0128 | *0.7646 ± 0.0039* | 0.7815 |
|
| 348 |
| [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.8213 ± 0.0051 | 0.7356 ± 0.0094 | 0.9664 ± 0.0202 | 0.7750 ± 0.0048 | 0.6415 ± 0.0094 | 0.7263 ± 0.0036 | 0.7777 |
|
| 349 |
| [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | *0.8346 ± 0.0045* | *0.7573 ± 0.0120* | ***0.9938 ± 0.0017*** | 0.7737 ± 0.0034 | ***0.6600 ± 0.0061*** | 0.7518 ± 0.0047 | 0.7952 |
|
| 350 |
|
| 351 |
+
#### ChemBERTa-3 Regression Datasets (RMSE - Lower is better)
|
| 352 |
|
| 353 |
| Model | BACE↓ | CLEARANCE↓ | ESOL↓ | FREESOLV↓ | LIPO↓ | AVG‡ |
|
| 354 |
| ---------------------------------------------------------------------------- | --------------------- | ---------------------- | --------------------- | --------------------- | --------------------- | ---------------- |
|
|
|
|
| 356 |
| [ChemBERTa-100M-MLM](https://huggingface.co/DeepChem/ChemBERTa-100M-MLM)* | 1.011 ± 0.038 | 51.582 ± 3.079 | 0.920 ± 0.011 | 0.536 ± 0.016 | 0.758 ± 0.013 | 0.8063 / 10.9614 |
|
| 357 |
| [c3-MoLFormer-1.1B](https://huggingface.co/DeepChem/MoLFormer-c3-1.1B)* | 1.094 ± 0.126 | 52.058 ± 2.767 | 0.829 ± 0.019 | 0.572 ± 0.023 | 0.728 ± 0.016 | 0.8058 / 11.0562 |
|
| 358 |
| MoLFormer-LHPC* | 1.201 ± 0.100 | 45.74 ± 2.637 | 0.848 ± 0.031 | 0.683 ± 0.040 | 0.895 ± 0.080 | 0.9068 / 9.8734 |
|
| 359 |
+
| | | | | | |
|
| 360 |
| [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 1.0893 ± 0.1319 | 49.0005 ± 1.2787 | 0.8456 ± 0.0406 | 0.5491 ± 0.0134 | 0.7147 ± 0.0062 | 0.7997 / 10.4398 |
|
| 361 |
| [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.9931 ± 0.0258 | 45.4951 ± 0.7112 | 0.9319 ± 0.0153 | 0.6049 ± 0.0666 | 0.6874 ± 0.0040 | 0.8043 / 9.7425 |
|
| 362 |
| [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 1.0304 ± 0.1146 | 47.8418 ± 0.4070 | ***0.7669 ± 0.0024*** | 0.5293 ± 0.0267 | 0.6708 ± 0.0074 | 0.7493 / 10.1678 |
|
| 363 |
| [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.9713 ± 0.0224 | ***42.8010 ± 3.3475*** | 0.8169 ± 0.0268 | 0.5445 ± 0.0257 | 0.6820 ± 0.0028 | 0.7537 / 9.1631 |
|
| 364 |
| [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | ***0.9665 ± 0.0250*** | 44.0137 ± 1.1110 | 0.8158 ± 0.0115 | ***0.4979 ± 0.0158*** | ***0.6505 ± 0.0126*** | 0.7327 / 9.3889 |
|
| 365 |
|
+ #### Mswahili et al. [8] Proposed Classification Datasets (ROC AUC - Higher is better)
+
+ | Model | Antimalarial↑ | Cocrystal↑ | COVID19↑ | AVG† |
+ | ---------------------------------------------------------------------------- | --------------------- | --------------------- | --------------------- | ------ |
+ | **Tasks** | 1 | 1 | 1 | |
+ | [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 0.8707 ± 0.0032 | 0.7967 ± 0.0124 | 0.8106 ± 0.0170 | 0.8260 |
+ | [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.8756 ± 0.0056 | 0.8288 ± 0.0143 | 0.8029 ± 0.0159 | 0.8358 |
+ | [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 0.8832 ± 0.0051 | 0.7866 ± 0.0204 | ***0.8308 ± 0.0026*** | 0.8335 |
+ | [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.8819 ± 0.0052 | 0.8550 ± 0.0106 | 0.8013 ± 0.0118 | 0.8461 |
+ | [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | ***0.8966 ± 0.0045*** | ***0.8654 ± 0.0080*** | 0.8132 ± 0.0195 | 0.8584 |
+
+ #### ADME/AstraZeneca Regression Datasets (RMSE - Lower is better)
+
+ Hyperparameter optimization for the TAFT stage appears to induce overfitting: the `MLM + DAPT + TAFT OPT` model shows slightly degraded performance on the ADME/AstraZeneca datasets compared to the `MLM + DAPT + TAFT` model.
+ The `MLM + DAPT + TAFT` model, a merge of unoptimized TAFT checkpoints trained with `max_seq_mean` pooling, achieved the best overall performance across the ADME/AstraZeneca datasets.
+
+ | | ADME | | | | | | AstraZeneca | | | | |
+ | ---------------------------------------------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------ |
+ | Model | microsom_stab_h↓ | microsom_stab_r↓ | permeability↓ | ppb_h↓ | ppb_r↓ | solubility↓ | CL↓ | LogD74↓ | PPB↓ | Solubility↓ | AVG† |
+ | | | | | | | | | | | | |
+ | **Tasks** | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | |
+ | [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 0.4489 ± 0.0114 | 0.4685 ± 0.0225 | 0.5423 ± 0.0076 | 0.8041 ± 0.0378 | 0.7849 ± 0.0394 | 0.5191 ± 0.0147 | **0.4812 ± 0.0073** | 0.8204 ± 0.0070 | 0.1365 ± 0.0066 | 0.9614 ± 0.0189 | 0.5967 |
+ | [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | **0.4199 ± 0.0064** | 0.4568 ± 0.0091 | 0.5042 ± 0.0135 | 0.8376 ± 0.0629 | 0.8446 ± 0.0756 | 0.4800 ± 0.0118 | 0.5351 ± 0.0036 | 0.8191 ± 0.0066 | 0.1237 ± 0.0022 | 0.9280 ± 0.0088 | 0.5949 |
+ | [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 0.4375 ± 0.0027 | 0.4542 ± 0.0024 | 0.5202 ± 0.0141 | **0.7618 ± 0.0138** | 0.7027 ± 0.0023 | 0.5023 ± 0.0107 | 0.5104 ± 0.0110 | 0.7599 ± 0.0050 | 0.1233 ± 0.0088 | 0.8730 ± 0.0112 | 0.5645 |
+ | [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.4206 ± 0.0071 | **0.4400 ± 0.0039** | **0.4899 ± 0.0068** | 0.8927 ± 0.0163 | **0.6942 ± 0.0397** | 0.4641 ± 0.0082 | 0.5022 ± 0.0136 | **0.7467 ± 0.0041** | 0.1195 ± 0.0026 | **0.8564 ± 0.0265** | 0.5626 |
+ | [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | 0.4248 ± 0.0041 | 0.4403 ± 0.0046 | 0.5025 ± 0.0029 | 0.8901 ± 0.0123 | 0.7268 ± 0.0090 | **0.4627 ± 0.0083** | 0.4932 ± 0.0079 | 0.7596 ± 0.0044 | **0.1150 ± 0.0002** | 0.8735 ± 0.0053 | 0.5689 |
+
**Bold** indicates the best result in the column; *italic* indicates the best result among ModChemBERT checkpoints.<br/>
\* Published results from the ChemBERTa-3 [7] paper for optimized chemical language models using DeepChem scaffold splits.<br/>
+ † AVG column shows the mean score across classification tasks.<br/>
+ ‡ AVG column shows the mean scores across regression tasks, excluding and including the clearance score respectively.

</details>
| esol | 64 | sum_mean | N/A | 0.1 | 0.0 | 0.1 |
| freesolv | 32 | max_seq_mha | 5 | 0.1 | 0.0 | 0.0 |
| lipo | 32 | max_seq_mha | 3 | 0.1 | 0.1 | 0.1 |
+ | antimalarial | 16 | max_seq_mha | 3 | 0.1 | 0.1 | 0.1 |
+ | cocrystal | 16 | max_cls | 3 | 0.1 | 0.0 | 0.1 |
+ | covid19 | 16 | sum_mean | N/A | 0.1 | 0.0 | 0.1 |

</details>
```

## References
+ 1. Kallergis, G., Asgari, E., Empting, M., et al. "Domain adaptable language modeling of chemical compounds identifies potent pathoblockers for Pseudomonas aeruginosa." Commun Chem 8, 114 (2025). https://doi.org/10.1038/s42004-025-01484-4
2. Behrendt, Maike, Stefan Sylvius Wagner, and Stefan Harmeling. "MaxPoolBERT: Enhancing BERT Classification via Layer-and Token-Wise Aggregation." arXiv preprint arXiv:2505.15696 (2025).
3. Sultan, Afnan, et al. "Transformers for molecular property prediction: Domain adaptation efficiently improves performance." arXiv preprint arXiv:2503.03360 (2025).
4. Warner, Benjamin, et al. "Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference." arXiv preprint arXiv:2412.13663 (2024).
+ 5. Clavié, Benjamin. "JaColBERTv2.5: Optimising Multi-Vector Retrievers to Create State-of-the-Art Japanese Retrievers with Constrained Resources." arXiv preprint arXiv:2407.20750 (2024).
6. Grattafiori, Aaron, et al. "The llama 3 herd of models." arXiv preprint arXiv:2407.21783 (2024).
+ 7. Singh, R., Barsainyan, A. A., Irfan, R., Amorin, C. J., He, S., Davis, T., et al. "ChemBERTa-3: An Open Source Training Framework for Chemical Foundation Models." ChemRxiv preprint (2025). https://doi.org/10.26434/chemrxiv-2025-4glrl-v2
+ 8. Mswahili, M. E., Hwang, J., Rajapakse, J. C., et al. "Positional embeddings and zero-shot learning using BERT for molecular-property prediction." J Cheminform 17, 17 (2025). https://doi.org/10.1186/s13321-025-00959-9
+ 9. Mswahili, M. E., Ndomba, G. E., Jo, K., Jeong, Y.-S. "Graph Neural Network and BERT Model for Antimalarial Drug Predictions Using Plasmodium Potential Targets." Applied Sciences 14(4), 1472 (2024). https://doi.org/10.3390/app14041472
+ 10. Mswahili, M. E., Lee, M.-J., Martin, G. L., Kim, J., Kim, P., Choi, G. J., Jeong, Y.-S. "Cocrystal Prediction Using Machine Learning Models and Descriptors." Applied Sciences 11, 1323 (2021). https://doi.org/10.3390/app11031323
+ 11. Harigua-Souiai, E., Heinhane, M. M., Abdelkrim, Y. Z., Souiai, O., Abdeljaoued-Tej, I., Guizani, I. "Deep Learning Algorithms Achieved Satisfactory Predictions When Trained on a Novel Collection of Anticoronavirus Molecules." Frontiers in Genetics 12:744170 (2021). https://doi.org/10.3389/fgene.2021.744170
+ 12. Fang, Cheng, et al. "Prospective Validation of Machine Learning Algorithms for Absorption, Distribution, Metabolism, and Excretion Prediction: An Industrial Perspective." Journal of Chemical Information and Modeling 63(11), 3263-3274 (2023). https://doi.org/10.1021/acs.jcim.3c00160
logs_modchembert_classification_ModChemBERT-MLM/modchembert_deepchem_splits_run_antimalarial_epochs100_batch_size32_20250925_224116.log
ADDED
2025-09-25 22:41:16,446 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Running benchmark for dataset: antimalarial
2025-09-25 22:41:16,446 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - dataset: antimalarial, tasks: ['label'], epochs: 100, learning rate: 3e-05
2025-09-25 22:41:16,452 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset antimalarial at 2025-09-25_22-41-16
2025-09-25 22:41:27,144 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5625 | Val mean-roc_auc_score: 0.7623
2025-09-25 22:41:27,145 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 120
2025-09-25 22:41:28,090 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.7623
2025-09-25 22:41:41,247 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4906 | Val mean-roc_auc_score: 0.8371
2025-09-25 22:41:41,438 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 240
2025-09-25 22:41:42,027 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8371
2025-09-25 22:42:02,249 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4188 | Val mean-roc_auc_score: 0.8637
2025-09-25 22:42:02,111 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 360
2025-09-25 22:42:00,371 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8637
2025-09-25 22:42:20,323 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3344 | Val mean-roc_auc_score: 0.8736
2025-09-25 22:42:20,526 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 480
2025-09-25 22:42:21,187 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.8736
2025-09-25 22:42:38,521 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2600 | Val mean-roc_auc_score: 0.8832
2025-09-25 22:42:38,685 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 600
2025-09-25 22:42:39,346 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.8832
2025-09-25 22:42:57,117 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2109 | Val mean-roc_auc_score: 0.8798
2025-09-25 22:43:18,749 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1812 | Val mean-roc_auc_score: 0.8936
2025-09-25 22:43:18,908 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 840
2025-09-25 22:43:19,608 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val mean-roc_auc_score: 0.8936
2025-09-25 22:43:38,275 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1458 | Val mean-roc_auc_score: 0.8719
2025-09-25 22:43:57,031 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1508 | Val mean-roc_auc_score: 0.8796
2025-09-25 22:44:16,773 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1256 | Val mean-roc_auc_score: 0.8871
2025-09-25 22:44:33,871 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1172 | Val mean-roc_auc_score: 0.8873
2025-09-25 22:44:51,128 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0871 | Val mean-roc_auc_score: 0.8741
2025-09-25 22:45:10,797 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0724 | Val mean-roc_auc_score: 0.8816
2025-09-25 22:45:27,719 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0467 | Val mean-roc_auc_score: 0.8720
2025-09-25 22:45:44,609 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0772 | Val mean-roc_auc_score: 0.8803
2025-09-25 22:46:03,867 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0453 | Val mean-roc_auc_score: 0.8809
2025-09-25 22:46:22,309 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0730 | Val mean-roc_auc_score: 0.8811
2025-09-25 22:46:39,170 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0471 | Val mean-roc_auc_score: 0.8782
2025-09-25 22:46:59,841 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0459 | Val mean-roc_auc_score: 0.8773
2025-09-25 22:47:17,161 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0459 | Val mean-roc_auc_score: 0.8781
2025-09-25 22:47:35,225 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0320 | Val mean-roc_auc_score: 0.8819
2025-09-25 22:47:55,039 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0469 | Val mean-roc_auc_score: 0.8864
2025-09-25 22:48:12,210 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0443 | Val mean-roc_auc_score: 0.8873
2025-09-25 22:48:29,978 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0273 | Val mean-roc_auc_score: 0.8863
2025-09-25 22:48:49,754 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0295 | Val mean-roc_auc_score: 0.8858
2025-09-25 22:49:09,258 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0264 | Val mean-roc_auc_score: 0.8854
2025-09-25 22:49:27,725 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0301 | Val mean-roc_auc_score: 0.8845
2025-09-25 22:49:45,082 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0380 | Val mean-roc_auc_score: 0.8844
2025-09-25 22:50:05,014 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0252 | Val mean-roc_auc_score: 0.8862
2025-09-25 22:50:22,579 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0184 | Val mean-roc_auc_score: 0.8828
2025-09-25 22:50:39,553 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0299 | Val mean-roc_auc_score: 0.8828
2025-09-25 22:51:00,108 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0176 | Val mean-roc_auc_score: 0.8804
2025-09-25 22:51:16,911 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0258 | Val mean-roc_auc_score: 0.8812
2025-09-25 22:51:34,858 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0187 | Val mean-roc_auc_score: 0.8806
2025-09-25 22:51:54,680 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0314 | Val mean-roc_auc_score: 0.8830
2025-09-25 22:52:13,110 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0359 | Val mean-roc_auc_score: 0.8819
2025-09-25 22:52:31,475 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0256 | Val mean-roc_auc_score: 0.8844
2025-09-25 22:52:51,514 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0187 | Val mean-roc_auc_score: 0.8860
2025-09-25 22:53:08,722 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0139 | Val mean-roc_auc_score: 0.8866
2025-09-25 22:53:25,726 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0161 | Val mean-roc_auc_score: 0.8850
2025-09-25 22:53:46,932 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.8849
2025-09-25 22:54:06,220 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0215 | Val mean-roc_auc_score: 0.8858
2025-09-25 22:54:22,064 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.8885
2025-09-25 22:54:41,451 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0279 | Val mean-roc_auc_score: 0.8826
2025-09-25 22:54:58,460 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0147 | Val mean-roc_auc_score: 0.8828
2025-09-25 22:55:15,242 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0124 | Val mean-roc_auc_score: 0.8842
2025-09-25 22:55:35,960 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.8851
2025-09-25 22:55:53,091 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0103 | Val mean-roc_auc_score: 0.8832
2025-09-25 22:56:10,750 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0076 | Val mean-roc_auc_score: 0.8845
2025-09-25 22:56:31,958 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0108 | Val mean-roc_auc_score: 0.8832
2025-09-25 22:56:49,118 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0080 | Val mean-roc_auc_score: 0.8873
2025-09-25 22:57:06,778 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0169 | Val mean-roc_auc_score: 0.8852
2025-09-25 22:57:27,211 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0076 | Val mean-roc_auc_score: 0.8855
2025-09-25 22:57:44,883 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0104 | Val mean-roc_auc_score: 0.8833
2025-09-25 22:58:02,977 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0089 | Val mean-roc_auc_score: 0.8823
2025-09-25 22:58:23,646 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0168 | Val mean-roc_auc_score: 0.8905
2025-09-25 22:58:41,908 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0070 | Val mean-roc_auc_score: 0.8876
2025-09-25 22:58:59,282 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0079 | Val mean-roc_auc_score: 0.8856
2025-09-25 22:59:17,042 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0055 | Val mean-roc_auc_score: 0.8848
2025-09-25 22:59:36,341 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0178 | Val mean-roc_auc_score: 0.8749
2025-09-25 22:59:53,671 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0084 | Val mean-roc_auc_score: 0.8759
2025-09-25 23:00:11,618 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0165 | Val mean-roc_auc_score: 0.8792
2025-09-25 23:00:31,005 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0133 | Val mean-roc_auc_score: 0.8798
2025-09-25 23:00:48,044 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0112 | Val mean-roc_auc_score: 0.8809
2025-09-25 23:01:07,181 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0062 | Val mean-roc_auc_score: 0.8818
2025-09-25 23:01:23,905 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0029 | Val mean-roc_auc_score: 0.8824
2025-09-25 23:01:41,394 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0141 | Val mean-roc_auc_score: 0.8869
2025-09-25 23:02:02,270 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.8854
2025-09-25 23:02:18,899 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0073 | Val mean-roc_auc_score: 0.8845
2025-09-25 23:02:35,738 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0059 | Val mean-roc_auc_score: 0.8837
2025-09-25 23:02:54,802 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0093 | Val mean-roc_auc_score: 0.8855
2025-09-25 23:03:11,812 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0051 | Val mean-roc_auc_score: 0.8829
2025-09-25 23:03:28,167 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0100 | Val mean-roc_auc_score: 0.8842
2025-09-25 23:03:47,238 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0049 | Val mean-roc_auc_score: 0.8838
2025-09-25 23:04:05,078 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0054 | Val mean-roc_auc_score: 0.8816
2025-09-25 23:04:20,670 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0072 | Val mean-roc_auc_score: 0.8814
2025-09-25 23:04:39,994 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0067 | Val mean-roc_auc_score: 0.8829
2025-09-25 23:04:56,679 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.8827
2025-09-25 23:05:13,341 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0037 | Val mean-roc_auc_score: 0.8822
2025-09-25 23:05:32,570 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.8822
2025-09-25 23:05:49,289 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.8813
2025-09-25 23:06:05,788 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0059 | Val mean-roc_auc_score: 0.8816
2025-09-25 23:06:25,655 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0037 | Val mean-roc_auc_score: 0.8805
2025-09-25 23:06:44,053 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0079 | Val mean-roc_auc_score: 0.8819
2025-09-25 23:07:02,390 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0083 | Val mean-roc_auc_score: 0.8839
2025-09-25 23:07:23,001 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0073 | Val mean-roc_auc_score: 0.8831
2025-09-25 23:07:41,046 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0053 | Val mean-roc_auc_score: 0.8810
2025-09-25 23:07:58,652 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0057 | Val mean-roc_auc_score: 0.8817
2025-09-25 23:08:18,047 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.8844
2025-09-25 23:08:34,578 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.8835
2025-09-25 23:08:50,963 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0053 | Val mean-roc_auc_score: 0.8828
2025-09-25 23:09:11,550 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0065 | Val mean-roc_auc_score: 0.8814
2025-09-25 23:09:28,052 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0094 | Val mean-roc_auc_score: 0.8797
2025-09-25 23:09:44,584 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.8801
2025-09-25 23:10:04,944 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0048 | Val mean-roc_auc_score: 0.8808
2025-09-25 23:10:22,762 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0152 | Val mean-roc_auc_score: 0.8806
2025-09-25 23:10:39,533 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0050 | Val mean-roc_auc_score: 0.8813
2025-09-25 23:10:58,806 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0134 | Val mean-roc_auc_score: 0.8807
2025-09-25 23:11:15,663 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0044 | Val mean-roc_auc_score: 0.8806
2025-09-25 23:11:34,530 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0066 | Val mean-roc_auc_score: 0.8796
2025-09-25 23:11:35,569 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8729
2025-09-25 23:11:35,986 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset antimalarial at 2025-09-25_23-11-35
2025-09-25 23:11:55,382 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5687 | Val mean-roc_auc_score: 0.7597
2025-09-25 23:11:55,382 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 120
2025-09-25 23:11:56,169 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.7597
2025-09-25 23:12:15,533 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4906 | Val mean-roc_auc_score: 0.8282
2025-09-25 23:12:15,743 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 240
2025-09-25 23:12:16,413 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8282
2025-09-25 23:12:33,670 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4229 | Val mean-roc_auc_score: 0.8626
2025-09-25 23:12:33,937 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 360
2025-09-25 23:12:34,779 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8626
2025-09-25 23:12:54,489 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3438 | Val mean-roc_auc_score: 0.8623
2025-09-25 23:13:12,340 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2612 | Val mean-roc_auc_score: 0.8788
2025-09-25 23:13:12,525 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 600
2025-09-25 23:13:13,195 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.8788
2025-09-25 23:13:29,908 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2188 | Val mean-roc_auc_score: 0.8741
2025-09-25 23:13:49,422 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1695 | Val mean-roc_auc_score: 0.8935
2025-09-25 23:13:49,634 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 840
2025-09-25 23:13:50,280 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val mean-roc_auc_score: 0.8935
2025-09-25 23:14:07,702 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1646 | Val mean-roc_auc_score: 0.8829
2025-09-25 23:14:25,774 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1281 | Val mean-roc_auc_score: 0.8874
2025-09-25 23:14:45,464 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0925 | Val mean-roc_auc_score: 0.8835
2025-09-25 23:15:01,922 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0660 | Val mean-roc_auc_score: 0.8770
2025-09-25 23:15:19,216 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0656 | Val mean-roc_auc_score: 0.8814
2025-09-25 23:15:38,401 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0714 | Val mean-roc_auc_score: 0.8766
2025-09-25 23:15:55,718 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0609 | Val mean-roc_auc_score: 0.8756
2025-09-25 23:16:12,135 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0625 | Val mean-roc_auc_score: 0.8774
2025-09-25 23:16:31,667 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0758 | Val mean-roc_auc_score: 0.8746
2025-09-25 23:16:48,522 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0727 | Val mean-roc_auc_score: 0.8750
2025-09-25 23:17:04,831 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0557 | Val mean-roc_auc_score: 0.8821
2025-09-25 23:17:24,794 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0383 | Val mean-roc_auc_score: 0.8849
2025-09-25 23:17:41,634 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0419 | Val mean-roc_auc_score: 0.8799
2025-09-25 23:17:58,167 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0855 | Val mean-roc_auc_score: 0.8846
2025-09-25 23:18:17,675 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0314 | Val mean-roc_auc_score: 0.8842
2025-09-25 23:18:34,840 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0273 | Val mean-roc_auc_score: 0.8828
2025-09-25 23:18:51,816 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0316 | Val mean-roc_auc_score: 0.8826
2025-09-25 23:19:12,333 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0273 | Val mean-roc_auc_score: 0.8859
2025-09-25 23:19:28,541 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0175 | Val mean-roc_auc_score: 0.8838
2025-09-25 23:19:45,441 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0207 | Val mean-roc_auc_score: 0.8812
2025-09-25 23:20:05,288 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0271 | Val mean-roc_auc_score: 0.8800
2025-09-25 23:20:23,181 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0295 | Val mean-roc_auc_score: 0.8769
2025-09-25 23:20:41,061 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0475 | Val mean-roc_auc_score: 0.8832
2025-09-25 23:21:01,750 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0196 | Val mean-roc_auc_score: 0.8831
2025-09-25 23:21:18,800 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0203 | Val mean-roc_auc_score: 0.8862
2025-09-25 23:21:36,236 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0302 | Val mean-roc_auc_score: 0.8846
2025-09-25 23:21:55,894 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0155 | Val mean-roc_auc_score: 0.8830
2025-09-25 23:22:12,691 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0186 | Val mean-roc_auc_score: 0.8878
2025-09-25 23:22:29,738 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0277 | Val mean-roc_auc_score: 0.8933
2025-09-25 23:22:49,346 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0122 | Val mean-roc_auc_score: 0.8888
2025-09-25 23:23:06,010 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0115 | Val mean-roc_auc_score: 0.8827
2025-09-25 23:23:23,046 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0178 | Val mean-roc_auc_score: 0.8835
2025-09-25 23:23:42,255 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0245 | Val mean-roc_auc_score: 0.8889
2025-09-25 23:23:58,788 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0449 | Val mean-roc_auc_score: 0.8883
2025-09-25 23:24:18,727 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0177 | Val mean-roc_auc_score: 0.8827
2025-09-25 23:24:36,226 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0153 | Val mean-roc_auc_score: 0.8842
2025-09-25 23:24:53,035 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0121 | Val mean-roc_auc_score: 0.8834
2025-09-25 23:25:12,232 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0109 | Val mean-roc_auc_score: 0.8825
2025-09-25 23:25:28,676 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0205 | Val mean-roc_auc_score: 0.8860
2025-09-25 23:25:45,506 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0177 | Val mean-roc_auc_score: 0.8834
2025-09-25 23:26:04,841 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0119 | Val mean-roc_auc_score: 0.8862
2025-09-25 23:26:21,317 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0121 | Val mean-roc_auc_score: 0.8822
2025-09-25 23:26:39,125 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0116 | Val mean-roc_auc_score: 0.8758
2025-09-25 23:26:58,220 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0166 | Val mean-roc_auc_score: 0.8842
2025-09-25 23:27:15,304 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0099 | Val mean-roc_auc_score: 0.8813
2025-09-25 23:27:31,566 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0116 | Val mean-roc_auc_score: 0.8778
2025-09-25 23:27:50,706 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0148 | Val mean-roc_auc_score: 0.8825
2025-09-25 23:28:08,020 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0098 | Val mean-roc_auc_score: 0.8846
2025-09-25 23:28:25,491 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0099 | Val mean-roc_auc_score: 0.8842
2025-09-25 23:28:45,480 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0073 | Val mean-roc_auc_score: 0.8842
2025-09-25 23:29:03,342 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0107 | Val mean-roc_auc_score: 0.8817
2025-09-25 23:29:21,599 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0133 | Val mean-roc_auc_score: 0.8837
2025-09-25 23:29:41,281 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0115 | Val mean-roc_auc_score: 0.8811
2025-09-25 23:29:57,931 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0158 | Val mean-roc_auc_score: 0.8803
2025-09-25 23:30:14,663 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0088 | Val mean-roc_auc_score: 0.8823
2025-09-25 23:30:34,628 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0076 | Val mean-roc_auc_score: 0.8832
2025-09-25 23:30:51,945 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0118 | Val mean-roc_auc_score: 0.8814
2025-09-25 23:31:08,554 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0120 | Val mean-roc_auc_score: 0.8805
2025-09-25 23:31:27,658 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0059 | Val mean-roc_auc_score: 0.8803
2025-09-25 23:31:44,717 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0100 | Val mean-roc_auc_score: 0.8773
2025-09-25 23:32:00,834 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0117 | Val mean-roc_auc_score: 0.8779
2025-09-25 23:32:19,929 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0120 | Val mean-roc_auc_score: 0.8782
2025-09-25 23:32:36,388 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0075 | Val mean-roc_auc_score: 0.8789
2025-09-25 23:32:56,181 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0076 | Val mean-roc_auc_score: 0.8772
2025-09-25 23:33:13,399 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0066 | Val mean-roc_auc_score: 0.8766
2025-09-25 23:33:30,065 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0113 | Val mean-roc_auc_score: 0.8793
2025-09-25 23:33:49,421 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.8806
2025-09-25 23:34:07,214 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0067 | Val mean-roc_auc_score: 0.8809
|
| 203 |
+
2025-09-25 23:34:22,975 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0211 | Val mean-roc_auc_score: 0.8831
|
| 204 |
+
2025-09-25 23:34:42,793 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0065 | Val mean-roc_auc_score: 0.8821
|
| 205 |
+
2025-09-25 23:35:00,118 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.8809
|
| 206 |
+
2025-09-25 23:35:17,452 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.8814
|
| 207 |
+
2025-09-25 23:35:37,116 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0073 | Val mean-roc_auc_score: 0.8803
|
| 208 |
+
2025-09-25 23:35:54,016 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0037 | Val mean-roc_auc_score: 0.8791
|
| 209 |
+
2025-09-25 23:36:10,566 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0047 | Val mean-roc_auc_score: 0.8801
|
| 210 |
+
2025-09-25 23:36:29,605 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0079 | Val mean-roc_auc_score: 0.8826
|
| 211 |
+
2025-09-25 23:36:46,685 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0058 | Val mean-roc_auc_score: 0.8804
|
| 212 |
+
2025-09-25 23:37:03,082 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0057 | Val mean-roc_auc_score: 0.8815
|
| 213 |
+
2025-09-25 23:37:22,409 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0072 | Val mean-roc_auc_score: 0.8799
|
| 214 |
+
2025-09-25 23:37:39,240 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0071 | Val mean-roc_auc_score: 0.8799
|
| 215 |
+
2025-09-25 23:37:56,481 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0033 | Val mean-roc_auc_score: 0.8802
|
| 216 |
+
2025-09-25 23:38:17,401 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0067 | Val mean-roc_auc_score: 0.8816
|
| 217 |
+
2025-09-25 23:38:35,153 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0048 | Val mean-roc_auc_score: 0.8816
|
| 218 |
+
2025-09-25 23:38:52,873 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0036 | Val mean-roc_auc_score: 0.8802
|
| 219 |
+
2025-09-25 23:39:13,215 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0066 | Val mean-roc_auc_score: 0.8840
|
| 220 |
+
2025-09-25 23:39:30,147 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0071 | Val mean-roc_auc_score: 0.8810
|
| 221 |
+
2025-09-25 23:39:48,244 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.8811
|
| 222 |
+
2025-09-25 23:40:08,220 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.8801
|
| 223 |
+
2025-09-25 23:40:24,759 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0020 | Val mean-roc_auc_score: 0.8799
|
| 224 |
+
2025-09-25 23:40:41,359 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0044 | Val mean-roc_auc_score: 0.8804
|
| 225 |
+
2025-09-25 23:41:01,249 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0050 | Val mean-roc_auc_score: 0.8797
|
| 226 |
+
2025-09-25 23:41:19,372 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0044 | Val mean-roc_auc_score: 0.8796
|
| 227 |
+
2025-09-25 23:41:37,987 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.8795
|
| 228 |
+
2025-09-25 23:41:39,122 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8731
|
| 229 |
+
2025-09-25 23:41:39,495 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset antimalarial at 2025-09-25_23-41-39
|
| 230 |
+
2025-09-25 23:41:54,961 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5719 | Val mean-roc_auc_score: 0.7454
|
| 231 |
+
2025-09-25 23:41:54,961 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 120
|
| 232 |
+
2025-09-25 23:41:55,889 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.7454
|
| 233 |
+
2025-09-25 23:42:13,119 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4875 | Val mean-roc_auc_score: 0.8248
|
| 234 |
+
2025-09-25 23:42:13,327 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 240
|
| 235 |
+
2025-09-25 23:42:13,955 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8248
|
| 236 |
+
2025-09-25 23:42:30,090 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4104 | Val mean-roc_auc_score: 0.8579
|
| 237 |
+
2025-09-25 23:42:30,299 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 360
|
| 238 |
+
2025-09-25 23:42:30,936 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8579
|
| 239 |
+
2025-09-25 23:42:49,754 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3391 | Val mean-roc_auc_score: 0.8737
|
| 240 |
+
2025-09-25 23:42:49,956 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 480
|
| 241 |
+
2025-09-25 23:42:50,561 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.8737
|
| 242 |
+
2025-09-25 23:43:07,270 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2825 | Val mean-roc_auc_score: 0.8877
|
| 243 |
+
2025-09-25 23:43:07,483 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 600
|
| 244 |
+
2025-09-25 23:43:08,134 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.8877
|
| 245 |
+
2025-09-25 23:43:25,433 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2406 | Val mean-roc_auc_score: 0.8782
|
| 246 |
+
2025-09-25 23:43:44,383 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1344 | Val mean-roc_auc_score: 0.8830
|
| 247 |
+
2025-09-25 23:44:01,722 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1635 | Val mean-roc_auc_score: 0.8939
|
| 248 |
+
2025-09-25 23:44:01,924 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 960
|
| 249 |
+
2025-09-25 23:44:02,717 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val mean-roc_auc_score: 0.8939
|
| 250 |
+
2025-09-25 23:44:20,403 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1242 | Val mean-roc_auc_score: 0.8909
|
| 251 |
+
2025-09-25 23:44:39,855 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0925 | Val mean-roc_auc_score: 0.8827
|
| 252 |
+
2025-09-25 23:44:56,660 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1578 | Val mean-roc_auc_score: 0.8745
|
| 253 |
+
2025-09-25 23:45:13,178 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0895 | Val mean-roc_auc_score: 0.8852
|
| 254 |
+
2025-09-25 23:45:32,661 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0609 | Val mean-roc_auc_score: 0.8843
|
| 255 |
+
2025-09-25 23:45:49,758 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0816 | Val mean-roc_auc_score: 0.8831
|
| 256 |
+
2025-09-25 23:46:06,953 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0581 | Val mean-roc_auc_score: 0.8875
|
| 257 |
+
2025-09-25 23:46:26,496 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0637 | Val mean-roc_auc_score: 0.8809
|
| 258 |
+
2025-09-25 23:46:45,670 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0586 | Val mean-roc_auc_score: 0.8874
|
| 259 |
+
2025-09-25 23:47:01,976 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0646 | Val mean-roc_auc_score: 0.8851
|
| 260 |
+
2025-09-25 23:47:21,982 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0535 | Val mean-roc_auc_score: 0.8924
|
| 261 |
+
2025-09-25 23:47:38,809 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0358 | Val mean-roc_auc_score: 0.8857
|
| 262 |
+
2025-09-25 23:47:55,613 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0371 | Val mean-roc_auc_score: 0.8858
|
| 263 |
+
2025-09-25 23:48:15,537 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0688 | Val mean-roc_auc_score: 0.8836
|
| 264 |
+
2025-09-25 23:48:32,820 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0398 | Val mean-roc_auc_score: 0.8880
|
| 265 |
+
2025-09-25 23:48:49,814 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0350 | Val mean-roc_auc_score: 0.8867
|
| 266 |
+
2025-09-25 23:49:10,531 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0472 | Val mean-roc_auc_score: 0.8927
|
| 267 |
+
2025-09-25 23:49:26,900 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0473 | Val mean-roc_auc_score: 0.8915
|
| 268 |
+
2025-09-25 23:49:44,393 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0285 | Val mean-roc_auc_score: 0.8934
|
| 269 |
+
2025-09-25 23:50:03,809 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0225 | Val mean-roc_auc_score: 0.8893
|
| 270 |
+
2025-09-25 23:50:20,395 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0303 | Val mean-roc_auc_score: 0.8914
|
| 271 |
+
2025-09-25 23:50:40,227 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0306 | Val mean-roc_auc_score: 0.8905
|
| 272 |
+
2025-09-25 23:50:56,936 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0220 | Val mean-roc_auc_score: 0.8927
|
| 273 |
+
2025-09-25 23:51:13,937 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0209 | Val mean-roc_auc_score: 0.8901
|
| 274 |
+
2025-09-25 23:51:34,078 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0137 | Val mean-roc_auc_score: 0.8904
|
| 275 |
+
2025-09-25 23:51:51,665 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0268 | Val mean-roc_auc_score: 0.8964
|
| 276 |
+
2025-09-25 23:51:51,834 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 4080
|
| 277 |
+
2025-09-25 23:51:52,535 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 34 with val mean-roc_auc_score: 0.8964
|
| 278 |
+
2025-09-25 23:52:12,024 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0168 | Val mean-roc_auc_score: 0.8911
|
| 279 |
+
2025-09-25 23:52:27,986 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0143 | Val mean-roc_auc_score: 0.8868
|
| 280 |
+
2025-09-25 23:52:46,542 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.8910
|
| 281 |
+
2025-09-25 23:53:03,523 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0134 | Val mean-roc_auc_score: 0.8923
|
| 282 |
+
2025-09-25 23:53:23,060 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0204 | Val mean-roc_auc_score: 0.8932
|
| 283 |
+
2025-09-25 23:53:39,742 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0245 | Val mean-roc_auc_score: 0.8907
|
| 284 |
+
2025-09-25 23:53:56,356 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0153 | Val mean-roc_auc_score: 0.8905
|
| 285 |
+
2025-09-25 23:54:18,282 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0242 | Val mean-roc_auc_score: 0.8920
|
| 286 |
+
2025-09-25 23:54:35,782 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0077 | Val mean-roc_auc_score: 0.8923
|
| 287 |
+
2025-09-25 23:54:53,679 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0097 | Val mean-roc_auc_score: 0.8930
|
| 288 |
+
2025-09-25 23:55:11,609 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0104 | Val mean-roc_auc_score: 0.8911
|
| 289 |
+
2025-09-25 23:55:32,443 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.8931
|
| 290 |
+
2025-09-25 23:55:49,410 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0117 | Val mean-roc_auc_score: 0.8895
|
| 291 |
+
2025-09-25 23:56:05,869 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0195 | Val mean-roc_auc_score: 0.8881
|
| 292 |
+
2025-09-25 23:56:25,905 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0161 | Val mean-roc_auc_score: 0.8872
|
| 293 |
+
2025-09-25 23:56:44,724 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0134 | Val mean-roc_auc_score: 0.8877
|
| 294 |
+
2025-09-25 23:57:00,863 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0092 | Val mean-roc_auc_score: 0.8888
|
| 295 |
+
2025-09-25 23:57:21,354 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.8894
|
| 296 |
+
2025-09-25 23:57:38,536 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0173 | Val mean-roc_auc_score: 0.8880
|
| 297 |
+
2025-09-25 23:57:55,347 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0150 | Val mean-roc_auc_score: 0.8915
|
| 298 |
+
2025-09-25 23:58:15,347 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0134 | Val mean-roc_auc_score: 0.8899
|
| 299 |
+
2025-09-25 23:58:32,024 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0122 | Val mean-roc_auc_score: 0.8942
|
| 300 |
+
2025-09-25 23:58:49,266 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0086 | Val mean-roc_auc_score: 0.8931
|
| 301 |
+
2025-09-25 23:59:08,861 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0096 | Val mean-roc_auc_score: 0.8919
|
| 302 |
+
2025-09-25 23:59:26,950 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0176 | Val mean-roc_auc_score: 0.8936
|
| 303 |
+
2025-09-25 23:59:46,109 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0173 | Val mean-roc_auc_score: 0.8900
|
| 304 |
+
2025-09-26 00:00:02,890 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0053 | Val mean-roc_auc_score: 0.8916
|
| 305 |
+
2025-09-26 00:00:20,187 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0106 | Val mean-roc_auc_score: 0.8931
|
| 306 |
+
2025-09-26 00:00:39,515 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0133 | Val mean-roc_auc_score: 0.8909
|
| 307 |
+
2025-09-26 00:00:56,378 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0072 | Val mean-roc_auc_score: 0.8907
|
| 308 |
+
2025-09-26 00:01:13,823 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0100 | Val mean-roc_auc_score: 0.8907
|
| 309 |
+
2025-09-26 00:01:33,068 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0081 | Val mean-roc_auc_score: 0.8893
|
| 310 |
+
2025-09-26 00:01:50,403 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0060 | Val mean-roc_auc_score: 0.8895
|
| 311 |
+
2025-09-26 00:02:07,141 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0089 | Val mean-roc_auc_score: 0.8927
|
| 312 |
+
2025-09-26 00:02:26,734 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0165 | Val mean-roc_auc_score: 0.8890
|
| 313 |
+
2025-09-26 00:02:43,583 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0096 | Val mean-roc_auc_score: 0.8898
|
| 314 |
+
2025-09-26 00:03:00,479 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0057 | Val mean-roc_auc_score: 0.8905
|
| 315 |
+
2025-09-26 00:03:20,406 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0043 | Val mean-roc_auc_score: 0.8897
|
| 316 |
+
2025-09-26 00:03:37,662 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0071 | Val mean-roc_auc_score: 0.8892
|
| 317 |
+
2025-09-26 00:03:54,880 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.8886
|
| 318 |
+
2025-09-26 00:04:15,880 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0045 | Val mean-roc_auc_score: 0.8890
|
| 319 |
+
2025-09-26 00:04:32,539 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0166 | Val mean-roc_auc_score: 0.8890
|
| 320 |
+
2025-09-26 00:04:50,173 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0073 | Val mean-roc_auc_score: 0.8902
|
| 321 |
+
2025-09-26 00:05:10,347 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0073 | Val mean-roc_auc_score: 0.8903
|
| 322 |
+
2025-09-26 00:05:26,912 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0078 | Val mean-roc_auc_score: 0.8909
|
| 323 |
+
2025-09-26 00:05:43,290 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0061 | Val mean-roc_auc_score: 0.8921
|
| 324 |
+
2025-09-26 00:06:02,648 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0097 | Val mean-roc_auc_score: 0.8909
|
| 325 |
+
2025-09-26 00:06:19,615 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0053 | Val mean-roc_auc_score: 0.8906
|
| 326 |
+
2025-09-26 00:06:36,333 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0101 | Val mean-roc_auc_score: 0.8931
|
| 327 |
+
2025-09-26 00:06:56,128 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0107 | Val mean-roc_auc_score: 0.8922
|
| 328 |
+
2025-09-26 00:07:13,834 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.8926
|
| 329 |
+
2025-09-26 00:07:30,990 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0096 | Val mean-roc_auc_score: 0.8932
|
| 330 |
+
2025-09-26 00:07:50,342 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0090 | Val mean-roc_auc_score: 0.8927
|
| 331 |
+
2025-09-26 00:08:07,354 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0076 | Val mean-roc_auc_score: 0.8907
|
| 332 |
+
2025-09-26 00:08:24,546 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0058 | Val mean-roc_auc_score: 0.8901
|
| 333 |
+
2025-09-26 00:08:44,796 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.8917
|
| 334 |
+
2025-09-26 00:09:03,248 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0107 | Val mean-roc_auc_score: 0.8905
|
| 335 |
+
2025-09-26 00:09:22,133 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0083 | Val mean-roc_auc_score: 0.8906
|
| 336 |
+
2025-09-26 00:09:42,350 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0089 | Val mean-roc_auc_score: 0.8937
|
| 337 |
+
2025-09-26 00:10:00,021 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0094 | Val mean-roc_auc_score: 0.8964
|
| 338 |
+
2025-09-26 00:10:18,461 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0080 | Val mean-roc_auc_score: 0.8927
|
| 339 |
+
2025-09-26 00:10:39,692 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0071 | Val mean-roc_auc_score: 0.8936
|
| 340 |
+
2025-09-26 00:10:57,210 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0059 | Val mean-roc_auc_score: 0.8933
|
| 341 |
+
2025-09-26 00:11:16,686 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0054 | Val mean-roc_auc_score: 0.8947
|
| 342 |
+
2025-09-26 00:11:33,949 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.8948
|
| 343 |
+
2025-09-26 00:11:54,047 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0076 | Val mean-roc_auc_score: 0.8953
|
| 344 |
+
2025-09-26 00:11:54,825 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8661
|
| 345 |
+
2025-09-26 00:11:55,305 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg mean-roc_auc_score: 0.8707, Std Dev: 0.0032
|
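The "Final Triplicate Test Results" line above aggregates the three per-run test scores (only two of which, 0.8731 and 0.8661, appear in this chunk). A minimal sketch of that aggregation, assuming a `scores` list of per-run test metrics and a population standard deviation; this is an illustration, not the benchmark script's actual code:

```python
import statistics

def summarize_triplicate(scores):
    # Average and population std dev across triplicate runs,
    # rounded to 4 decimals to match the log's reporting precision.
    # (Whether the script uses population or sample std is an assumption.)
    return round(statistics.fmean(scores), 4), round(statistics.pstdev(scores), 4)

# Illustrative values only; the three actual run scores are not all shown here.
print(summarize_triplicate([0.87, 0.88, 0.86]))  # → (0.87, 0.0082)
```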
logs_modchembert_classification_ModChemBERT-MLM/modchembert_deepchem_splits_run_cocrystal_epochs100_batch_size32_20250926_032625.log ADDED
@@ -0,0 +1,343 @@
| 1 |
+
2025-09-26 03:26:25,847 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Running benchmark for dataset: cocrystal
|
| 2 |
+
2025-09-26 03:26:25,847 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - dataset: cocrystal, tasks: ['label'], epochs: 100, learning rate: 3e-05
|
| 3 |
+
2025-09-26 03:26:25,852 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset cocrystal at 2025-09-26_03-26-25
|
| 4 |
+
2025-09-26 03:26:33,750 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7601 | Val mean-roc_auc_score: 0.7131
|
| 5 |
+
2025-09-26 03:26:33,750 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 37
|
| 6 |
+
2025-09-26 03:26:34,512 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.7131
|
| 7 |
+
2025-09-26 03:26:40,506 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5743 | Val mean-roc_auc_score: 0.7954
|
| 8 |
+
2025-09-26 03:26:40,699 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 74
|
| 9 |
+
2025-09-26 03:26:41,337 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.7954
|
| 10 |
+
2025-09-26 03:26:47,468 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4744 | Val mean-roc_auc_score: 0.8806
|
| 11 |
+
2025-09-26 03:26:47,682 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 111
|
| 12 |
+
2025-09-26 03:26:48,419 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8806
|
| 13 |
+
2025-09-26 03:26:51,593 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4291 | Val mean-roc_auc_score: 0.8371
|
| 14 |
+
2025-09-26 03:26:57,376 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4003 | Val mean-roc_auc_score: 0.8841
|
| 15 |
+
2025-09-26 03:26:57,578 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 185
|
| 16 |
+
2025-09-26 03:26:58,208 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.8841
|
| 17 |
+
2025-09-26 03:27:04,063 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3580 | Val mean-roc_auc_score: 0.8746
|
| 18 |
+
2025-09-26 03:27:09,933 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3074 | Val mean-roc_auc_score: 0.9041
|
| 19 |
+
2025-09-26 03:27:10,133 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 259
|
| 20 |
+
2025-09-26 03:27:10,791 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val mean-roc_auc_score: 0.9041
|
| 21 |
+
2025-09-26 03:27:16,784 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.3041 | Val mean-roc_auc_score: 0.9085
|
| 22 |
+
2025-09-26 03:27:16,991 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 296
|
| 23 |
+
2025-09-26 03:27:17,633 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val mean-roc_auc_score: 0.9085
|
| 24 |
+
2025-09-26 03:27:21,091 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2803 | Val mean-roc_auc_score: 0.8873
|
| 25 |
+
2025-09-26 03:27:27,335 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2399 | Val mean-roc_auc_score: 0.8841
|
| 26 |
+
2025-09-26 03:27:33,470 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.2701 | Val mean-roc_auc_score: 0.8820
|
| 27 |
+
2025-09-26 03:27:40,097 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1934 | Val mean-roc_auc_score: 0.9026
|
| 28 |
+
2025-09-26 03:27:46,593 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.2044 | Val mean-roc_auc_score: 0.8929
2025-09-26 03:27:50,629 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.2743 | Val mean-roc_auc_score: 0.8800
2025-09-26 03:27:57,306 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1537 | Val mean-roc_auc_score: 0.8909
2025-09-26 03:28:04,017 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.2061 | Val mean-roc_auc_score: 0.9033
2025-09-26 03:28:10,847 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1562 | Val mean-roc_auc_score: 0.9108
2025-09-26 03:28:11,027 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 629
2025-09-26 03:28:11,694 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 17 with val mean-roc_auc_score: 0.9108
2025-09-26 03:28:18,242 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1001 | Val mean-roc_auc_score: 0.8987
2025-09-26 03:28:22,085 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1439 | Val mean-roc_auc_score: 0.8919
2025-09-26 03:28:29,030 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1090 | Val mean-roc_auc_score: 0.8866
2025-09-26 03:28:35,023 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0752 | Val mean-roc_auc_score: 0.8724
2025-09-26 03:28:41,660 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0915 | Val mean-roc_auc_score: 0.8884
2025-09-26 03:28:48,031 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.1199 | Val mean-roc_auc_score: 0.8629
2025-09-26 03:28:51,763 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0802 | Val mean-roc_auc_score: 0.8562
2025-09-26 03:28:58,044 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0659 | Val mean-roc_auc_score: 0.8636
2025-09-26 03:29:04,886 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0608 | Val mean-roc_auc_score: 0.8793
2025-09-26 03:29:12,802 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0663 | Val mean-roc_auc_score: 0.8576
2025-09-26 03:29:19,006 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0694 | Val mean-roc_auc_score: 0.8438
2025-09-26 03:29:22,570 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0811 | Val mean-roc_auc_score: 0.8666
2025-09-26 03:29:28,712 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0301 | Val mean-roc_auc_score: 0.8717
2025-09-26 03:29:35,109 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0210 | Val mean-roc_auc_score: 0.8636
2025-09-26 03:29:42,042 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0231 | Val mean-roc_auc_score: 0.8531
2025-09-26 03:29:48,555 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.1496 | Val mean-roc_auc_score: 0.8402
2025-09-26 03:29:51,893 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0769 | Val mean-roc_auc_score: 0.8454
2025-09-26 03:29:57,682 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0321 | Val mean-roc_auc_score: 0.8371
2025-09-26 03:30:03,325 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0454 | Val mean-roc_auc_score: 0.8557
2025-09-26 03:30:09,572 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0519 | Val mean-roc_auc_score: 0.8600
2025-09-26 03:30:15,483 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0251 | Val mean-roc_auc_score: 0.8539
2025-09-26 03:30:18,720 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0153 | Val mean-roc_auc_score: 0.8475
2025-09-26 03:30:24,570 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0225 | Val mean-roc_auc_score: 0.8455
2025-09-26 03:30:30,581 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0101 | Val mean-roc_auc_score: 0.8399
2025-09-26 03:30:36,489 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0140 | Val mean-roc_auc_score: 0.8414
2025-09-26 03:30:42,241 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0151 | Val mean-roc_auc_score: 0.8349
2025-09-26 03:30:47,915 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0226 | Val mean-roc_auc_score: 0.8275
2025-09-26 03:30:51,248 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0187 | Val mean-roc_auc_score: 0.8359
2025-09-26 03:30:56,621 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0093 | Val mean-roc_auc_score: 0.8259
2025-09-26 03:31:02,652 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0183 | Val mean-roc_auc_score: 0.8339
2025-09-26 03:31:08,692 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0058 | Val mean-roc_auc_score: 0.8313
2025-09-26 03:31:13,951 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.8301
2025-09-26 03:31:18,901 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.8267
2025-09-26 03:31:22,056 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0036 | Val mean-roc_auc_score: 0.8255
2025-09-26 03:31:28,021 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0054 | Val mean-roc_auc_score: 0.8250
2025-09-26 03:31:33,890 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0040 | Val mean-roc_auc_score: 0.8238
2025-09-26 03:31:40,031 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0030 | Val mean-roc_auc_score: 0.8232
2025-09-26 03:31:46,740 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0036 | Val mean-roc_auc_score: 0.8254
2025-09-26 03:31:49,753 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.8267
2025-09-26 03:31:55,903 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0008 | Val mean-roc_auc_score: 0.8259
2025-09-26 03:32:01,613 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0034 | Val mean-roc_auc_score: 0.8316
2025-09-26 03:32:07,458 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0018 | Val mean-roc_auc_score: 0.8303
2025-09-26 03:32:13,493 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0015 | Val mean-roc_auc_score: 0.8293
2025-09-26 03:32:16,943 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0018 | Val mean-roc_auc_score: 0.8314
2025-09-26 03:32:23,157 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0027 | Val mean-roc_auc_score: 0.8269
2025-09-26 03:32:29,138 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0052 | Val mean-roc_auc_score: 0.8348
2025-09-26 03:32:34,696 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0083 | Val mean-roc_auc_score: 0.8342
2025-09-26 03:32:40,471 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0014 | Val mean-roc_auc_score: 0.8356
2025-09-26 03:32:46,134 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0037 | Val mean-roc_auc_score: 0.8318
2025-09-26 03:32:49,319 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0150 | Val mean-roc_auc_score: 0.8327
2025-09-26 03:32:54,806 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0820 | Val mean-roc_auc_score: 0.8124
2025-09-26 03:33:00,629 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0353 | Val mean-roc_auc_score: 0.8427
2025-09-26 03:33:06,412 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0263 | Val mean-roc_auc_score: 0.8491
2025-09-26 03:33:11,952 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0150 | Val mean-roc_auc_score: 0.8373
2025-09-26 03:33:17,939 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0091 | Val mean-roc_auc_score: 0.8273
2025-09-26 03:33:21,064 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0003 | Val mean-roc_auc_score: 0.8275
2025-09-26 03:33:26,656 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0073 | Val mean-roc_auc_score: 0.8271
2025-09-26 03:33:32,232 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.8285
2025-09-26 03:33:38,068 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0023 | Val mean-roc_auc_score: 0.8285
2025-09-26 03:33:43,918 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0043 | Val mean-roc_auc_score: 0.8318
2025-09-26 03:33:47,264 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0130 | Val mean-roc_auc_score: 0.8299
2025-09-26 03:33:52,595 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0234 | Val mean-roc_auc_score: 0.8400
2025-09-26 03:33:58,269 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0144 | Val mean-roc_auc_score: 0.8359
2025-09-26 03:34:04,015 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0062 | Val mean-roc_auc_score: 0.8317
2025-09-26 03:34:11,056 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.8332
2025-09-26 03:34:16,997 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0035 | Val mean-roc_auc_score: 0.8341
2025-09-26 03:34:20,183 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0037 | Val mean-roc_auc_score: 0.8340
2025-09-26 03:34:25,775 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0048 | Val mean-roc_auc_score: 0.8327
2025-09-26 03:34:31,604 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.8338
2025-09-26 03:34:37,925 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0032 | Val mean-roc_auc_score: 0.8417
2025-09-26 03:34:43,695 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0031 | Val mean-roc_auc_score: 0.8414
2025-09-26 03:34:46,575 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0027 | Val mean-roc_auc_score: 0.8386
2025-09-26 03:34:52,051 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0053 | Val mean-roc_auc_score: 0.8282
2025-09-26 03:34:57,641 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0045 | Val mean-roc_auc_score: 0.8322
2025-09-26 03:35:03,670 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0038 | Val mean-roc_auc_score: 0.8329
2025-09-26 03:35:09,482 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0021 | Val mean-roc_auc_score: 0.8333
2025-09-26 03:35:15,174 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0015 | Val mean-roc_auc_score: 0.8335
2025-09-26 03:35:18,103 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0024 | Val mean-roc_auc_score: 0.8342
2025-09-26 03:35:23,600 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0017 | Val mean-roc_auc_score: 0.8336
2025-09-26 03:35:29,499 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0013 | Val mean-roc_auc_score: 0.8333
2025-09-26 03:35:34,952 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0020 | Val mean-roc_auc_score: 0.8330
2025-09-26 03:35:41,012 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0031 | Val mean-roc_auc_score: 0.8269
2025-09-26 03:35:46,265 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0015 | Val mean-roc_auc_score: 0.8281
2025-09-26 03:35:46,765 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8042
2025-09-26 03:35:47,106 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset cocrystal at 2025-09-26_03-35-47
2025-09-26 03:35:49,507 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.6486 | Val mean-roc_auc_score: 0.8163
2025-09-26 03:35:49,507 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 37
2025-09-26 03:35:50,292 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.8163
2025-09-26 03:35:56,253 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5034 | Val mean-roc_auc_score: 0.8022
2025-09-26 03:36:01,295 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4801 | Val mean-roc_auc_score: 0.8443
2025-09-26 03:36:01,490 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 111
2025-09-26 03:36:02,118 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8443
2025-09-26 03:36:07,535 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4037 | Val mean-roc_auc_score: 0.8155
2025-09-26 03:36:13,251 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3649 | Val mean-roc_auc_score: 0.8682
2025-09-26 03:36:13,452 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 185
2025-09-26 03:36:14,113 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.8682
2025-09-26 03:36:17,851 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3324 | Val mean-roc_auc_score: 0.8910
2025-09-26 03:36:18,353 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 222
2025-09-26 03:36:19,103 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val mean-roc_auc_score: 0.8910
2025-09-26 03:36:24,892 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3142 | Val mean-roc_auc_score: 0.8613
2025-09-26 03:36:30,833 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2922 | Val mean-roc_auc_score: 0.8839
2025-09-26 03:36:36,761 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2576 | Val mean-roc_auc_score: 0.8473
2025-09-26 03:36:42,243 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2500 | Val mean-roc_auc_score: 0.8893
2025-09-26 03:36:45,496 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.2645 | Val mean-roc_auc_score: 0.8412
2025-09-26 03:36:51,269 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.2213 | Val mean-roc_auc_score: 0.8085
2025-09-26 03:36:56,980 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1909 | Val mean-roc_auc_score: 0.8415
2025-09-26 03:37:02,559 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1910 | Val mean-roc_auc_score: 0.8342
2025-09-26 03:37:08,416 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1520 | Val mean-roc_auc_score: 0.8218
2025-09-26 03:37:13,833 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1571 | Val mean-roc_auc_score: 0.8240
2025-09-26 03:37:17,224 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.2047 | Val mean-roc_auc_score: 0.7879
2025-09-26 03:37:22,908 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1394 | Val mean-roc_auc_score: 0.8287
2025-09-26 03:37:28,351 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1836 | Val mean-roc_auc_score: 0.8753
2025-09-26 03:37:33,867 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1005 | Val mean-roc_auc_score: 0.8601
2025-09-26 03:37:39,097 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0861 | Val mean-roc_auc_score: 0.8311
2025-09-26 03:37:45,069 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1010 | Val mean-roc_auc_score: 0.8317
2025-09-26 03:37:47,771 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0976 | Val mean-roc_auc_score: 0.8443
2025-09-26 03:37:53,646 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0608 | Val mean-roc_auc_score: 0.8534
2025-09-26 03:37:59,062 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0894 | Val mean-roc_auc_score: 0.8381
2025-09-26 03:38:04,606 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0684 | Val mean-roc_auc_score: 0.8527
2025-09-26 03:38:11,931 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0524 | Val mean-roc_auc_score: 0.8307
2025-09-26 03:38:15,385 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0540 | Val mean-roc_auc_score: 0.7894
2025-09-26 03:38:21,180 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0705 | Val mean-roc_auc_score: 0.8286
2025-09-26 03:38:27,333 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0641 | Val mean-roc_auc_score: 0.8135
2025-09-26 03:38:33,024 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0382 | Val mean-roc_auc_score: 0.7995
2025-09-26 03:38:39,428 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.7848
2025-09-26 03:38:45,567 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0086 | Val mean-roc_auc_score: 0.7719
2025-09-26 03:38:48,636 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0183 | Val mean-roc_auc_score: 0.7721
2025-09-26 03:38:54,553 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0591 | Val mean-roc_auc_score: 0.7882
2025-09-26 03:39:00,081 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0811 | Val mean-roc_auc_score: 0.7862
2025-09-26 03:39:05,709 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0319 | Val mean-roc_auc_score: 0.7892
2025-09-26 03:39:11,322 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0238 | Val mean-roc_auc_score: 0.8023
2025-09-26 03:39:14,072 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0182 | Val mean-roc_auc_score: 0.7780
2025-09-26 03:39:19,411 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0250 | Val mean-roc_auc_score: 0.8085
2025-09-26 03:39:25,019 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0218 | Val mean-roc_auc_score: 0.7811
2025-09-26 03:39:30,915 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0183 | Val mean-roc_auc_score: 0.7855
2025-09-26 03:39:36,703 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0159 | Val mean-roc_auc_score: 0.7883
2025-09-26 03:39:42,286 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0485 | Val mean-roc_auc_score: 0.7928
2025-09-26 03:39:45,415 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.8072
2025-09-26 03:39:51,360 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0295 | Val mean-roc_auc_score: 0.7867
2025-09-26 03:39:57,467 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0163 | Val mean-roc_auc_score: 0.8182
2025-09-26 03:40:03,088 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0108 | Val mean-roc_auc_score: 0.8126
2025-09-26 03:40:08,880 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0192 | Val mean-roc_auc_score: 0.8043
2025-09-26 03:40:12,963 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0135 | Val mean-roc_auc_score: 0.7962
2025-09-26 03:40:18,462 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0182 | Val mean-roc_auc_score: 0.7809
2025-09-26 03:40:25,058 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0247 | Val mean-roc_auc_score: 0.7875
2025-09-26 03:40:31,548 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0124 | Val mean-roc_auc_score: 0.8036
2025-09-26 03:40:37,711 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0236 | Val mean-roc_auc_score: 0.8072
2025-09-26 03:40:44,819 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0110 | Val mean-roc_auc_score: 0.8050
2025-09-26 03:40:48,228 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0147 | Val mean-roc_auc_score: 0.8072
2025-09-26 03:40:54,128 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0054 | Val mean-roc_auc_score: 0.8057
2025-09-26 03:41:00,329 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0058 | Val mean-roc_auc_score: 0.8050
2025-09-26 03:41:06,373 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0057 | Val mean-roc_auc_score: 0.8113
2025-09-26 03:41:12,451 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0101 | Val mean-roc_auc_score: 0.7981
2025-09-26 03:41:16,034 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0367 | Val mean-roc_auc_score: 0.8309
2025-09-26 03:41:22,573 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0196 | Val mean-roc_auc_score: 0.8153
2025-09-26 03:41:28,383 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0122 | Val mean-roc_auc_score: 0.8140
2025-09-26 03:41:34,141 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0077 | Val mean-roc_auc_score: 0.7561
2025-09-26 03:41:39,972 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.7738
2025-09-26 03:41:43,272 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0148 | Val mean-roc_auc_score: 0.7854
2025-09-26 03:41:49,221 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0070 | Val mean-roc_auc_score: 0.7839
2025-09-26 03:41:54,973 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0049 | Val mean-roc_auc_score: 0.7675
2025-09-26 03:42:00,264 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0045 | Val mean-roc_auc_score: 0.7699
2025-09-26 03:42:05,772 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0024 | Val mean-roc_auc_score: 0.7697
2025-09-26 03:42:11,649 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0023 | Val mean-roc_auc_score: 0.7679
2025-09-26 03:42:15,042 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0108 | Val mean-roc_auc_score: 0.7523
2025-09-26 03:42:20,362 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0002 | Val mean-roc_auc_score: 0.7514
2025-09-26 03:42:25,531 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.7499
2025-09-26 03:42:31,256 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0192 | Val mean-roc_auc_score: 0.7689
2025-09-26 03:42:36,828 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0106 | Val mean-roc_auc_score: 0.7850
2025-09-26 03:42:42,998 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0069 | Val mean-roc_auc_score: 0.7858
2025-09-26 03:42:45,789 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0035 | Val mean-roc_auc_score: 0.7859
2025-09-26 03:42:50,939 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0075 | Val mean-roc_auc_score: 0.7811
2025-09-26 03:42:56,860 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0027 | Val mean-roc_auc_score: 0.7801
2025-09-26 03:43:02,653 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0030 | Val mean-roc_auc_score: 0.7783
2025-09-26 03:43:09,964 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0028 | Val mean-roc_auc_score: 0.7763
2025-09-26 03:43:12,685 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0060 | Val mean-roc_auc_score: 0.7819
2025-09-26 03:43:17,834 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0011 | Val mean-roc_auc_score: 0.7820
2025-09-26 03:43:23,588 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0015 | Val mean-roc_auc_score: 0.7815
2025-09-26 03:43:28,976 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0065 | Val mean-roc_auc_score: 0.7785
2025-09-26 03:43:34,739 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0024 | Val mean-roc_auc_score: 0.7782
2025-09-26 03:43:40,185 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0026 | Val mean-roc_auc_score: 0.7751
2025-09-26 03:43:43,194 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0121 | Val mean-roc_auc_score: 0.7446
2025-09-26 03:43:48,676 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0036 | Val mean-roc_auc_score: 0.7460
2025-09-26 03:43:54,190 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0030 | Val mean-roc_auc_score: 0.7446
2025-09-26 03:44:00,066 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0095 | Val mean-roc_auc_score: 0.7489
2025-09-26 03:44:05,190 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0103 | Val mean-roc_auc_score: 0.7407
2025-09-26 03:44:10,689 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0053 | Val mean-roc_auc_score: 0.7467
2025-09-26 03:44:13,985 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.7471
2025-09-26 03:44:19,555 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0031 | Val mean-roc_auc_score: 0.7472
2025-09-26 03:44:25,152 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0023 | Val mean-roc_auc_score: 0.7477
2025-09-26 03:44:30,308 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0036 | Val mean-roc_auc_score: 0.7441
|
| 226 |
+
2025-09-26 03:44:36,093 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0032 | Val mean-roc_auc_score: 0.7464
|
| 227 |
+
2025-09-26 03:44:41,988 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0037 | Val mean-roc_auc_score: 0.7442
|
| 228 |
+
2025-09-26 03:44:42,412 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.7792
|
| 229 |
+
2025-09-26 03:44:42,743 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset cocrystal at 2025-09-26_03-44-42
|
| 230 |
+
2025-09-26 03:44:45,810 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7466 | Val mean-roc_auc_score: 0.7670
|
| 231 |
+
2025-09-26 03:44:45,810 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 37
|
| 232 |
+
2025-09-26 03:44:46,580 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.7670
|
| 233 |
+
2025-09-26 03:44:52,680 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5709 | Val mean-roc_auc_score: 0.8660
|
| 234 |
+
2025-09-26 03:44:52,889 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 74
|
| 235 |
+
2025-09-26 03:44:53,546 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8660
|
| 236 |
+
2025-09-26 03:44:59,177 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4375 | Val mean-roc_auc_score: 0.8419
|
| 237 |
+
2025-09-26 03:45:04,962 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4358 | Val mean-roc_auc_score: 0.9030
|
| 238 |
+
2025-09-26 03:45:05,165 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 148
|
| 239 |
+
2025-09-26 03:45:05,847 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.9030
|
| 240 |
+
2025-09-26 03:45:11,416 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3801 | Val mean-roc_auc_score: 0.9095
|
| 241 |
+
2025-09-26 03:45:11,631 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 185
|
| 242 |
+
2025-09-26 03:45:12,256 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.9095
|
| 243 |
+
2025-09-26 03:45:15,125 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3835 | Val mean-roc_auc_score: 0.8751
|
| 244 |
+
2025-09-26 03:45:20,514 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3497 | Val mean-roc_auc_score: 0.8844
|
| 245 |
+
2025-09-26 03:45:25,543 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2973 | Val mean-roc_auc_score: 0.9099
|
| 246 |
+
2025-09-26 03:45:25,747 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 296
|
| 247 |
+
2025-09-26 03:45:26,378 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val mean-roc_auc_score: 0.9099
|
| 248 |
+
2025-09-26 03:45:31,861 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2917 | Val mean-roc_auc_score: 0.9256
|
| 249 |
+
2025-09-26 03:45:32,072 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 333
|
| 250 |
+
2025-09-26 03:45:32,713 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val mean-roc_auc_score: 0.9256
|
| 251 |
+
2025-09-26 03:45:38,581 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2584 | Val mean-roc_auc_score: 0.9042
|
| 252 |
+
2025-09-26 03:45:41,721 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.3058 | Val mean-roc_auc_score: 0.8874
|
| 253 |
+
2025-09-26 03:45:47,581 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.2297 | Val mean-roc_auc_score: 0.8784
|
| 254 |
+
2025-09-26 03:45:53,019 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1968 | Val mean-roc_auc_score: 0.8768
|
| 255 |
+
2025-09-26 03:45:58,517 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1953 | Val mean-roc_auc_score: 0.8763
|
| 256 |
+
2025-09-26 03:46:03,838 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1993 | Val mean-roc_auc_score: 0.8828
|
| 257 |
+
2025-09-26 03:46:09,452 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1495 | Val mean-roc_auc_score: 0.8701
|
| 258 |
+
2025-09-26 03:46:12,911 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1681 | Val mean-roc_auc_score: 0.8851
|
| 259 |
+
2025-09-26 03:46:18,434 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1394 | Val mean-roc_auc_score: 0.8623
|
| 260 |
+
2025-09-26 03:46:24,041 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1133 | Val mean-roc_auc_score: 0.8506
|
| 261 |
+
2025-09-26 03:46:29,379 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1309 | Val mean-roc_auc_score: 0.8827
|
| 262 |
+
2025-09-26 03:46:35,283 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.1001 | Val mean-roc_auc_score: 0.8562
|
| 263 |
+
2025-09-26 03:46:41,145 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0703 | Val mean-roc_auc_score: 0.8444
|
| 264 |
+
2025-09-26 03:46:44,011 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0697 | Val mean-roc_auc_score: 0.8477
|
| 265 |
+
2025-09-26 03:46:49,263 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.1436 | Val mean-roc_auc_score: 0.8287
|
| 266 |
+
2025-09-26 03:46:54,752 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0719 | Val mean-roc_auc_score: 0.8543
|
| 267 |
+
2025-09-26 03:47:00,282 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0583 | Val mean-roc_auc_score: 0.8511
|
| 268 |
+
2025-09-26 03:47:07,551 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0270 | Val mean-roc_auc_score: 0.8570
|
| 269 |
+
2025-09-26 03:47:10,348 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0573 | Val mean-roc_auc_score: 0.8787
|
| 270 |
+
2025-09-26 03:47:16,005 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0853 | Val mean-roc_auc_score: 0.8541
|
| 271 |
+
2025-09-26 03:47:21,461 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0707 | Val mean-roc_auc_score: 0.8730
|
| 272 |
+
2025-09-26 03:47:27,045 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0484 | Val mean-roc_auc_score: 0.8734
|
| 273 |
+
2025-09-26 03:47:32,697 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0268 | Val mean-roc_auc_score: 0.8833
|
| 274 |
+
2025-09-26 03:47:37,884 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0184 | Val mean-roc_auc_score: 0.8729
|
| 275 |
+
2025-09-26 03:47:40,940 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0127 | Val mean-roc_auc_score: 0.8779
|
| 276 |
+
2025-09-26 03:47:46,713 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0270 | Val mean-roc_auc_score: 0.8753
|
| 277 |
+
2025-09-26 03:47:52,360 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0481 | Val mean-roc_auc_score: 0.8877
|
| 278 |
+
2025-09-26 03:47:58,000 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0200 | Val mean-roc_auc_score: 0.8892
|
| 279 |
+
2025-09-26 03:48:03,770 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0186 | Val mean-roc_auc_score: 0.8939
|
| 280 |
+
2025-09-26 03:48:09,917 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0130 | Val mean-roc_auc_score: 0.8870
|
| 281 |
+
2025-09-26 03:48:12,811 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0146 | Val mean-roc_auc_score: 0.8894
|
| 282 |
+
2025-09-26 03:48:18,912 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0186 | Val mean-roc_auc_score: 0.8965
|
| 283 |
+
2025-09-26 03:48:25,287 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0270 | Val mean-roc_auc_score: 0.9004
|
| 284 |
+
2025-09-26 03:48:31,448 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0355 | Val mean-roc_auc_score: 0.8999
|
| 285 |
+
2025-09-26 03:48:37,437 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0859 | Val mean-roc_auc_score: 0.8838
|
| 286 |
+
2025-09-26 03:48:40,990 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0412 | Val mean-roc_auc_score: 0.8916
|
| 287 |
+
2025-09-26 03:48:46,873 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0074 | Val mean-roc_auc_score: 0.8896
|
| 288 |
+
2025-09-26 03:48:52,767 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0181 | Val mean-roc_auc_score: 0.8865
|
| 289 |
+
2025-09-26 03:48:58,464 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0115 | Val mean-roc_auc_score: 0.8855
|
| 290 |
+
2025-09-26 03:49:04,345 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0140 | Val mean-roc_auc_score: 0.8827
|
| 291 |
+
2025-09-26 03:49:10,344 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0139 | Val mean-roc_auc_score: 0.8828
|
| 292 |
+
2025-09-26 03:49:13,582 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0231 | Val mean-roc_auc_score: 0.8871
|
| 293 |
+
2025-09-26 03:49:19,748 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0285 | Val mean-roc_auc_score: 0.8687
|
| 294 |
+
2025-09-26 03:49:25,350 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0209 | Val mean-roc_auc_score: 0.8546
|
| 295 |
+
2025-09-26 03:49:31,190 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0386 | Val mean-roc_auc_score: 0.8641
|
| 296 |
+
2025-09-26 03:49:37,859 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0266 | Val mean-roc_auc_score: 0.8627
|
| 297 |
+
2025-09-26 03:49:40,832 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0114 | Val mean-roc_auc_score: 0.8615
|
| 298 |
+
2025-09-26 03:49:47,233 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0050 | Val mean-roc_auc_score: 0.8647
|
| 299 |
+
2025-09-26 03:49:52,411 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0101 | Val mean-roc_auc_score: 0.8667
|
| 300 |
+
2025-09-26 03:49:57,814 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0127 | Val mean-roc_auc_score: 0.8645
|
| 301 |
+
2025-09-26 03:50:03,435 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0058 | Val mean-roc_auc_score: 0.8672
|
| 302 |
+
2025-09-26 03:50:08,923 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.8670
|
| 303 |
+
2025-09-26 03:50:12,240 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0039 | Val mean-roc_auc_score: 0.8677
|
| 304 |
+
2025-09-26 03:50:17,539 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0033 | Val mean-roc_auc_score: 0.8677
|
| 305 |
+
2025-09-26 03:50:23,200 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0031 | Val mean-roc_auc_score: 0.8670
|
| 306 |
+
2025-09-26 03:50:28,821 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0123 | Val mean-roc_auc_score: 0.8660
|
| 307 |
+
2025-09-26 03:50:34,229 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0030 | Val mean-roc_auc_score: 0.8634
|
| 308 |
+
2025-09-26 03:50:40,315 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0087 | Val mean-roc_auc_score: 0.8635
|
| 309 |
+
2025-09-26 03:50:43,224 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0066 | Val mean-roc_auc_score: 0.8658
|
| 310 |
+
2025-09-26 03:50:48,593 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.8651
|
| 311 |
+
2025-09-26 03:50:54,168 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0027 | Val mean-roc_auc_score: 0.8652
|
| 312 |
+
2025-09-26 03:50:59,903 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0021 | Val mean-roc_auc_score: 0.8646
|
| 313 |
+
2025-09-26 03:51:05,403 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0018 | Val mean-roc_auc_score: 0.8645
|
| 314 |
+
2025-09-26 03:51:08,305 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0009 | Val mean-roc_auc_score: 0.8649
|
| 315 |
+
2025-09-26 03:51:13,727 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.8632
|
| 316 |
+
2025-09-26 03:51:19,225 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0038 | Val mean-roc_auc_score: 0.8686
|
| 317 |
+
2025-09-26 03:51:24,932 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.8685
|
| 318 |
+
2025-09-26 03:51:30,796 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0027 | Val mean-roc_auc_score: 0.8659
|
| 319 |
+
2025-09-26 03:51:36,353 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0023 | Val mean-roc_auc_score: 0.8692
|
| 320 |
+
2025-09-26 03:51:39,956 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.8694
|
| 321 |
+
2025-09-26 03:51:45,452 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0020 | Val mean-roc_auc_score: 0.8679
|
| 322 |
+
2025-09-26 03:51:51,353 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.8756
|
| 323 |
+
2025-09-26 03:51:59,017 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.8691
|
| 324 |
+
2025-09-26 03:52:04,255 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0074 | Val mean-roc_auc_score: 0.8631
|
| 325 |
+
2025-09-26 03:52:09,516 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0038 | Val mean-roc_auc_score: 0.8535
|
| 326 |
+
2025-09-26 03:52:12,638 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0043 | Val mean-roc_auc_score: 0.8537
|
| 327 |
+
2025-09-26 03:52:17,897 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0034 | Val mean-roc_auc_score: 0.8584
|
| 328 |
+
2025-09-26 03:52:23,725 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0040 | Val mean-roc_auc_score: 0.8604
|
| 329 |
+
2025-09-26 03:52:28,945 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0027 | Val mean-roc_auc_score: 0.8623
|
| 330 |
+
2025-09-26 03:52:34,438 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0022 | Val mean-roc_auc_score: 0.8628
|
| 331 |
+
2025-09-26 03:52:37,172 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0018 | Val mean-roc_auc_score: 0.8629
|
| 332 |
+
2025-09-26 03:52:42,747 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0016 | Val mean-roc_auc_score: 0.8632
|
| 333 |
+
2025-09-26 03:52:48,594 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0020 | Val mean-roc_auc_score: 0.8630
|
| 334 |
+
2025-09-26 03:52:54,032 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0085 | Val mean-roc_auc_score: 0.8611
|
| 335 |
+
2025-09-26 03:52:59,910 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0045 | Val mean-roc_auc_score: 0.8578
|
| 336 |
+
2025-09-26 03:53:05,658 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0015 | Val mean-roc_auc_score: 0.8574
|
| 337 |
+
2025-09-26 03:53:08,918 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0015 | Val mean-roc_auc_score: 0.8581
|
| 338 |
+
2025-09-26 03:53:15,163 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0016 | Val mean-roc_auc_score: 0.8577
|
| 339 |
+
2025-09-26 03:53:19,718 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0016 | Val mean-roc_auc_score: 0.8589
|
| 340 |
+
2025-09-26 03:53:23,748 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0017 | Val mean-roc_auc_score: 0.8593
|
| 341 |
+
2025-09-26 03:53:27,689 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0018 | Val mean-roc_auc_score: 0.8603
|
| 342 |
+
2025-09-26 03:53:28,072 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8067
|
| 343 |
+
2025-09-26 03:53:28,351 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg mean-roc_auc_score: 0.7967, Std Dev: 0.0124
|
logs_modchembert_classification_ModChemBERT-MLM/modchembert_deepchem_splits_run_covid19_epochs100_batch_size32_20250926_005642.log
ADDED
@@ -0,0 +1,333 @@
2025-09-26 00:56:42,083 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Running benchmark for dataset: covid19
|
| 2 |
+
2025-09-26 00:56:42,084 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - dataset: covid19, tasks: ['label'], epochs: 100, learning rate: 3e-05
|
| 3 |
+
2025-09-26 00:56:42,088 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset covid19 at 2025-09-26_00-56-42
|
| 4 |
+
2025-09-26 00:56:49,824 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5846 | Val mean-roc_auc_score: 0.8032
|
| 5 |
+
2025-09-26 00:56:49,824 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 65
|
| 6 |
+
2025-09-26 00:56:50,751 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.8032
|
| 7 |
+
2025-09-26 00:57:00,953 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4375 | Val mean-roc_auc_score: 0.8143
|
| 8 |
+
2025-09-26 00:57:01,141 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 130
|
| 9 |
+
2025-09-26 00:57:01,701 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8143
|
| 10 |
+
2025-09-26 00:57:15,472 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3731 | Val mean-roc_auc_score: 0.8106
|
| 11 |
+
2025-09-26 00:57:33,214 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3104 | Val mean-roc_auc_score: 0.8192
|
| 12 |
+
2025-09-26 00:57:33,409 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 260
|
| 13 |
+
2025-09-26 00:57:34,137 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.8192
|
| 14 |
+
2025-09-26 00:57:48,851 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2500 | Val mean-roc_auc_score: 0.8021
|
| 15 |
+
2025-09-26 00:58:07,052 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1808 | Val mean-roc_auc_score: 0.8087
|
| 16 |
+
2025-09-26 00:58:23,962 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1614 | Val mean-roc_auc_score: 0.7898
|
| 17 |
+
2025-09-26 00:58:43,420 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1047 | Val mean-roc_auc_score: 0.7869
|
| 18 |
+
2025-09-26 00:58:59,038 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1106 | Val mean-roc_auc_score: 0.8218
|
| 19 |
+
2025-09-26 00:58:59,216 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 585
|
| 20 |
+
2025-09-26 00:58:59,858 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val mean-roc_auc_score: 0.8218
|
| 21 |
+
2025-09-26 00:59:17,597 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0838 | Val mean-roc_auc_score: 0.8054
|
| 22 |
+
2025-09-26 00:59:37,199 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1000 | Val mean-roc_auc_score: 0.7948
|
| 23 |
+
2025-09-26 00:59:54,177 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0760 | Val mean-roc_auc_score: 0.8087
|
| 24 |
+
2025-09-26 01:00:10,137 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0437 | Val mean-roc_auc_score: 0.8049
|
| 25 |
+
2025-09-26 01:00:30,125 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0723 | Val mean-roc_auc_score: 0.8029
|
| 26 |
+
2025-09-26 01:00:47,244 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0377 | Val mean-roc_auc_score: 0.8108
|
| 27 |
+
2025-09-26 01:01:07,247 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0377 | Val mean-roc_auc_score: 0.8090
|
| 28 |
+
2025-09-26 01:01:23,193 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0316 | Val mean-roc_auc_score: 0.8023
|
| 29 |
+
2025-09-26 01:01:38,064 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0380 | Val mean-roc_auc_score: 0.8061
|
| 30 |
+
2025-09-26 01:01:58,343 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0547 | Val mean-roc_auc_score: 0.7897
|
| 31 |
+
2025-09-26 01:02:15,216 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0529 | Val mean-roc_auc_score: 0.8193
|
| 32 |
+
2025-09-26 01:02:34,894 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0349 | Val mean-roc_auc_score: 0.8093
|
| 33 |
+
2025-09-26 01:02:51,743 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0495 | Val mean-roc_auc_score: 0.8122
|
| 34 |
+
2025-09-26 01:03:08,413 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0261 | Val mean-roc_auc_score: 0.8129
|
| 35 |
+
2025-09-26 01:03:27,984 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0440 | Val mean-roc_auc_score: 0.8034
|
| 36 |
+
2025-09-26 01:03:44,655 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0572 | Val mean-roc_auc_score: 0.8266
|
| 37 |
+
2025-09-26 01:03:44,846 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 1625
|
| 38 |
+
2025-09-26 01:03:45,467 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 25 with val mean-roc_auc_score: 0.8266
|
| 39 |
+
2025-09-26 01:04:04,303 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0305 | Val mean-roc_auc_score: 0.8296
|
| 40 |
+
2025-09-26 01:04:04,808 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 1690
|
| 41 |
+
2025-09-26 01:04:05,499 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 26 with val mean-roc_auc_score: 0.8296
|
| 42 |
+
2025-09-26 01:04:20,944 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0341 | Val mean-roc_auc_score: 0.8172
|
| 43 |
+
2025-09-26 01:04:36,096 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0309 | Val mean-roc_auc_score: 0.8221
|
| 44 |
+
2025-09-26 01:04:54,326 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0306 | Val mean-roc_auc_score: 0.8263
|
| 45 |
+
2025-09-26 01:05:09,731 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0280 | Val mean-roc_auc_score: 0.8341
|
| 46 |
+
2025-09-26 01:05:09,916 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 1950
|
| 47 |
+
2025-09-26 01:05:10,548 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 30 with val mean-roc_auc_score: 0.8341
|
| 48 |
+
2025-09-26 01:05:30,731 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0122 | Val mean-roc_auc_score: 0.8302
|
| 49 |
+
2025-09-26 01:05:46,810 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0260 | Val mean-roc_auc_score: 0.8221
|
| 50 |
+
2025-09-26 01:06:03,118 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0229 | Val mean-roc_auc_score: 0.8281
|
| 51 |
+
2025-09-26 01:06:18,844 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0207 | Val mean-roc_auc_score: 0.8295
|
| 52 |
+
2025-09-26 01:06:34,557 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0183 | Val mean-roc_auc_score: 0.8262
|
| 53 |
+
2025-09-26 01:06:53,484 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0236 | Val mean-roc_auc_score: 0.8259
|
| 54 |
+
2025-09-26 01:07:09,256 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0207 | Val mean-roc_auc_score: 0.8283
|
| 55 |
+
2025-09-26 01:07:27,399 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0209 | Val mean-roc_auc_score: 0.8282
|
| 56 |
+
2025-09-26 01:07:42,935 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0231 | Val mean-roc_auc_score: 0.8271
|
| 57 |
+
2025-09-26 01:07:58,891 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0173 | Val mean-roc_auc_score: 0.8264
|
| 58 |
+
2025-09-26 01:08:17,063 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0302 | Val mean-roc_auc_score: 0.8178
|
| 59 |
+
2025-09-26 01:08:33,390 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0204 | Val mean-roc_auc_score: 0.8203
|
| 60 |
+
2025-09-26 01:08:51,578 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0373 | Val mean-roc_auc_score: 0.8027
|
| 61 |
+
2025-09-26 01:09:06,685 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0435 | Val mean-roc_auc_score: 0.8153
|
| 62 |
+
2025-09-26 01:09:25,146 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0208 | Val mean-roc_auc_score: 0.8201
|
| 63 |
+
2025-09-26 01:09:40,632 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0254 | Val mean-roc_auc_score: 0.8153
|
| 64 |
+
2025-09-26 01:09:58,459 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0190 | Val mean-roc_auc_score: 0.8168
|
| 65 |
+
2025-09-26 01:10:15,629 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0268 | Val mean-roc_auc_score: 0.8208
|
| 66 |
+
2025-09-26 01:10:30,792 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0204 | Val mean-roc_auc_score: 0.8177
|
| 67 |
+
2025-09-26 01:10:49,165 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0242 | Val mean-roc_auc_score: 0.8263
|
| 68 |
+
2025-09-26 01:11:04,549 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0204 | Val mean-roc_auc_score: 0.8249
|
| 69 |
+
2025-09-26 01:11:22,987 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0153 | Val mean-roc_auc_score: 0.8240
2025-09-26 01:11:40,743 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0192 | Val mean-roc_auc_score: 0.8251
2025-09-26 01:11:54,621 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0307 | Val mean-roc_auc_score: 0.8309
2025-09-26 01:12:12,963 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0179 | Val mean-roc_auc_score: 0.8239
2025-09-26 01:12:26,857 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0195 | Val mean-roc_auc_score: 0.8229
2025-09-26 01:12:45,295 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0106 | Val mean-roc_auc_score: 0.8213
2025-09-26 01:13:00,328 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0173 | Val mean-roc_auc_score: 0.8238
2025-09-26 01:13:18,775 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0148 | Val mean-roc_auc_score: 0.8247
2025-09-26 01:13:34,627 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0166 | Val mean-roc_auc_score: 0.8215
2025-09-26 01:13:50,057 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0203 | Val mean-roc_auc_score: 0.8280
2025-09-26 01:14:10,815 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0211 | Val mean-roc_auc_score: 0.8262
2025-09-26 01:14:25,508 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0163 | Val mean-roc_auc_score: 0.8250
2025-09-26 01:14:43,751 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0173 | Val mean-roc_auc_score: 0.8297
2025-09-26 01:14:59,468 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0105 | Val mean-roc_auc_score: 0.8272
2025-09-26 01:15:17,675 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0150 | Val mean-roc_auc_score: 0.8281
2025-09-26 01:15:33,375 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0161 | Val mean-roc_auc_score: 0.8268
2025-09-26 01:15:48,259 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0125 | Val mean-roc_auc_score: 0.8254
2025-09-26 01:16:06,467 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0155 | Val mean-roc_auc_score: 0.8275
2025-09-26 01:16:22,466 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0177 | Val mean-roc_auc_score: 0.8287
2025-09-26 01:16:41,578 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0148 | Val mean-roc_auc_score: 0.8267
2025-09-26 01:16:57,905 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0207 | Val mean-roc_auc_score: 0.8295
2025-09-26 01:17:15,695 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0148 | Val mean-roc_auc_score: 0.8261
2025-09-26 01:17:30,375 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0137 | Val mean-roc_auc_score: 0.8262
2025-09-26 01:17:48,470 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0168 | Val mean-roc_auc_score: 0.8268
2025-09-26 01:18:04,020 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0149 | Val mean-roc_auc_score: 0.8263
2025-09-26 01:18:23,936 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0178 | Val mean-roc_auc_score: 0.8269
2025-09-26 01:18:38,904 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0162 | Val mean-roc_auc_score: 0.8273
2025-09-26 01:18:55,587 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0172 | Val mean-roc_auc_score: 0.8298
2025-09-26 01:19:11,056 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.8309
2025-09-26 01:19:29,320 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0147 | Val mean-roc_auc_score: 0.8283
2025-09-26 01:19:47,594 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0307 | Val mean-roc_auc_score: 0.8210
2025-09-26 01:20:02,775 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0260 | Val mean-roc_auc_score: 0.8332
2025-09-26 01:20:15,601 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0165 | Val mean-roc_auc_score: 0.8306
2025-09-26 01:20:33,339 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0105 | Val mean-roc_auc_score: 0.8310
2025-09-26 01:20:49,286 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0208 | Val mean-roc_auc_score: 0.8219
2025-09-26 01:21:07,486 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0183 | Val mean-roc_auc_score: 0.8235
2025-09-26 01:21:23,458 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0188 | Val mean-roc_auc_score: 0.8260
2025-09-26 01:21:41,591 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0168 | Val mean-roc_auc_score: 0.8289
2025-09-26 01:21:57,253 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0159 | Val mean-roc_auc_score: 0.8262
2025-09-26 01:22:15,611 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0166 | Val mean-roc_auc_score: 0.8272
2025-09-26 01:22:31,560 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0166 | Val mean-roc_auc_score: 0.8189
2025-09-26 01:22:48,886 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0198 | Val mean-roc_auc_score: 0.8259
2025-09-26 01:23:05,513 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0150 | Val mean-roc_auc_score: 0.8224
2025-09-26 01:23:20,793 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0207 | Val mean-roc_auc_score: 0.8282
2025-09-26 01:23:38,328 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0193 | Val mean-roc_auc_score: 0.8225
2025-09-26 01:23:54,028 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0211 | Val mean-roc_auc_score: 0.8238
2025-09-26 01:24:11,644 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0155 | Val mean-roc_auc_score: 0.8248
2025-09-26 01:24:27,311 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0131 | Val mean-roc_auc_score: 0.8243
2025-09-26 01:24:45,137 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0145 | Val mean-roc_auc_score: 0.8242
2025-09-26 01:24:46,532 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.7876
2025-09-26 01:24:46,845 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset covid19 at 2025-09-26_01-24-46
2025-09-26 01:25:00,543 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5962 | Val mean-roc_auc_score: 0.8067
2025-09-26 01:25:00,543 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 65
2025-09-26 01:25:01,591 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.8067
2025-09-26 01:25:19,461 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4271 | Val mean-roc_auc_score: 0.8156
2025-09-26 01:25:19,653 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 130
2025-09-26 01:25:20,284 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8156
2025-09-26 01:25:33,881 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3923 | Val mean-roc_auc_score: 0.8267
2025-09-26 01:25:34,110 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 195
2025-09-26 01:25:34,739 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8267
2025-09-26 01:25:51,660 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3271 | Val mean-roc_auc_score: 0.8069
2025-09-26 01:26:06,024 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2725 | Val mean-roc_auc_score: 0.7808
2025-09-26 01:26:24,046 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2058 | Val mean-roc_auc_score: 0.8028
2025-09-26 01:26:39,913 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1477 | Val mean-roc_auc_score: 0.8087
2025-09-26 01:26:57,566 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1258 | Val mean-roc_auc_score: 0.8003
2025-09-26 01:27:15,576 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1067 | Val mean-roc_auc_score: 0.8111
2025-09-26 01:27:29,985 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0906 | Val mean-roc_auc_score: 0.7930
2025-09-26 01:27:45,185 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0922 | Val mean-roc_auc_score: 0.8047
2025-09-26 01:28:03,115 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0630 | Val mean-roc_auc_score: 0.8055
2025-09-26 01:28:18,312 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0490 | Val mean-roc_auc_score: 0.8004
2025-09-26 01:28:36,476 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0672 | Val mean-roc_auc_score: 0.8019
2025-09-26 01:28:52,158 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0385 | Val mean-roc_auc_score: 0.8055
2025-09-26 01:29:12,824 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0490 | Val mean-roc_auc_score: 0.8108
2025-09-26 01:29:27,163 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0563 | Val mean-roc_auc_score: 0.8109
2025-09-26 01:29:42,425 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0421 | Val mean-roc_auc_score: 0.8104
2025-09-26 01:30:00,423 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0607 | Val mean-roc_auc_score: 0.8214
2025-09-26 01:30:16,036 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0466 | Val mean-roc_auc_score: 0.7984
2025-09-26 01:30:34,153 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0476 | Val mean-roc_auc_score: 0.8192
2025-09-26 01:30:50,547 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0424 | Val mean-roc_auc_score: 0.8090
2025-09-26 01:31:08,603 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0298 | Val mean-roc_auc_score: 0.8071
2025-09-26 01:31:23,983 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0297 | Val mean-roc_auc_score: 0.8205
2025-09-26 01:31:41,844 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0338 | Val mean-roc_auc_score: 0.8160
2025-09-26 01:31:57,378 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0312 | Val mean-roc_auc_score: 0.8067
2025-09-26 01:32:13,543 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0284 | Val mean-roc_auc_score: 0.8168
2025-09-26 01:32:31,209 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0379 | Val mean-roc_auc_score: 0.8133
2025-09-26 01:32:46,915 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0237 | Val mean-roc_auc_score: 0.8137
2025-09-26 01:33:05,061 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0262 | Val mean-roc_auc_score: 0.8123
2025-09-26 01:33:22,086 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0292 | Val mean-roc_auc_score: 0.8185
2025-09-26 01:33:41,085 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0385 | Val mean-roc_auc_score: 0.8175
2025-09-26 01:33:56,680 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0286 | Val mean-roc_auc_score: 0.8215
2025-09-26 01:34:12,203 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0218 | Val mean-roc_auc_score: 0.8132
2025-09-26 01:34:30,767 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0210 | Val mean-roc_auc_score: 0.8058
2025-09-26 01:34:46,255 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0186 | Val mean-roc_auc_score: 0.8056
2025-09-26 01:35:04,822 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0266 | Val mean-roc_auc_score: 0.7935
2025-09-26 01:35:20,415 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0279 | Val mean-roc_auc_score: 0.7865
2025-09-26 01:35:38,567 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0239 | Val mean-roc_auc_score: 0.7847
2025-09-26 01:35:54,571 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0207 | Val mean-roc_auc_score: 0.7873
2025-09-26 01:36:10,136 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0214 | Val mean-roc_auc_score: 0.7872
2025-09-26 01:36:28,415 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0260 | Val mean-roc_auc_score: 0.7928
2025-09-26 01:36:43,993 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0178 | Val mean-roc_auc_score: 0.7902
2025-09-26 01:37:01,897 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0191 | Val mean-roc_auc_score: 0.7900
2025-09-26 01:37:17,671 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0152 | Val mean-roc_auc_score: 0.7920
2025-09-26 01:37:36,000 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0189 | Val mean-roc_auc_score: 0.7942
2025-09-26 01:37:54,070 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0161 | Val mean-roc_auc_score: 0.7957
2025-09-26 01:38:09,553 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0508 | Val mean-roc_auc_score: 0.7857
2025-09-26 01:38:27,780 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0433 | Val mean-roc_auc_score: 0.7972
2025-09-26 01:38:43,563 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0266 | Val mean-roc_auc_score: 0.7927
2025-09-26 01:39:01,626 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0260 | Val mean-roc_auc_score: 0.7946
2025-09-26 01:39:18,172 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0208 | Val mean-roc_auc_score: 0.7922
2025-09-26 01:39:36,757 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0155 | Val mean-roc_auc_score: 0.7922
2025-09-26 01:39:52,725 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0180 | Val mean-roc_auc_score: 0.7910
2025-09-26 01:40:08,431 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0212 | Val mean-roc_auc_score: 0.7914
2025-09-26 01:40:27,077 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0187 | Val mean-roc_auc_score: 0.7887
2025-09-26 01:40:43,810 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0320 | Val mean-roc_auc_score: 0.7842
2025-09-26 01:41:02,078 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0266 | Val mean-roc_auc_score: 0.7903
2025-09-26 01:41:18,003 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0148 | Val mean-roc_auc_score: 0.7950
2025-09-26 01:41:36,383 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0180 | Val mean-roc_auc_score: 0.7986
2025-09-26 01:41:51,836 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0208 | Val mean-roc_auc_score: 0.7925
2025-09-26 01:42:08,668 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0284 | Val mean-roc_auc_score: 0.7913
2025-09-26 01:42:26,186 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0219 | Val mean-roc_auc_score: 0.8023
2025-09-26 01:42:42,221 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0220 | Val mean-roc_auc_score: 0.8011
2025-09-26 01:43:00,754 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0238 | Val mean-roc_auc_score: 0.7989
2025-09-26 01:43:17,234 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0177 | Val mean-roc_auc_score: 0.7977
2025-09-26 01:43:36,956 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0170 | Val mean-roc_auc_score: 0.7964
2025-09-26 01:43:53,964 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0239 | Val mean-roc_auc_score: 0.7960
2025-09-26 01:44:10,587 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0163 | Val mean-roc_auc_score: 0.7976
2025-09-26 01:44:29,631 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0147 | Val mean-roc_auc_score: 0.7967
2025-09-26 01:44:46,333 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0105 | Val mean-roc_auc_score: 0.7988
2025-09-26 01:45:06,483 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0168 | Val mean-roc_auc_score: 0.7986
2025-09-26 01:45:22,978 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0128 | Val mean-roc_auc_score: 0.7987
2025-09-26 01:45:39,654 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0110 | Val mean-roc_auc_score: 0.8010
2025-09-26 01:45:59,364 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0172 | Val mean-roc_auc_score: 0.8004
2025-09-26 01:46:16,769 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0194 | Val mean-roc_auc_score: 0.8000
2025-09-26 01:46:35,317 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0207 | Val mean-roc_auc_score: 0.8002
2025-09-26 01:46:53,880 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0155 | Val mean-roc_auc_score: 0.8001
2025-09-26 01:47:10,863 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0172 | Val mean-roc_auc_score: 0.7940
2025-09-26 01:47:30,898 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0178 | Val mean-roc_auc_score: 0.7952
2025-09-26 01:47:48,378 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0166 | Val mean-roc_auc_score: 0.7963
2025-09-26 01:48:05,976 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.7959
2025-09-26 01:48:25,373 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0404 | Val mean-roc_auc_score: 0.8098
2025-09-26 01:48:42,501 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0432 | Val mean-roc_auc_score: 0.7890
2025-09-26 01:49:01,756 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0275 | Val mean-roc_auc_score: 0.7918
2025-09-26 01:49:18,583 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0196 | Val mean-roc_auc_score: 0.7928
2025-09-26 01:49:35,336 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0203 | Val mean-roc_auc_score: 0.7936
2025-09-26 01:49:55,285 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0207 | Val mean-roc_auc_score: 0.7946
2025-09-26 01:50:12,801 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0156 | Val mean-roc_auc_score: 0.7956
2025-09-26 01:50:32,505 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0206 | Val mean-roc_auc_score: 0.7986
2025-09-26 01:50:50,131 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0150 | Val mean-roc_auc_score: 0.8000
2025-09-26 01:51:07,164 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0187 | Val mean-roc_auc_score: 0.7941
2025-09-26 01:51:27,220 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0185 | Val mean-roc_auc_score: 0.7932
2025-09-26 01:51:42,976 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0254 | Val mean-roc_auc_score: 0.7944
2025-09-26 01:52:03,059 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0169 | Val mean-roc_auc_score: 0.7955
2025-09-26 01:52:20,859 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0178 | Val mean-roc_auc_score: 0.7955
2025-09-26 01:52:38,101 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0058 | Val mean-roc_auc_score: 0.7955
2025-09-26 01:52:58,265 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0153 | Val mean-roc_auc_score: 0.7977
2025-09-26 01:53:16,294 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0146 | Val mean-roc_auc_score: 0.7989
2025-09-26 01:53:33,836 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0159 | Val mean-roc_auc_score: 0.7969
2025-09-26 01:53:35,347 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8280
2025-09-26 01:53:35,853 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset covid19 at 2025-09-26_01-53-35
2025-09-26 01:53:54,168 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5577 | Val mean-roc_auc_score: 0.8144
2025-09-26 01:53:54,168 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 65
2025-09-26 01:53:54,826 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.8144
2025-09-26 01:54:12,638 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4458 | Val mean-roc_auc_score: 0.8279
2025-09-26 01:54:12,847 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 130
2025-09-26 01:54:13,519 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8279
2025-09-26 01:54:32,128 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3750 | Val mean-roc_auc_score: 0.8270
2025-09-26 01:54:49,946 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3229 | Val mean-roc_auc_score: 0.8057
2025-09-26 01:55:05,957 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2963 | Val mean-roc_auc_score: 0.8168
2025-09-26 01:55:25,372 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1971 | Val mean-roc_auc_score: 0.7926
2025-09-26 01:55:42,533 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1705 | Val mean-roc_auc_score: 0.8105
2025-09-26 01:56:01,253 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1453 | Val mean-roc_auc_score: 0.8042
2025-09-26 01:56:17,784 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0933 | Val mean-roc_auc_score: 0.8015
2025-09-26 01:56:34,053 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0838 | Val mean-roc_auc_score: 0.8062
2025-09-26 01:56:53,084 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0854 | Val mean-roc_auc_score: 0.8018
2025-09-26 01:57:10,494 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0697 | Val mean-roc_auc_score: 0.7863
2025-09-26 01:57:29,378 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0587 | Val mean-roc_auc_score: 0.7869
2025-09-26 01:57:46,321 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0629 | Val mean-roc_auc_score: 0.7822
2025-09-26 01:58:02,928 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0462 | Val mean-roc_auc_score: 0.8026
2025-09-26 01:58:23,166 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0371 | Val mean-roc_auc_score: 0.7940
2025-09-26 01:58:39,270 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0377 | Val mean-roc_auc_score: 0.7950
2025-09-26 01:58:58,108 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0358 | Val mean-roc_auc_score: 0.7896
2025-09-26 01:59:15,135 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0384 | Val mean-roc_auc_score: 0.7945
2025-09-26 01:59:32,365 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0312 | Val mean-roc_auc_score: 0.7837
2025-09-26 01:59:51,516 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0361 | Val mean-roc_auc_score: 0.8033
2025-09-26 02:00:09,309 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0424 | Val mean-roc_auc_score: 0.8108
2025-09-26 02:00:28,563 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0351 | Val mean-roc_auc_score: 0.7995
2025-09-26 02:00:45,601 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0310 | Val mean-roc_auc_score: 0.7786
2025-09-26 02:01:02,414 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0484 | Val mean-roc_auc_score: 0.7941
2025-09-26 02:01:21,974 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0284 | Val mean-roc_auc_score: 0.7972
2025-09-26 02:01:40,203 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0305 | Val mean-roc_auc_score: 0.8081
2025-09-26 02:01:57,413 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0354 | Val mean-roc_auc_score: 0.8019
2025-09-26 02:02:16,640 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0262 | Val mean-roc_auc_score: 0.8038
2025-09-26 02:02:33,488 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0245 | Val mean-roc_auc_score: 0.7944
2025-09-26 02:02:53,687 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0120 | Val mean-roc_auc_score: 0.7958
2025-09-26 02:03:09,981 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0216 | Val mean-roc_auc_score: 0.7984
2025-09-26 02:03:26,729 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0278 | Val mean-roc_auc_score: 0.7913
2025-09-26 02:03:45,717 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0204 | Val mean-roc_auc_score: 0.7933
2025-09-26 02:04:02,924 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0258 | Val mean-roc_auc_score: 0.7968
2025-09-26 02:04:22,504 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0235 | Val mean-roc_auc_score: 0.7967
2025-09-26 02:04:40,228 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0496 | Val mean-roc_auc_score: 0.7844
2025-09-26 02:04:57,520 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0255 | Val mean-roc_auc_score: 0.7786
|
| 270 |
+
2025-09-26 02:05:17,382 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0247 | Val mean-roc_auc_score: 0.7916
|
| 271 |
+
2025-09-26 02:05:34,660 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0232 | Val mean-roc_auc_score: 0.7857
|
| 272 |
+
2025-09-26 02:05:54,267 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0189 | Val mean-roc_auc_score: 0.7893
|
| 273 |
+
2025-09-26 02:06:11,912 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0292 | Val mean-roc_auc_score: 0.7843
|
| 274 |
+
2025-09-26 02:06:29,694 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0406 | Val mean-roc_auc_score: 0.7834
|
| 275 |
+
2025-09-26 02:06:50,099 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0232 | Val mean-roc_auc_score: 0.7850
|
| 276 |
+
2025-09-26 02:07:07,697 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0216 | Val mean-roc_auc_score: 0.7860
|
| 277 |
+
2025-09-26 02:07:25,146 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0166 | Val mean-roc_auc_score: 0.7868
|
| 278 |
+
2025-09-26 02:07:44,976 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0214 | Val mean-roc_auc_score: 0.7864
|
| 279 |
+
2025-09-26 02:08:01,135 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0159 | Val mean-roc_auc_score: 0.7841
|
| 280 |
+
2025-09-26 02:08:21,195 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0221 | Val mean-roc_auc_score: 0.7929
|
| 281 |
+
2025-09-26 02:08:38,867 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0242 | Val mean-roc_auc_score: 0.7924
|
| 282 |
+
2025-09-26 02:08:56,424 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0299 | Val mean-roc_auc_score: 0.7940
|
| 283 |
+
2025-09-26 02:09:16,402 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0220 | Val mean-roc_auc_score: 0.7990
|
| 284 |
+
2025-09-26 02:09:33,559 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0200 | Val mean-roc_auc_score: 0.8018
|
| 285 |
+
2025-09-26 02:09:53,111 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0289 | Val mean-roc_auc_score: 0.8018
|
| 286 |
+
2025-09-26 02:10:10,428 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0413 | Val mean-roc_auc_score: 0.7912
|
| 287 |
+
2025-09-26 02:10:27,677 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0590 | Val mean-roc_auc_score: 0.7906
|
| 288 |
+
2025-09-26 02:10:47,911 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0121 | Val mean-roc_auc_score: 0.7973
|
| 289 |
+
2025-09-26 02:11:05,324 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0198 | Val mean-roc_auc_score: 0.7955
|
| 290 |
+
2025-09-26 02:11:22,662 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0221 | Val mean-roc_auc_score: 0.7962
|
| 291 |
+
2025-09-26 02:11:42,986 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0275 | Val mean-roc_auc_score: 0.8079
|
| 292 |
+
2025-09-26 02:12:00,508 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0272 | Val mean-roc_auc_score: 0.8015
|
| 293 |
+
2025-09-26 02:12:20,587 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0250 | Val mean-roc_auc_score: 0.7937
|
| 294 |
+
2025-09-26 02:12:36,542 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0212 | Val mean-roc_auc_score: 0.7944
|
| 295 |
+
2025-09-26 02:12:54,179 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0180 | Val mean-roc_auc_score: 0.7974
|
| 296 |
+
2025-09-26 02:13:14,205 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0208 | Val mean-roc_auc_score: 0.8007
|
| 297 |
+
2025-09-26 02:13:31,694 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0201 | Val mean-roc_auc_score: 0.7983
|
| 298 |
+
2025-09-26 02:13:51,193 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0271 | Val mean-roc_auc_score: 0.7911
|
| 299 |
+
2025-09-26 02:14:08,069 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0137 | Val mean-roc_auc_score: 0.7902
|
| 300 |
+
2025-09-26 02:14:24,972 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0201 | Val mean-roc_auc_score: 0.7864
|
| 301 |
+
2025-09-26 02:14:44,221 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0206 | Val mean-roc_auc_score: 0.7902
|
| 302 |
+
2025-09-26 02:15:00,955 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0228 | Val mean-roc_auc_score: 0.7907
|
| 303 |
+
2025-09-26 02:15:19,994 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0183 | Val mean-roc_auc_score: 0.7873
|
| 304 |
+
2025-09-26 02:15:36,624 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0191 | Val mean-roc_auc_score: 0.7871
|
| 305 |
+
2025-09-26 02:15:53,373 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0316 | Val mean-roc_auc_score: 0.7855
|
| 306 |
+
2025-09-26 02:16:12,668 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0181 | Val mean-roc_auc_score: 0.7864
|
| 307 |
+
2025-09-26 02:16:29,832 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0177 | Val mean-roc_auc_score: 0.7815
|
| 308 |
+
2025-09-26 02:16:50,560 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0170 | Val mean-roc_auc_score: 0.7866
|
| 309 |
+
2025-09-26 02:17:05,518 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0184 | Val mean-roc_auc_score: 0.7859
|
| 310 |
+
2025-09-26 02:17:22,442 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0199 | Val mean-roc_auc_score: 0.7895
|
| 311 |
+
2025-09-26 02:17:41,588 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0250 | Val mean-roc_auc_score: 0.7825
|
| 312 |
+
2025-09-26 02:17:58,389 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0197 | Val mean-roc_auc_score: 0.7823
|
| 313 |
+
2025-09-26 02:18:17,269 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0187 | Val mean-roc_auc_score: 0.7851
|
| 314 |
+
2025-09-26 02:18:34,034 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.7844
|
| 315 |
+
2025-09-26 02:18:50,699 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0173 | Val mean-roc_auc_score: 0.7855
|
| 316 |
+
2025-09-26 02:19:09,858 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0152 | Val mean-roc_auc_score: 0.7847
|
| 317 |
+
2025-09-26 02:19:26,642 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.7865
|
| 318 |
+
2025-09-26 02:19:45,744 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0291 | Val mean-roc_auc_score: 0.7786
|
| 319 |
+
2025-09-26 02:20:02,307 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0177 | Val mean-roc_auc_score: 0.7855
|
| 320 |
+
2025-09-26 02:20:18,906 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0187 | Val mean-roc_auc_score: 0.7810
|
| 321 |
+
2025-09-26 02:20:37,995 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0189 | Val mean-roc_auc_score: 0.7841
|
| 322 |
+
2025-09-26 02:20:54,637 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0169 | Val mean-roc_auc_score: 0.8003
|
| 323 |
+
2025-09-26 02:21:13,632 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0201 | Val mean-roc_auc_score: 0.7931
|
| 324 |
+
2025-09-26 02:21:30,232 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.7948
|
| 325 |
+
2025-09-26 02:21:47,388 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0153 | Val mean-roc_auc_score: 0.8044
|
| 326 |
+
2025-09-26 02:22:04,117 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0168 | Val mean-roc_auc_score: 0.8013
|
| 327 |
+
2025-09-26 02:22:20,565 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0161 | Val mean-roc_auc_score: 0.8040
|
| 328 |
+
2025-09-26 02:22:39,461 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0163 | Val mean-roc_auc_score: 0.8017
|
| 329 |
+
2025-09-26 02:22:56,119 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0148 | Val mean-roc_auc_score: 0.8024
|
| 330 |
+
2025-09-26 02:23:15,097 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0158 | Val mean-roc_auc_score: 0.7951
|
| 331 |
+
2025-09-26 02:23:31,459 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0203 | Val mean-roc_auc_score: 0.7885
|
| 332 |
+
2025-09-26 02:23:32,835 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8162
|
| 333 |
+
2025-09-26 02:23:33,155 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg mean-roc_auc_score: 0.8106, Std Dev: 0.0170
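The summary line above aggregates the test scores from the three triplicate runs into an average and a standard deviation. A minimal sketch of that aggregation, assuming a plain mean and standard deviation over the three per-run scores (only run scores like 0.8162 appear in this excerpt; the other two values below are hypothetical placeholders chosen so the mean matches the logged 0.8106, not values taken from the logs):

```python
import statistics

# Hypothetical per-run test scores; only one (0.8162) is shown in this
# log excerpt. The other two are illustrative placeholders.
scores = [0.8162, 0.8270, 0.7886]

avg = sum(scores) / len(scores)
std = statistics.stdev(scores)  # sample standard deviation across runs
print(f"Avg mean-roc_auc_score: {avg:.4f}, Std Dev: {std:.4f}")
```

Whether the pipeline uses the sample (`statistics.stdev`) or population (`statistics.pstdev`) standard deviation is not visible from the logs; with only three runs the two differ noticeably.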
logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_adme_microsom_stab_h_epochs100_batch_size32_20250926_053743.log
ADDED
@@ -0,0 +1,369 @@
2025-09-26 05:37:43,942 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_microsom_stab_h
2025-09-26 05:37:43,942 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - dataset: adme_microsom_stab_h, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
2025-09-26 05:37:43,950 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_microsom_stab_h at 2025-09-26_05-37-43
2025-09-26 05:37:52,991 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.0602 | Val rms_score: 0.4819
2025-09-26 05:37:52,991 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 54
2025-09-26 05:37:53,802 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4819
2025-09-26 05:38:00,651 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.7070 | Val rms_score: 0.4703
2025-09-26 05:38:00,834 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 108
2025-09-26 05:38:01,375 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4703
2025-09-26 05:38:08,218 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.7037 | Val rms_score: 0.4584
2025-09-26 05:38:08,403 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 162
2025-09-26 05:38:06,504 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4584
2025-09-26 05:38:13,271 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.6211 | Val rms_score: 0.4320
2025-09-26 05:38:13,450 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 216
2025-09-26 05:38:13,992 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.4320
2025-09-26 05:38:20,710 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4815 | Val rms_score: 0.4330
2025-09-26 05:38:27,661 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3568 | Val rms_score: 0.4864
2025-09-26 05:38:39,185 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2708 | Val rms_score: 0.4502
2025-09-26 05:38:53,106 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1973 | Val rms_score: 0.4523
2025-09-26 05:39:05,984 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1609 | Val rms_score: 0.4557
2025-09-26 05:39:16,645 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1281 | Val rms_score: 0.4361
2025-09-26 05:39:28,927 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1152 | Val rms_score: 0.4696
2025-09-26 05:39:38,952 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1081 | Val rms_score: 0.4429
2025-09-26 05:39:50,995 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0991 | Val rms_score: 0.4747
2025-09-26 05:40:03,225 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0880 | Val rms_score: 0.4569
2025-09-26 05:40:13,184 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0738 | Val rms_score: 0.4469
2025-09-26 05:40:25,982 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0752 | Val rms_score: 0.4501
2025-09-26 05:40:36,617 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0668 | Val rms_score: 0.4415
2025-09-26 05:40:48,529 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0689 | Val rms_score: 0.4454
2025-09-26 05:41:02,344 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0619 | Val rms_score: 0.4493
2025-09-26 05:41:12,240 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0561 | Val rms_score: 0.4435
2025-09-26 05:41:25,093 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0593 | Val rms_score: 0.4447
2025-09-26 05:41:35,876 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0564 | Val rms_score: 0.4510
2025-09-26 05:41:48,683 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0539 | Val rms_score: 0.4402
2025-09-26 05:42:01,728 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0584 | Val rms_score: 0.4406
2025-09-26 05:42:11,386 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0525 | Val rms_score: 0.4350
2025-09-26 05:42:23,912 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0547 | Val rms_score: 0.4435
2025-09-26 05:42:34,757 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0451 | Val rms_score: 0.4398
2025-09-26 05:42:47,890 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0501 | Val rms_score: 0.4333
2025-09-26 05:43:00,696 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0480 | Val rms_score: 0.4427
2025-09-26 05:43:11,186 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0486 | Val rms_score: 0.4378
2025-09-26 05:43:24,039 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0440 | Val rms_score: 0.4456
2025-09-26 05:43:35,038 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0438 | Val rms_score: 0.4400
2025-09-26 05:43:47,959 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0425 | Val rms_score: 0.4409
2025-09-26 05:44:00,582 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0447 | Val rms_score: 0.4427
2025-09-26 05:44:10,881 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0414 | Val rms_score: 0.4345
2025-09-26 05:44:23,856 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0405 | Val rms_score: 0.4401
2025-09-26 05:44:35,325 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0376 | Val rms_score: 0.4387
2025-09-26 05:44:50,641 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0397 | Val rms_score: 0.4377
2025-09-26 05:45:04,249 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0430 | Val rms_score: 0.4338
2025-09-26 05:45:15,273 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0408 | Val rms_score: 0.4411
2025-09-26 05:45:28,800 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0458 | Val rms_score: 0.4507
2025-09-26 05:45:39,742 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0370 | Val rms_score: 0.4432
2025-09-26 05:45:53,645 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0366 | Val rms_score: 0.4445
2025-09-26 05:46:03,632 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0376 | Val rms_score: 0.4354
2025-09-26 05:46:16,698 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0365 | Val rms_score: 0.4337
2025-09-26 05:46:29,928 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0379 | Val rms_score: 0.4296
2025-09-26 05:46:30,424 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 2484
2025-09-26 05:46:31,319 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 46 with val rms_score: 0.4296
2025-09-26 05:46:42,486 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0364 | Val rms_score: 0.4361
2025-09-26 05:46:56,549 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0373 | Val rms_score: 0.4352
2025-09-26 05:47:07,984 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0338 | Val rms_score: 0.4437
2025-09-26 05:47:21,996 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0363 | Val rms_score: 0.4378
2025-09-26 05:47:33,688 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0340 | Val rms_score: 0.4345
2025-09-26 05:47:47,313 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0344 | Val rms_score: 0.4375
2025-09-26 05:48:00,490 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0336 | Val rms_score: 0.4319
2025-09-26 05:48:11,322 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0344 | Val rms_score: 0.4292
2025-09-26 05:48:11,571 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 2916
2025-09-26 05:48:12,346 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 54 with val rms_score: 0.4292
2025-09-26 05:48:26,924 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0333 | Val rms_score: 0.4306
2025-09-26 05:48:38,734 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0317 | Val rms_score: 0.4279
2025-09-26 05:48:39,273 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 3024
2025-09-26 05:48:39,929 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 56 with val rms_score: 0.4279
2025-09-26 05:48:51,771 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0295 | Val rms_score: 0.4259
2025-09-26 05:48:51,987 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 3078
2025-09-26 05:48:52,615 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 57 with val rms_score: 0.4259
2025-09-26 05:49:02,631 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0308 | Val rms_score: 0.4331
2025-09-26 05:49:15,340 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0320 | Val rms_score: 0.4386
2025-09-26 05:49:27,629 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0295 | Val rms_score: 0.4359
2025-09-26 05:49:37,673 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0288 | Val rms_score: 0.4420
2025-09-26 05:49:49,260 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0309 | Val rms_score: 0.4306
2025-09-26 05:50:01,723 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0249 | Val rms_score: 0.4334
2025-09-26 05:50:11,921 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0308 | Val rms_score: 0.4344
2025-09-26 05:50:24,287 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0273 | Val rms_score: 0.4388
2025-09-26 05:50:34,515 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0305 | Val rms_score: 0.4354
2025-09-26 05:50:47,041 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0319 | Val rms_score: 0.4322
2025-09-26 05:51:00,020 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0302 | Val rms_score: 0.4480
2025-09-26 05:51:10,225 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0270 | Val rms_score: 0.4353
2025-09-26 05:51:23,273 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0284 | Val rms_score: 0.4311
2025-09-26 05:51:33,722 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0317 | Val rms_score: 0.4289
2025-09-26 05:51:46,492 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0263 | Val rms_score: 0.4327
2025-09-26 05:51:59,341 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0285 | Val rms_score: 0.4323
2025-09-26 05:52:09,286 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0272 | Val rms_score: 0.4312
2025-09-26 05:52:23,407 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0252 | Val rms_score: 0.4306
2025-09-26 05:52:33,596 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0293 | Val rms_score: 0.4295
2025-09-26 05:52:46,057 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0273 | Val rms_score: 0.4312
2025-09-26 05:52:58,344 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0251 | Val rms_score: 0.4338
2025-09-26 05:53:08,535 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0276 | Val rms_score: 0.4310
2025-09-26 05:53:20,091 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0270 | Val rms_score: 0.4301
2025-09-26 05:53:30,357 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0278 | Val rms_score: 0.4337
| 101 |
+
2025-09-26 05:53:42,846 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0279 | Val rms_score: 0.4370
|
| 102 |
+
2025-09-26 05:53:55,325 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0275 | Val rms_score: 0.4322
|
| 103 |
+
2025-09-26 05:54:05,151 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0250 | Val rms_score: 0.4316
|
| 104 |
+
2025-09-26 05:54:17,364 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0246 | Val rms_score: 0.4347
|
| 105 |
+
2025-09-26 05:54:29,691 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0252 | Val rms_score: 0.4361
|
| 106 |
+
2025-09-26 05:54:39,660 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0246 | Val rms_score: 0.4335
|
| 107 |
+
2025-09-26 05:54:51,855 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0248 | Val rms_score: 0.4362
|
| 108 |
+
2025-09-26 05:55:02,186 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0246 | Val rms_score: 0.4325
|
| 109 |
+
2025-09-26 05:55:14,493 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0250 | Val rms_score: 0.4319
|
| 110 |
+
2025-09-26 05:55:27,004 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0248 | Val rms_score: 0.4367
|
| 111 |
+
2025-09-26 05:55:36,490 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0245 | Val rms_score: 0.4374
|
| 112 |
+
2025-09-26 05:55:51,568 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0273 | Val rms_score: 0.4308
|
| 113 |
+
2025-09-26 05:56:01,593 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0256 | Val rms_score: 0.4379
|
| 114 |
+
2025-09-26 05:56:14,140 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0246 | Val rms_score: 0.4334
|
| 115 |
+
2025-09-26 05:56:26,227 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0243 | Val rms_score: 0.4296
|
| 116 |
+
2025-09-26 05:56:35,802 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0240 | Val rms_score: 0.4307
|
| 117 |
+
2025-09-26 05:56:48,026 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0236 | Val rms_score: 0.4355
|
| 118 |
+
2025-09-26 05:56:59,407 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0241 | Val rms_score: 0.4367
2025-09-26 05:57:09,433 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0247 | Val rms_score: 0.4327
2025-09-26 05:57:10,503 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Test rms_score: 0.4402
2025-09-26 05:57:10,898 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_microsom_stab_h at 2025-09-26_05-57-10
2025-09-26 05:57:21,603 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.0231 | Val rms_score: 0.4773
2025-09-26 05:57:21,603 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 54
2025-09-26 05:57:22,236 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4773
2025-09-26 05:57:32,268 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.8555 | Val rms_score: 0.4937
2025-09-26 05:57:44,208 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.6991 | Val rms_score: 0.4478
2025-09-26 05:57:44,410 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 162
2025-09-26 05:57:45,051 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4478
2025-09-26 05:57:57,162 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.6016 | Val rms_score: 0.4510
2025-09-26 05:58:06,825 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4352 | Val rms_score: 0.4385
2025-09-26 05:58:07,038 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 270
2025-09-26 05:58:07,968 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.4385
2025-09-26 05:58:20,010 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3281 | Val rms_score: 0.4548
2025-09-26 05:58:29,722 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2176 | Val rms_score: 0.4892
2025-09-26 05:58:41,310 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1709 | Val rms_score: 0.4479
2025-09-26 05:58:52,958 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1291 | Val rms_score: 0.4516
2025-09-26 05:59:03,069 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1133 | Val rms_score: 0.4824
2025-09-26 05:59:16,876 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1007 | Val rms_score: 0.4571
2025-09-26 05:59:27,944 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0990 | Val rms_score: 0.4602
2025-09-26 05:59:40,786 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1328 | Val rms_score: 0.4677
2025-09-26 05:59:53,325 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0833 | Val rms_score: 0.4732
2025-09-26 06:00:02,512 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0609 | Val rms_score: 0.4463
2025-09-26 06:00:13,720 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0660 | Val rms_score: 0.4495
2025-09-26 06:00:26,542 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0612 | Val rms_score: 0.4583
2025-09-26 06:00:36,418 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0596 | Val rms_score: 0.4540
2025-09-26 06:00:50,403 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0568 | Val rms_score: 0.4545
2025-09-26 06:01:00,276 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0584 | Val rms_score: 0.4434
2025-09-26 06:01:12,692 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0581 | Val rms_score: 0.4459
2025-09-26 06:01:25,418 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0495 | Val rms_score: 0.4494
2025-09-26 06:01:35,064 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0499 | Val rms_score: 0.4404
2025-09-26 06:01:47,445 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0486 | Val rms_score: 0.4496
2025-09-26 06:01:56,650 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0488 | Val rms_score: 0.4535
2025-09-26 06:02:09,183 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0376 | Val rms_score: 0.4461
2025-09-26 06:02:21,829 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0454 | Val rms_score: 0.4492
2025-09-26 06:02:31,670 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0423 | Val rms_score: 0.4457
2025-09-26 06:02:44,196 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0437 | Val rms_score: 0.4404
2025-09-26 06:02:56,994 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0436 | Val rms_score: 0.4571
2025-09-26 06:03:06,908 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0460 | Val rms_score: 0.4317
2025-09-26 06:03:07,391 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 1674
2025-09-26 06:03:08,260 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 31 with val rms_score: 0.4317
2025-09-26 06:03:20,403 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0393 | Val rms_score: 0.4439
2025-09-26 06:03:29,915 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0396 | Val rms_score: 0.4562
2025-09-26 06:03:42,453 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0391 | Val rms_score: 0.4392
2025-09-26 06:03:55,306 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0382 | Val rms_score: 0.4384
2025-09-26 06:04:05,612 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0364 | Val rms_score: 0.4439
2025-09-26 06:04:18,738 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0359 | Val rms_score: 0.4419
2025-09-26 06:04:31,155 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0355 | Val rms_score: 0.4463
2025-09-26 06:04:44,871 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0324 | Val rms_score: 0.4407
2025-09-26 06:04:55,970 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0327 | Val rms_score: 0.4428
2025-09-26 06:05:09,652 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0349 | Val rms_score: 0.4335
2025-09-26 06:05:23,584 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0353 | Val rms_score: 0.4431
2025-09-26 06:05:34,819 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0387 | Val rms_score: 0.4348
2025-09-26 06:05:47,612 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0373 | Val rms_score: 0.4358
2025-09-26 06:05:57,951 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0314 | Val rms_score: 0.4400
2025-09-26 06:06:10,717 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0336 | Val rms_score: 0.4389
2025-09-26 06:06:24,429 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0327 | Val rms_score: 0.4377
2025-09-26 06:06:34,768 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0312 | Val rms_score: 0.4463
2025-09-26 06:06:47,702 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0345 | Val rms_score: 0.4392
2025-09-26 06:06:58,085 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0326 | Val rms_score: 0.4384
2025-09-26 06:07:10,795 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0323 | Val rms_score: 0.4412
2025-09-26 06:07:24,414 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0356 | Val rms_score: 0.4365
2025-09-26 06:07:34,604 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0320 | Val rms_score: 0.4405
2025-09-26 06:07:47,126 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0366 | Val rms_score: 0.4321
2025-09-26 06:07:57,348 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0317 | Val rms_score: 0.4400
2025-09-26 06:08:11,917 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0295 | Val rms_score: 0.4317
2025-09-26 06:08:23,308 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0273 | Val rms_score: 0.4409
2025-09-26 06:08:37,511 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0282 | Val rms_score: 0.4379
2025-09-26 06:08:50,706 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0295 | Val rms_score: 0.4344
2025-09-26 06:09:02,013 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0299 | Val rms_score: 0.4396
2025-09-26 06:09:15,699 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0292 | Val rms_score: 0.4405
2025-09-26 06:09:26,251 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0298 | Val rms_score: 0.4357
2025-09-26 06:09:38,779 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0342 | Val rms_score: 0.4343
2025-09-26 06:09:51,257 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0273 | Val rms_score: 0.4333
2025-09-26 06:10:01,310 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0324 | Val rms_score: 0.4276
2025-09-26 06:10:01,519 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 3510
2025-09-26 06:10:02,275 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 65 with val rms_score: 0.4276
2025-09-26 06:10:15,787 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0272 | Val rms_score: 0.4379
2025-09-26 06:10:27,298 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0276 | Val rms_score: 0.4367
2025-09-26 06:10:40,684 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0275 | Val rms_score: 0.4417
2025-09-26 06:10:51,602 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0263 | Val rms_score: 0.4363
2025-09-26 06:11:05,215 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0268 | Val rms_score: 0.4355
2025-09-26 06:11:18,436 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0280 | Val rms_score: 0.4329
2025-09-26 06:11:29,918 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0265 | Val rms_score: 0.4367
2025-09-26 06:11:43,601 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0277 | Val rms_score: 0.4412
2025-09-26 06:11:55,375 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0246 | Val rms_score: 0.4365
2025-09-26 06:12:09,642 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0264 | Val rms_score: 0.4385
2025-09-26 06:12:20,914 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0308 | Val rms_score: 0.4381
2025-09-26 06:12:34,373 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0247 | Val rms_score: 0.4372
2025-09-26 06:12:47,826 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0226 | Val rms_score: 0.4331
2025-09-26 06:12:58,734 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0240 | Val rms_score: 0.4385
2025-09-26 06:13:12,462 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0264 | Val rms_score: 0.4351
2025-09-26 06:13:22,705 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0237 | Val rms_score: 0.4317
2025-09-26 06:13:36,302 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0240 | Val rms_score: 0.4395
2025-09-26 06:13:49,383 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0240 | Val rms_score: 0.4359
2025-09-26 06:13:59,639 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0257 | Val rms_score: 0.4395
2025-09-26 06:14:12,583 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0245 | Val rms_score: 0.4357
2025-09-26 06:14:22,771 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0245 | Val rms_score: 0.4328
2025-09-26 06:14:35,948 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0247 | Val rms_score: 0.4330
2025-09-26 06:14:48,682 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0240 | Val rms_score: 0.4294
2025-09-26 06:14:58,808 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0243 | Val rms_score: 0.4356
2025-09-26 06:15:11,385 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0224 | Val rms_score: 0.4372
2025-09-26 06:15:21,908 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0244 | Val rms_score: 0.4321
2025-09-26 06:15:35,425 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0247 | Val rms_score: 0.4317
2025-09-26 06:15:48,219 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0236 | Val rms_score: 0.4299
2025-09-26 06:15:58,783 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0258 | Val rms_score: 0.4318
2025-09-26 06:16:11,246 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0216 | Val rms_score: 0.4298
2025-09-26 06:16:21,395 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0224 | Val rms_score: 0.4323
2025-09-26 06:16:35,058 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0233 | Val rms_score: 0.4273
2025-09-26 06:16:35,248 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 5238
2025-09-26 06:16:35,903 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 97 with val rms_score: 0.4273
2025-09-26 06:16:49,824 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0230 | Val rms_score: 0.4266
2025-09-26 06:16:50,074 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 5292
2025-09-26 06:16:50,742 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 98 with val rms_score: 0.4266
2025-09-26 06:17:00,868 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0222 | Val rms_score: 0.4326
2025-09-26 06:17:14,132 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0226 | Val rms_score: 0.4362
2025-09-26 06:17:14,954 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Test rms_score: 0.4414
2025-09-26 06:17:15,419 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_microsom_stab_h at 2025-09-26_06-17-15
2025-09-26 06:17:25,956 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.0417 | Val rms_score: 0.4941
2025-09-26 06:17:25,956 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 54
2025-09-26 06:17:26,780 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4941
2025-09-26 06:17:40,321 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.7656 | Val rms_score: 0.4596
2025-09-26 06:17:40,523 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 108
2025-09-26 06:17:41,188 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4596
2025-09-26 06:17:52,474 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.6713 | Val rms_score: 0.4479
2025-09-26 06:17:52,675 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 162
2025-09-26 06:17:53,271 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4479
2025-09-26 06:18:06,848 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.5586 | Val rms_score: 0.4736
2025-09-26 06:18:18,006 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4028 | Val rms_score: 0.4475
2025-09-26 06:18:18,228 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 270
2025-09-26 06:18:18,879 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.4475
2025-09-26 06:18:31,631 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2982 | Val rms_score: 0.4846
2025-09-26 06:18:45,379 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2176 | Val rms_score: 0.4516
2025-09-26 06:18:55,635 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1719 | Val rms_score: 0.4592
2025-09-26 06:19:09,717 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1406 | Val rms_score: 0.4470
2025-09-26 06:19:09,894 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 486
2025-09-26 06:19:10,560 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.4470
2025-09-26 06:19:21,824 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1250 | Val rms_score: 0.4450
2025-09-26 06:19:22,052 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 540
2025-09-26 06:19:22,674 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.4450
2025-09-26 06:19:36,176 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1209 | Val rms_score: 0.4310
2025-09-26 06:19:36,884 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 594
2025-09-26 06:19:37,690 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 11 with val rms_score: 0.4310
2025-09-26 06:19:48,504 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1081 | Val rms_score: 0.4437
2025-09-26 06:20:01,853 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0845 | Val rms_score: 0.4569
2025-09-26 06:20:15,501 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0851 | Val rms_score: 0.4445
2025-09-26 06:20:26,936 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0758 | Val rms_score: 0.4481
2025-09-26 06:20:40,541 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0738 | Val rms_score: 0.4340
2025-09-26 06:20:52,168 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0799 | Val rms_score: 0.4417
2025-09-26 06:21:05,802 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0683 | Val rms_score: 0.4439
2025-09-26 06:21:17,311 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0763 | Val rms_score: 0.4316
2025-09-26 06:21:30,273 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0723 | Val rms_score: 0.4329
2025-09-26 06:21:43,164 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0611 | Val rms_score: 0.4343
2025-09-26 06:21:53,982 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0671 | Val rms_score: 0.4404
2025-09-26 06:22:07,100 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0592 | Val rms_score: 0.4405
2025-09-26 06:22:18,148 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0538 | Val rms_score: 0.4376
2025-09-26 06:22:30,602 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0550 | Val rms_score: 0.4411
2025-09-26 06:22:44,595 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0471 | Val rms_score: 0.4353
2025-09-26 06:22:56,457 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0498 | Val rms_score: 0.4377
2025-09-26 06:23:10,141 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0488 | Val rms_score: 0.4397
2025-09-26 06:23:21,740 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0472 | Val rms_score: 0.4363
2025-09-26 06:23:35,585 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0490 | Val rms_score: 0.4393
2025-09-26 06:23:47,077 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0469 | Val rms_score: 0.4412
2025-09-26 06:24:01,259 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0477 | Val rms_score: 0.4435
2025-09-26 06:24:15,110 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0457 | Val rms_score: 0.4415
2025-09-26 06:24:27,062 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0475 | Val rms_score: 0.4491
2025-09-26 06:24:41,390 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0446 | Val rms_score: 0.4376
2025-09-26 06:24:52,955 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0396 | Val rms_score: 0.4343
2025-09-26 06:25:06,853 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0411 | Val rms_score: 0.4391
2025-09-26 06:25:18,654 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0412 | Val rms_score: 0.4411
2025-09-26 06:25:31,450 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0399 | Val rms_score: 0.4400
2025-09-26 06:25:44,055 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0394 | Val rms_score: 0.4355
2025-09-26 06:25:54,774 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0371 | Val rms_score: 0.4338
2025-09-26 06:26:08,172 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0396 | Val rms_score: 0.4305
2025-09-26 06:26:08,375 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 2268
2025-09-26 06:26:09,124 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 42 with val rms_score: 0.4305
2025-09-26 06:26:18,982 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0348 | Val rms_score: 0.4331
2025-09-26 06:26:32,044 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0370 | Val rms_score: 0.4392
2025-09-26 06:26:45,535 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0409 | Val rms_score: 0.4412
2025-09-26 06:26:56,715 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0359 | Val rms_score: 0.4303
2025-09-26 06:26:57,206 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 2484
2025-09-26 06:26:57,848 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 46 with val rms_score: 0.4303
2025-09-26 06:27:10,895 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0345 | Val rms_score: 0.4374
2025-09-26 06:27:21,616 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0331 | Val rms_score: 0.4331
2025-09-26 06:27:34,828 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0363 | Val rms_score: 0.4328
2025-09-26 06:27:45,540 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0376 | Val rms_score: 0.4294
2025-09-26 06:27:45,720 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 2700
2025-09-26 06:27:46,398 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 50 with val rms_score: 0.4294
2025-09-26 06:27:58,852 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0399 | Val rms_score: 0.4304
2025-09-26 06:28:11,309 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0400 | Val rms_score: 0.4307
2025-09-26 06:28:21,631 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0353 | Val rms_score: 0.4288
2025-09-26 06:28:21,805 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 2862
2025-09-26 06:28:22,469 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 53 with val rms_score: 0.4288
2025-09-26 06:28:34,876 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0334 | Val rms_score: 0.4318
2025-09-26 06:28:44,908 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0346 | Val rms_score: 0.4344
2025-09-26 06:28:59,670 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0355 | Val rms_score: 0.4323
2025-09-26 06:29:12,250 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0356 | Val rms_score: 0.4392
2025-09-26 06:29:22,355 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0354 | Val rms_score: 0.4329
2025-09-26 06:29:34,931 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0318 | Val rms_score: 0.4273
2025-09-26 06:29:35,122 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 3186
2025-09-26 06:29:35,808 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 59 with val rms_score: 0.4273
2025-09-26 06:29:45,994 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0316 | Val rms_score: 0.4377
2025-09-26 06:29:57,844 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0344 | Val rms_score: 0.4346
2025-09-26 06:30:09,700 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0303 | Val rms_score: 0.4344
2025-09-26 06:30:19,575 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0532 | Val rms_score: 0.4340
2025-09-26 06:30:31,951 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0308 | Val rms_score: 0.4307
2025-09-26 06:30:43,935 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0277 | Val rms_score: 0.4281
2025-09-26 06:30:53,610 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0333 | Val rms_score: 0.4267
2025-09-26 06:30:54,143 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 3564
2025-09-26 06:30:54,809 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 66 with val rms_score: 0.4267
2025-09-26 06:31:07,206 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0302 | Val rms_score: 0.4369
2025-09-26 06:31:16,825 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0295 | Val rms_score: 0.4305
2025-09-26 06:31:29,437 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0317 | Val rms_score: 0.4319
2025-09-26 06:31:42,195 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0286 | Val rms_score: 0.4326
2025-09-26 06:31:52,227 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0299 | Val rms_score: 0.4315
2025-09-26 06:32:04,889 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0304 | Val rms_score: 0.4313
2025-09-26 06:32:14,508 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0286 | Val rms_score: 0.4285
2025-09-26 06:32:27,143 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0292 | Val rms_score: 0.4252
2025-09-26 06:32:27,368 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 3996
2025-09-26 06:32:28,060 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 74 with val rms_score: 0.4252
2025-09-26 06:32:43,008 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0292 | Val rms_score: 0.4359
2025-09-26 06:32:54,248 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0271 | Val rms_score: 0.4310
2025-09-26 06:33:08,914 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0279 | Val rms_score: 0.4321
2025-09-26 06:33:20,671 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0303 | Val rms_score: 0.4314
2025-09-26 06:33:35,157 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0289 | Val rms_score: 0.4298
2025-09-26 06:33:45,493 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0291 | Val rms_score: 0.4306
2025-09-26 06:33:58,739 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0289 | Val rms_score: 0.4314
2025-09-26 06:34:12,524 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0312 | Val rms_score: 0.4265
2025-09-26 06:34:23,271 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0272 | Val rms_score: 0.4247
2025-09-26 06:34:23,443 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 4482
2025-09-26 06:34:24,126 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 83 with val rms_score: 0.4247
2025-09-26 06:34:38,248 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0299 | Val rms_score: 0.4267
2025-09-26 06:34:49,725 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0262 | Val rms_score: 0.4392
2025-09-26 06:35:03,597 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0272 | Val rms_score: 0.4270
2025-09-26 06:35:15,789 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0259 | Val rms_score: 0.4314
2025-09-26 06:35:29,965 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0264 | Val rms_score: 0.4342
2025-09-26 06:35:41,501 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0280 | Val rms_score: 0.4320
2025-09-26 06:35:55,652 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0262 | Val rms_score: 0.4305
2025-09-26 06:36:09,216 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0264 | Val rms_score: 0.4300
2025-09-26 06:36:20,869 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0276 | Val rms_score: 0.4344
2025-09-26 06:36:35,054 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0293 | Val rms_score: 0.4294
2025-09-26 06:36:45,532 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0259 | Val rms_score: 0.4309
2025-09-26 06:36:59,174 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0251 | Val rms_score: 0.4301
2025-09-26 06:37:09,849 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0240 | Val rms_score: 0.4324
2025-09-26 06:37:23,906 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0245 | Val rms_score: 0.4316
2025-09-26 06:37:36,166 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0265 | Val rms_score: 0.4317
2025-09-26 06:37:46,419 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0260 | Val rms_score: 0.4367
2025-09-26 06:37:59,627 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0262 | Val rms_score: 0.4302
2025-09-26 06:38:00,385 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Test rms_score: 0.4651
2025-09-26 06:38:00,719 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.4489, Std Dev: 0.0114
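The "Final Triplicate Test Results" line above reduces the three per-run test rms scores to a mean and standard deviation. A minimal sketch of that aggregation, assuming a hypothetical helper name (the log does not show whether the benchmark script uses the sample or population standard deviation; this sketch uses the sample form, and the example scores are made up, not taken from this log):

```python
import statistics


def summarize_triplicate(scores: list[float]) -> tuple[float, float]:
    """Return (mean, sample standard deviation) of triplicate test scores."""
    return statistics.mean(scores), statistics.stdev(scores)


# Made-up example scores for illustration only.
avg, std = summarize_triplicate([0.46, 0.44, 0.45])
print(f"Avg rms_score: {avg:.4f}, Std Dev: {std:.4f}")
```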
logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_adme_microsom_stab_r_epochs100_batch_size32_20250926_075155.log
ADDED
@@ -0,0 +1,327 @@
2025-09-26 07:51:55,917 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_microsom_stab_r
2025-09-26 07:51:55,917 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - dataset: adme_microsom_stab_r, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
2025-09-26 07:51:55,923 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_microsom_stab_r at 2025-09-26_07-51-55
2025-09-26 07:52:04,967 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.9154 | Val rms_score: 0.6173
2025-09-26 07:52:04,968 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 68
2025-09-26 07:52:05,750 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.6173
2025-09-26 07:52:13,302 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.6840 | Val rms_score: 0.5431
2025-09-26 07:52:13,478 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 136
2025-09-26 07:52:14,049 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5431
2025-09-26 07:52:21,246 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.5156 | Val rms_score: 0.5279
2025-09-26 07:52:21,429 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 204
2025-09-26 07:52:22,077 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.5279
2025-09-26 07:52:29,642 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4412 | Val rms_score: 0.5454
2025-09-26 07:52:36,874 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3844 | Val rms_score: 0.5296
2025-09-26 07:52:44,541 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2500 | Val rms_score: 0.5630
2025-09-26 07:52:52,136 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2022 | Val rms_score: 0.5342
2025-09-26 07:52:59,527 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1456 | Val rms_score: 0.5559
2025-09-26 07:53:06,824 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1504 | Val rms_score: 0.5489
2025-09-26 07:53:15,479 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1153 | Val rms_score: 0.5703
2025-09-26 07:53:29,441 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0977 | Val rms_score: 0.5672
2025-09-26 07:53:43,480 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0835 | Val rms_score: 0.5733
2025-09-26 07:53:57,263 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0804 | Val rms_score: 0.5529
2025-09-26 07:54:09,874 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0691 | Val rms_score: 0.5533
2025-09-26 07:54:25,350 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0645 | Val rms_score: 0.5625
2025-09-26 07:54:38,813 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0653 | Val rms_score: 0.5679
2025-09-26 07:54:52,710 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0650 | Val rms_score: 0.5552
2025-09-26 07:55:06,729 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0579 | Val rms_score: 0.5594
2025-09-26 07:55:20,657 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0563 | Val rms_score: 0.5645
2025-09-26 07:55:33,835 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0547 | Val rms_score: 0.5659
2025-09-26 07:55:47,364 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0583 | Val rms_score: 0.5557
2025-09-26 07:56:00,977 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0528 | Val rms_score: 0.5587
2025-09-26 07:56:14,822 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0498 | Val rms_score: 0.5597
2025-09-26 07:56:28,699 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0498 | Val rms_score: 0.5510
2025-09-26 07:56:42,090 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0471 | Val rms_score: 0.5522
2025-09-26 07:56:56,116 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0443 | Val rms_score: 0.5529
2025-09-26 07:57:10,243 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0451 | Val rms_score: 0.5512
2025-09-26 07:57:22,979 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0479 | Val rms_score: 0.5540
2025-09-26 07:57:36,693 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0450 | Val rms_score: 0.5757
2025-09-26 07:57:52,186 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0422 | Val rms_score: 0.5558
2025-09-26 07:58:05,437 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0415 | Val rms_score: 0.5495
2025-09-26 07:58:19,908 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0400 | Val rms_score: 0.5559
2025-09-26 07:58:33,246 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0423 | Val rms_score: 0.5489
2025-09-26 07:58:47,041 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0391 | Val rms_score: 0.5420
2025-09-26 07:59:00,468 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0365 | Val rms_score: 0.5501
2025-09-26 07:59:13,868 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0368 | Val rms_score: 0.5548
2025-09-26 07:59:27,606 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0352 | Val rms_score: 0.5497
2025-09-26 07:59:41,572 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0368 | Val rms_score: 0.5497
2025-09-26 07:59:55,488 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0352 | Val rms_score: 0.5473
2025-09-26 08:00:09,073 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0359 | Val rms_score: 0.5453
2025-09-26 08:00:21,544 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0333 | Val rms_score: 0.5541
2025-09-26 08:00:35,388 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0336 | Val rms_score: 0.5557
2025-09-26 08:00:48,311 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0303 | Val rms_score: 0.5534
2025-09-26 08:01:02,567 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0331 | Val rms_score: 0.5498
2025-09-26 08:01:16,533 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0319 | Val rms_score: 0.5514
2025-09-26 08:01:30,734 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0343 | Val rms_score: 0.5436
2025-09-26 08:01:44,919 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0322 | Val rms_score: 0.5430
2025-09-26 08:01:58,049 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0315 | Val rms_score: 0.5461
2025-09-26 08:02:12,306 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0298 | Val rms_score: 0.5402
2025-09-26 08:02:25,612 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0335 | Val rms_score: 0.5496
2025-09-26 08:02:39,524 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0308 | Val rms_score: 0.5472
2025-09-26 08:02:53,664 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0299 | Val rms_score: 0.5449
2025-09-26 08:03:07,097 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0325 | Val rms_score: 0.5432
2025-09-26 08:03:21,075 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0288 | Val rms_score: 0.5461
2025-09-26 08:03:34,570 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0312 | Val rms_score: 0.5518
2025-09-26 08:03:48,409 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0305 | Val rms_score: 0.5464
2025-09-26 08:04:02,034 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0303 | Val rms_score: 0.5429
2025-09-26 08:04:14,195 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0289 | Val rms_score: 0.5469
2025-09-26 08:04:29,478 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0278 | Val rms_score: 0.5468
2025-09-26 08:04:43,185 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0256 | Val rms_score: 0.5391
2025-09-26 08:04:58,113 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0308 | Val rms_score: 0.5465
2025-09-26 08:05:11,785 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0284 | Val rms_score: 0.5500
2025-09-26 08:05:25,525 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0294 | Val rms_score: 0.5526
2025-09-26 08:05:39,718 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0270 | Val rms_score: 0.5477
2025-09-26 08:05:53,407 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0275 | Val rms_score: 0.5444
2025-09-26 08:06:07,623 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0257 | Val rms_score: 0.5423
2025-09-26 08:06:21,658 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0272 | Val rms_score: 0.5432
2025-09-26 08:06:34,720 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0293 | Val rms_score: 0.5459
2025-09-26 08:06:48,200 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0260 | Val rms_score: 0.5432
2025-09-26 08:07:01,689 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0263 | Val rms_score: 0.5402
2025-09-26 08:07:15,986 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0278 | Val rms_score: 0.5442
2025-09-26 08:07:28,674 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0269 | Val rms_score: 0.5401
2025-09-26 08:07:42,688 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0254 | Val rms_score: 0.5408
2025-09-26 08:07:57,861 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0258 | Val rms_score: 0.5411
2025-09-26 08:08:10,910 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0265 | Val rms_score: 0.5503
2025-09-26 08:08:24,554 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0270 | Val rms_score: 0.5456
2025-09-26 08:08:38,230 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0277 | Val rms_score: 0.5459
2025-09-26 08:08:52,210 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0312 | Val rms_score: 0.5447
2025-09-26 08:09:06,060 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0240 | Val rms_score: 0.5448
2025-09-26 08:09:19,786 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0254 | Val rms_score: 0.5384
2025-09-26 08:09:33,995 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0245 | Val rms_score: 0.5447
2025-09-26 08:09:47,671 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0252 | Val rms_score: 0.5426
2025-09-26 08:10:01,297 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0243 | Val rms_score: 0.5394
2025-09-26 08:10:13,887 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0244 | Val rms_score: 0.5415
2025-09-26 08:10:27,366 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0249 | Val rms_score: 0.5426
2025-09-26 08:10:41,439 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0246 | Val rms_score: 0.5412
2025-09-26 08:10:53,996 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0260 | Val rms_score: 0.5398
2025-09-26 08:11:07,882 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0245 | Val rms_score: 0.5434
2025-09-26 08:11:22,628 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0246 | Val rms_score: 0.5429
2025-09-26 08:11:36,108 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0218 | Val rms_score: 0.5419
2025-09-26 08:11:50,693 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0247 | Val rms_score: 0.5496
2025-09-26 08:12:04,484 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0223 | Val rms_score: 0.5394
2025-09-26 08:12:18,451 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0238 | Val rms_score: 0.5432
2025-09-26 08:12:32,207 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0232 | Val rms_score: 0.5413
2025-09-26 08:12:45,210 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0233 | Val rms_score: 0.5410
2025-09-26 08:12:58,585 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0223 | Val rms_score: 0.5421
2025-09-26 08:13:13,145 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0239 | Val rms_score: 0.5420
2025-09-26 08:13:27,487 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0228 | Val rms_score: 0.5419
2025-09-26 08:13:39,750 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0205 | Val rms_score: 0.5422
2025-09-26 08:13:53,331 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0239 | Val rms_score: 0.5450
2025-09-26 08:13:54,218 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Test rms_score: 0.4644
2025-09-26 08:13:54,538 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_microsom_stab_r at 2025-09-26_08-13-54
2025-09-26 08:14:07,516 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8346 | Val rms_score: 0.5692
2025-09-26 08:14:07,516 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 68
2025-09-26 08:14:08,201 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5692
2025-09-26 08:14:22,538 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.6597 | Val rms_score: 0.5538
2025-09-26 08:14:22,717 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 136
2025-09-26 08:14:23,302 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5538
2025-09-26 08:14:36,674 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4180 | Val rms_score: 0.5612
2025-09-26 08:14:48,123 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3713 | Val rms_score: 0.5294
2025-09-26 08:14:48,314 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 272
2025-09-26 08:14:48,885 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.5294
2025-09-26 08:15:00,774 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3187 | Val rms_score: 0.5392
2025-09-26 08:15:13,578 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2314 | Val rms_score: 0.5693
2025-09-26 08:15:27,667 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1535 | Val rms_score: 0.5733
2025-09-26 08:15:41,792 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1342 | Val rms_score: 0.5545
2025-09-26 08:15:55,393 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1113 | Val rms_score: 0.5443
2025-09-26 08:16:08,911 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0988 | Val rms_score: 0.5573
2025-09-26 08:16:23,228 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0872 | Val rms_score: 0.5505
2025-09-26 08:16:37,087 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0762 | Val rms_score: 0.5507
2025-09-26 08:16:50,335 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0689 | Val rms_score: 0.5756
2025-09-26 08:17:02,709 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0658 | Val rms_score: 0.5479
2025-09-26 08:17:17,753 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0637 | Val rms_score: 0.5629
2025-09-26 08:17:31,389 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0625 | Val rms_score: 0.5560
2025-09-26 08:17:46,570 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0608 | Val rms_score: 0.5440
2025-09-26 08:18:00,212 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0596 | Val rms_score: 0.5521
2025-09-26 08:18:14,303 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0545 | Val rms_score: 0.5527
2025-09-26 08:18:28,228 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0516 | Val rms_score: 0.5501
2025-09-26 08:18:42,436 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0511 | Val rms_score: 0.5436
2025-09-26 08:18:55,929 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0517 | Val rms_score: 0.5545
2025-09-26 08:19:09,457 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0498 | Val rms_score: 0.5409
2025-09-26 08:19:22,978 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0471 | Val rms_score: 0.5559
2025-09-26 08:19:40,936 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0441 | Val rms_score: 0.5529
2025-09-26 08:19:58,300 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0439 | Val rms_score: 0.5458
2025-09-26 08:20:13,722 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0503 | Val rms_score: 0.5481
2025-09-26 08:20:31,186 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0422 | Val rms_score: 0.5407
2025-09-26 08:20:48,389 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0423 | Val rms_score: 0.5478
2025-09-26 08:21:06,559 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0406 | Val rms_score: 0.5482
2025-09-26 08:21:23,774 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0376 | Val rms_score: 0.5444
2025-09-26 08:21:39,578 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0372 | Val rms_score: 0.5455
2025-09-26 08:21:55,614 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0364 | Val rms_score: 0.5453
2025-09-26 08:22:12,919 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0356 | Val rms_score: 0.5538
2025-09-26 08:22:29,963 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0386 | Val rms_score: 0.5482
2025-09-26 08:22:47,128 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0373 | Val rms_score: 0.5465
2025-09-26 08:23:04,405 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0352 | Val rms_score: 0.5428
2025-09-26 08:23:22,227 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0363 | Val rms_score: 0.5495
2025-09-26 08:23:40,145 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0364 | Val rms_score: 0.5391
2025-09-26 08:23:56,679 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0367 | Val rms_score: 0.5473
2025-09-26 08:24:13,628 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0329 | Val rms_score: 0.5387
2025-09-26 08:24:31,646 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0339 | Val rms_score: 0.5419
2025-09-26 08:24:47,067 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0340 | Val rms_score: 0.5375
2025-09-26 08:25:04,800 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0347 | Val rms_score: 0.5463
2025-09-26 08:25:23,286 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0310 | Val rms_score: 0.5425
2025-09-26 08:25:40,931 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0338 | Val rms_score: 0.5476
2025-09-26 08:25:58,715 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0315 | Val rms_score: 0.5449
2025-09-26 08:26:15,271 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0295 | Val rms_score: 0.5436
2025-09-26 08:26:33,127 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0303 | Val rms_score: 0.5459
2025-09-26 08:26:50,824 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0271 | Val rms_score: 0.5431
2025-09-26 08:27:08,428 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0296 | Val rms_score: 0.5412
2025-09-26 08:27:25,805 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0297 | Val rms_score: 0.5397
2025-09-26 08:27:43,332 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0222 | Val rms_score: 0.5387
2025-09-26 08:27:59,417 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0280 | Val rms_score: 0.5430
2025-09-26 08:28:16,677 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0293 | Val rms_score: 0.5496
2025-09-26 08:28:34,527 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0298 | Val rms_score: 0.5341
2025-09-26 08:28:52,894 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0278 | Val rms_score: 0.5469
2025-09-26 08:29:10,103 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0272 | Val rms_score: 0.5437
2025-09-26 08:29:29,110 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0286 | Val rms_score: 0.5310
2025-09-26 08:29:45,642 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0280 | Val rms_score: 0.5366
2025-09-26 08:30:01,556 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0301 | Val rms_score: 0.5404
2025-09-26 08:30:19,627 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0275 | Val rms_score: 0.5392
2025-09-26 08:30:37,665 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0247 | Val rms_score: 0.5416
2025-09-26 08:30:55,013 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0269 | Val rms_score: 0.5433
2025-09-26 08:31:12,095 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0250 | Val rms_score: 0.5379
2025-09-26 08:31:28,490 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0248 | Val rms_score: 0.5421
2025-09-26 08:31:46,914 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0247 | Val rms_score: 0.5396
2025-09-26 08:32:04,717 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0265 | Val rms_score: 0.5402
2025-09-26 08:32:22,257 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0262 | Val rms_score: 0.5446
2025-09-26 08:32:39,134 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0259 | Val rms_score: 0.5391
2025-09-26 08:32:56,382 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0248 | Val rms_score: 0.5420
2025-09-26 08:33:13,453 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0262 | Val rms_score: 0.5420
2025-09-26 08:33:30,715 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0262 | Val rms_score: 0.5422
2025-09-26 08:33:49,664 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0258 | Val rms_score: 0.5368
2025-09-26 08:34:06,913 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0256 | Val rms_score: 0.5370
2025-09-26 08:34:23,656 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0236 | Val rms_score: 0.5431
2025-09-26 08:34:40,005 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0233 | Val rms_score: 0.5391
2025-09-26 08:34:57,855 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0247 | Val rms_score: 0.5381
2025-09-26 08:35:16,000 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0236 | Val rms_score: 0.5405
2025-09-26 08:35:33,703 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0237 | Val rms_score: 0.5415
2025-09-26 08:35:51,042 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0216 | Val rms_score: 0.5366
2025-09-26 08:36:09,401 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0246 | Val rms_score: 0.5386
2025-09-26 08:36:27,393 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0233 | Val rms_score: 0.5396
2025-09-26 08:36:44,994 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0202 | Val rms_score: 0.5384
2025-09-26 08:37:02,379 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0239 | Val rms_score: 0.5431
2025-09-26 08:37:19,584 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0221 | Val rms_score: 0.5368
2025-09-26 08:37:37,075 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0234 | Val rms_score: 0.5358
2025-09-26 08:37:54,840 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0229 | Val rms_score: 0.5386
2025-09-26 08:38:13,810 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0225 | Val rms_score: 0.5386
2025-09-26 08:38:31,065 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0209 | Val rms_score: 0.5417
2025-09-26 08:38:48,301 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0240 | Val rms_score: 0.5379
2025-09-26 08:39:05,846 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0227 | Val rms_score: 0.5334
2025-09-26 08:39:23,419 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0228 | Val rms_score: 0.5358
2025-09-26 08:39:41,269 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0242 | Val rms_score: 0.5366
2025-09-26 08:39:58,591 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0230 | Val rms_score: 0.5341
2025-09-26 08:40:16,047 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0222 | Val rms_score: 0.5341
2025-09-26 08:40:33,216 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0215 | Val rms_score: 0.5369
2025-09-26 08:40:50,625 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0212 | Val rms_score: 0.5364
2025-09-26 08:41:08,043 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0227 | Val rms_score: 0.5397
2025-09-26 08:41:26,317 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0217 | Val rms_score: 0.5390
2025-09-26 08:41:27,447 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Test rms_score: 0.4432
2025-09-26 08:41:27,764 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_microsom_stab_r at 2025-09-26_08-41-27
2025-09-26 08:41:43,333 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.9559 | Val rms_score: 0.5904
2025-09-26 08:41:43,333 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 68
2025-09-26 08:41:44,210 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5904
2025-09-26 08:42:02,091 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.6597 | Val rms_score: 0.6026
2025-09-26 08:42:18,141 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.5508 | Val rms_score: 0.5719
2025-09-26 08:42:18,286 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 204
2025-09-26 08:42:18,856 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.5719
2025-09-26 08:42:36,763 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4706 | Val rms_score: 0.5304
2025-09-26 08:42:36,948 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 272
2025-09-26 08:42:37,525 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.5304
2025-09-26 08:42:54,665 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3469 | Val rms_score: 0.5380
2025-09-26 08:43:11,889 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2852 | Val rms_score: 0.5660
2025-09-26 08:43:28,861 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2086 | Val rms_score: 0.5399
2025-09-26 08:43:46,147 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1761 | Val rms_score: 0.5607
2025-09-26 08:44:04,148 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1719 | Val rms_score: 0.5615
2025-09-26 08:44:20,279 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1149 | Val rms_score: 0.5455
2025-09-26 08:44:37,286 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1055 | Val rms_score: 0.5547
2025-09-26 08:44:55,376 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0898 | Val rms_score: 0.5643
2025-09-26 08:45:11,324 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0850 | Val rms_score: 0.5527
2025-09-26 08:45:28,201 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0847 | Val rms_score: 0.5365
2025-09-26 08:45:46,850 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0809 | Val rms_score: 0.5457
2025-09-26 08:46:04,327 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0790 | Val rms_score: 0.5533
2025-09-26 08:46:22,294 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0639 | Val rms_score: 0.5377
2025-09-26 08:46:37,776 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0684 | Val rms_score: 0.5563
2025-09-26 08:46:54,852 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0616 | Val rms_score: 0.5416
2025-09-26 08:47:12,558 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0552 | Val rms_score: 0.5482
2025-09-26 08:47:29,751 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0516 | Val rms_score: 0.5551
2025-09-26 08:47:46,804 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0506 | Val rms_score: 0.5508
2025-09-26 08:48:04,197 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0520 | Val rms_score: 0.5555
2025-09-26 08:48:20,275 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0508 | Val rms_score: 0.5601
2025-09-26 08:48:36,395 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0473 | Val rms_score: 0.5486
2025-09-26 08:48:53,914 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0508 | Val rms_score: 0.5483
2025-09-26 08:49:11,878 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0469 | Val rms_score: 0.5418
2025-09-26 08:49:29,687 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0383 | Val rms_score: 0.5552
2025-09-26 08:49:46,908 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0441 | Val rms_score: 0.5436
2025-09-26 08:50:03,942 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0471 | Val rms_score: 0.5462
2025-09-26 08:50:20,617 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0369 | Val rms_score: 0.5563
2025-09-26 08:50:38,848 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0416 | Val rms_score: 0.5420
2025-09-26 08:50:56,664 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0428 | Val rms_score: 0.5429
2025-09-26 08:51:14,167 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0400 | Val rms_score: 0.5338
2025-09-26 08:51:31,263 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0384 | Val rms_score: 0.5388
2025-09-26 08:51:47,138 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0410 | Val rms_score: 0.5446
2025-09-26 08:52:05,084 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0381 | Val rms_score: 0.5418
2025-09-26 08:52:23,025 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0391 | Val rms_score: 0.5498
2025-09-26 08:52:40,942 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0364 | Val rms_score: 0.5445
2025-09-26 08:52:57,421 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0402 | Val rms_score: 0.5435
2025-09-26 08:53:14,949 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0370 | Val rms_score: 0.5473
2025-09-26 08:53:30,508 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0349 | Val rms_score: 0.5379
2025-09-26 08:53:48,236 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0345 | Val rms_score: 0.5520
2025-09-26 08:54:06,049 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0365 | Val rms_score: 0.5379
2025-09-26 08:54:24,347 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0352 | Val rms_score: 0.5370
2025-09-26 08:54:41,345 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0319 | Val rms_score: 0.5478
2025-09-26 08:54:59,469 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0340 | Val rms_score: 0.5448
2025-09-26 08:55:16,795 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0320 | Val rms_score: 0.5375
2025-09-26 08:55:34,590 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0334 | Val rms_score: 0.5310
2025-09-26 08:55:51,419 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0306 | Val rms_score: 0.5384
2025-09-26 08:56:08,624 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0303 | Val rms_score: 0.5411
2025-09-26 08:56:25,533 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0341 | Val rms_score: 0.5515
2025-09-26 08:56:43,084 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0391 | Val rms_score: 0.5367
2025-09-26 08:57:00,812 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0310 | Val rms_score: 0.5444
2025-09-26 08:57:17,156 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0320 | Val rms_score: 0.5447
2025-09-26 08:57:34,947 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0309 | Val rms_score: 0.5424
2025-09-26 08:57:53,054 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0294 | Val rms_score: 0.5387
2025-09-26 08:58:10,333 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0282 | Val rms_score: 0.5394
2025-09-26 08:58:29,415 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0303 | Val rms_score: 0.5473
2025-09-26 08:58:46,921 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0286 | Val rms_score: 0.5435
2025-09-26 08:59:04,572 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0285 | Val rms_score: 0.5384
2025-09-26 08:59:22,130 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0312 | Val rms_score: 0.5401
2025-09-26 08:59:39,641 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0291 | Val rms_score: 0.5422
2025-09-26 08:59:57,294 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0284 | Val rms_score: 0.5381
2025-09-26 09:00:14,589 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0275 | Val rms_score: 0.5355
2025-09-26 09:00:32,435 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0283 | Val rms_score: 0.5414
2025-09-26 09:00:49,874 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0300 | Val rms_score: 0.5476
2025-09-26 09:01:06,472 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0293 | Val rms_score: 0.5408
2025-09-26 09:01:24,032 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0272 | Val rms_score: 0.5373
2025-09-26 09:01:41,054 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0268 | Val rms_score: 0.5380
2025-09-26 09:01:58,482 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0283 | Val rms_score: 0.5406
2025-09-26 09:02:16,375 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0281 | Val rms_score: 0.5381
2025-09-26 09:02:32,594 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0265 | Val rms_score: 0.5410
2025-09-26 09:02:51,284 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0273 | Val rms_score: 0.5346
2025-09-26 09:03:08,338 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0280 | Val rms_score: 0.5388
2025-09-26 09:03:26,099 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0275 | Val rms_score: 0.5306
2025-09-26 09:03:42,826 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0258 | Val rms_score: 0.5374
2025-09-26 09:04:00,509 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0261 | Val rms_score: 0.5379
2025-09-26 09:04:18,210 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0267 | Val rms_score: 0.5374
2025-09-26 09:04:35,060 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0287 | Val rms_score: 0.5433
2025-09-26 09:04:52,448 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0287 | Val rms_score: 0.5362
2025-09-26 09:05:09,476 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0247 | Val rms_score: 0.5354
2025-09-26 09:05:26,831 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0275 | Val rms_score: 0.5411
2025-09-26 09:05:44,576 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0267 | Val rms_score: 0.5357
2025-09-26 09:06:01,259 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0252 | Val rms_score: 0.5397
2025-09-26 09:06:18,317 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0249 | Val rms_score: 0.5394
2025-09-26 09:06:35,549 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0247 | Val rms_score: 0.5429
2025-09-26 09:06:52,302 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0249 | Val rms_score: 0.5363
2025-09-26 09:07:10,876 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0258 | Val rms_score: 0.5326
2025-09-26 09:07:27,941 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0245 | Val rms_score: 0.5425
2025-09-26 09:07:45,367 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0245 | Val rms_score: 0.5372
2025-09-26 09:08:03,053 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0250 | Val rms_score: 0.5359
2025-09-26 09:08:19,881 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0247 | Val rms_score: 0.5409
2025-09-26 09:08:37,290 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0233 | Val rms_score: 0.5397
2025-09-26 09:08:54,892 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0236 | Val rms_score: 0.5380
2025-09-26 09:09:12,506 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0251 | Val rms_score: 0.5354
2025-09-26 09:09:30,004 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0238 | Val rms_score: 0.5402
2025-09-26 09:09:47,577 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0250 | Val rms_score: 0.5358
|
| 324 |
+
2025-09-26 09:10:04,215 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0234 | Val rms_score: 0.5329
|
| 325 |
+
2025-09-26 09:10:21,397 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0239 | Val rms_score: 0.5338
|
| 326 |
+
2025-09-26 09:10:22,550 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Test rms_score: 0.4980
|
| 327 |
+
2025-09-26 09:10:22,996 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.4685, Std Dev: 0.0225
|
logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_adme_permeability_epochs100_batch_size32_20250926_091023.log
ADDED
@@ -0,0 +1,419 @@
2025-09-26 09:10:23,009 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_permeability
2025-09-26 09:10:23,009 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - dataset: adme_permeability, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
2025-09-26 09:10:23,014 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_permeability at 2025-09-26_09-10-23
2025-09-26 09:10:38,602 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.0149 | Val rms_score: 0.5667
2025-09-26 09:10:38,602 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 67
2025-09-26 09:10:39,443 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5667
2025-09-26 09:10:56,116 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5699 | Val rms_score: 0.5435
2025-09-26 09:10:56,306 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 134
2025-09-26 09:10:56,875 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5435
2025-09-26 09:11:13,889 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.0082 | Val rms_score: 0.5189
2025-09-26 09:11:14,072 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 201
2025-09-26 09:11:14,608 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.5189
2025-09-26 09:11:31,342 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4552 | Val rms_score: 0.5086
2025-09-26 09:11:31,521 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 268
2025-09-26 09:11:32,096 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.5086
2025-09-26 09:11:47,444 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3411 | Val rms_score: 0.4820
2025-09-26 09:11:47,592 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 335
2025-09-26 09:11:48,140 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.4820
2025-09-26 09:12:05,560 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1250 | Val rms_score: 0.5246
2025-09-26 09:12:23,248 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2612 | Val rms_score: 0.4746
2025-09-26 09:12:23,398 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 469
2025-09-26 09:12:23,952 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.4746
2025-09-26 09:12:40,899 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1675 | Val rms_score: 0.4658
2025-09-26 09:12:41,180 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 536
2025-09-26 09:12:41,721 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.4658
2025-09-26 09:12:59,142 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0990 | Val rms_score: 0.4657
2025-09-26 09:12:59,364 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 603
2025-09-26 09:12:59,912 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.4657
2025-09-26 09:13:17,261 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1222 | Val rms_score: 0.4627
2025-09-26 09:13:17,445 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 670
2025-09-26 09:13:18,006 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.4627
2025-09-26 09:13:34,632 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1077 | Val rms_score: 0.4649
2025-09-26 09:13:52,461 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0703 | Val rms_score: 0.4753
2025-09-26 09:14:09,599 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0877 | Val rms_score: 0.4547
2025-09-26 09:14:09,754 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 871
2025-09-26 09:14:10,394 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val rms_score: 0.4547
2025-09-26 09:14:28,003 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0802 | Val rms_score: 0.4588
2025-09-26 09:14:43,955 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0574 | Val rms_score: 0.4579
2025-09-26 09:15:01,218 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0667 | Val rms_score: 0.4531
2025-09-26 09:15:01,672 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 1072
2025-09-26 09:15:02,227 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 16 with val rms_score: 0.4531
2025-09-26 09:15:19,998 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0661 | Val rms_score: 0.4580
2025-09-26 09:15:37,505 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0680 | Val rms_score: 0.4658
2025-09-26 09:15:54,059 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0634 | Val rms_score: 0.4591
2025-09-26 09:16:10,408 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0512 | Val rms_score: 0.4562
2025-09-26 09:16:27,688 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0458 | Val rms_score: 0.4550
2025-09-26 09:16:45,774 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0459 | Val rms_score: 0.4606
2025-09-26 09:17:02,456 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0467 | Val rms_score: 0.4528
2025-09-26 09:17:02,613 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 1541
2025-09-26 09:17:03,171 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 23 with val rms_score: 0.4528
2025-09-26 09:17:20,583 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0500 | Val rms_score: 0.4560
2025-09-26 09:17:37,458 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0443 | Val rms_score: 0.4543
2025-09-26 09:17:55,144 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0426 | Val rms_score: 0.4649
2025-09-26 09:18:12,758 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0410 | Val rms_score: 0.4622
2025-09-26 09:18:29,978 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0385 | Val rms_score: 0.4549
2025-09-26 09:18:45,746 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0411 | Val rms_score: 0.4572
2025-09-26 09:19:03,846 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0387 | Val rms_score: 0.4546
2025-09-26 09:19:21,149 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0359 | Val rms_score: 0.4570
2025-09-26 09:19:39,138 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0337 | Val rms_score: 0.4588
2025-09-26 09:19:56,052 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0289 | Val rms_score: 0.4522
2025-09-26 09:19:56,213 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 2211
2025-09-26 09:19:56,771 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 33 with val rms_score: 0.4522
2025-09-26 09:20:13,883 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0336 | Val rms_score: 0.4613
2025-09-26 09:20:31,081 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0375 | Val rms_score: 0.4597
2025-09-26 09:20:48,579 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0339 | Val rms_score: 0.4585
2025-09-26 09:21:06,318 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0324 | Val rms_score: 0.4554
2025-09-26 09:21:21,588 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0333 | Val rms_score: 0.4586
2025-09-26 09:21:37,900 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0325 | Val rms_score: 0.4550
2025-09-26 09:21:54,661 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0289 | Val rms_score: 0.4534
2025-09-26 09:22:11,969 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0304 | Val rms_score: 0.4556
2025-09-26 09:22:29,726 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0273 | Val rms_score: 0.4514
2025-09-26 09:22:29,875 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 2814
2025-09-26 09:22:30,481 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 42 with val rms_score: 0.4514
2025-09-26 09:22:48,107 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0282 | Val rms_score: 0.4506
2025-09-26 09:22:48,259 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 2881
2025-09-26 09:22:48,865 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 43 with val rms_score: 0.4506
2025-09-26 09:23:05,336 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0299 | Val rms_score: 0.4513
2025-09-26 09:23:22,803 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0310 | Val rms_score: 0.4526
2025-09-26 09:23:40,344 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0269 | Val rms_score: 0.4482
2025-09-26 09:23:41,034 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 3082
2025-09-26 09:23:41,612 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 46 with val rms_score: 0.4482
2025-09-26 09:23:59,375 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0279 | Val rms_score: 0.4496
2025-09-26 09:24:16,613 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0270 | Val rms_score: 0.4514
2025-09-26 09:24:34,110 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0290 | Val rms_score: 0.4511
2025-09-26 09:24:51,545 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0286 | Val rms_score: 0.4532
2025-09-26 09:25:08,829 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0273 | Val rms_score: 0.4505
2025-09-26 09:25:25,928 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0253 | Val rms_score: 0.4467
2025-09-26 09:25:26,138 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 3484
2025-09-26 09:25:26,733 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 52 with val rms_score: 0.4467
2025-09-26 09:25:43,704 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0271 | Val rms_score: 0.4482
2025-09-26 09:26:01,322 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0241 | Val rms_score: 0.4472
2025-09-26 09:26:18,708 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0252 | Val rms_score: 0.4482
2025-09-26 09:26:35,958 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0242 | Val rms_score: 0.4485
2025-09-26 09:26:53,615 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0237 | Val rms_score: 0.4489
2025-09-26 09:27:09,140 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0257 | Val rms_score: 0.4505
2025-09-26 09:27:26,518 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0256 | Val rms_score: 0.4480
2025-09-26 09:27:45,052 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0256 | Val rms_score: 0.4523
2025-09-26 09:28:02,485 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0252 | Val rms_score: 0.4517
2025-09-26 09:28:20,415 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0250 | Val rms_score: 0.4524
2025-09-26 09:28:36,262 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0219 | Val rms_score: 0.4499
2025-09-26 09:28:53,395 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0236 | Val rms_score: 0.4511
2025-09-26 09:29:10,474 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0233 | Val rms_score: 0.4476
2025-09-26 09:29:28,174 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0225 | Val rms_score: 0.4458
2025-09-26 09:29:28,629 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 4422
2025-09-26 09:29:29,270 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 66 with val rms_score: 0.4458
2025-09-26 09:29:46,935 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0243 | Val rms_score: 0.4487
2025-09-26 09:30:02,975 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0226 | Val rms_score: 0.4493
2025-09-26 09:30:19,006 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0248 | Val rms_score: 0.4477
2025-09-26 09:30:35,658 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0224 | Val rms_score: 0.4463
2025-09-26 09:30:53,243 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0245 | Val rms_score: 0.4474
2025-09-26 09:31:11,515 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0226 | Val rms_score: 0.4454
2025-09-26 09:31:11,673 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 4824
2025-09-26 09:31:12,476 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 72 with val rms_score: 0.4454
2025-09-26 09:31:29,755 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0227 | Val rms_score: 0.4459
2025-09-26 09:31:46,235 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0230 | Val rms_score: 0.4480
2025-09-26 09:32:04,118 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0206 | Val rms_score: 0.4448
2025-09-26 09:32:04,271 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 5025
2025-09-26 09:32:04,930 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 75 with val rms_score: 0.4448
2025-09-26 09:32:22,796 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0220 | Val rms_score: 0.4504
2025-09-26 09:32:40,233 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0213 | Val rms_score: 0.4487
2025-09-26 09:32:57,960 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0215 | Val rms_score: 0.4498
2025-09-26 09:33:15,983 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0223 | Val rms_score: 0.4483
2025-09-26 09:33:32,836 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0233 | Val rms_score: 0.4444
2025-09-26 09:33:33,009 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 5360
2025-09-26 09:33:33,555 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 80 with val rms_score: 0.4444
2025-09-26 09:33:51,674 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0236 | Val rms_score: 0.4463
2025-09-26 09:34:08,266 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0220 | Val rms_score: 0.4461
2025-09-26 09:34:26,330 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0213 | Val rms_score: 0.4486
2025-09-26 09:34:43,646 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0205 | Val rms_score: 0.4504
2025-09-26 09:35:00,955 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0215 | Val rms_score: 0.4503
2025-09-26 09:35:18,780 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0222 | Val rms_score: 0.4488
2025-09-26 09:35:35,141 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0198 | Val rms_score: 0.4462
2025-09-26 09:35:52,678 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0198 | Val rms_score: 0.4488
2025-09-26 09:36:09,100 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0206 | Val rms_score: 0.4441
2025-09-26 09:36:09,299 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 5963
2025-09-26 09:36:09,876 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 89 with val rms_score: 0.4441
2025-09-26 09:36:28,382 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0211 | Val rms_score: 0.4463
2025-09-26 09:36:45,839 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0218 | Val rms_score: 0.4447
2025-09-26 09:37:03,427 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0204 | Val rms_score: 0.4450
2025-09-26 09:37:21,451 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0175 | Val rms_score: 0.4449
2025-09-26 09:37:37,529 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0204 | Val rms_score: 0.4420
2025-09-26 09:37:37,681 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 6298
2025-09-26 09:37:38,234 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 94 with val rms_score: 0.4420
2025-09-26 09:37:56,094 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0254 | Val rms_score: 0.4426
2025-09-26 09:38:12,138 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0217 | Val rms_score: 0.4457
2025-09-26 09:38:28,489 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0195 | Val rms_score: 0.4431
2025-09-26 09:38:46,134 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0217 | Val rms_score: 0.4435
2025-09-26 09:39:02,340 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0204 | Val rms_score: 0.4456
2025-09-26 09:39:19,621 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0205 | Val rms_score: 0.4439
2025-09-26 09:39:20,811 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Test rms_score: 0.5316
2025-09-26 09:39:21,160 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_permeability at 2025-09-26_09-39-21
|
| 152 |
+
2025-09-26 09:39:35,499 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8060 | Val rms_score: 0.5636
|
| 153 |
+
2025-09-26 09:39:35,499 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 67
|
| 154 |
+
2025-09-26 09:39:36,333 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5636
|
| 155 |
+
2025-09-26 09:39:53,265 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.6029 | Val rms_score: 0.5206
|
| 156 |
+
2025-09-26 09:39:53,444 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 134
|
| 157 |
+
2025-09-26 09:39:54,059 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5206
|
| 158 |
+
2025-09-26 09:40:12,366 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.0479 | Val rms_score: 0.5002
|
| 159 |
+
2025-09-26 09:40:12,554 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 201
|
| 160 |
+
2025-09-26 09:40:13,103 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.5002
|
| 161 |
+
2025-09-26 09:40:30,452 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3787 | Val rms_score: 0.5503
|
| 162 |
+
2025-09-26 09:40:47,520 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3107 | Val rms_score: 0.4897
|
| 163 |
+
2025-09-26 09:40:47,702 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 335
|
| 164 |
+
2025-09-26 09:40:48,258 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.4897
|
| 165 |
+
2025-09-26 09:41:05,637 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1084 | Val rms_score: 0.5038
|
| 166 |
+
2025-09-26 09:41:24,229 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1866 | Val rms_score: 0.4879
|
| 167 |
+
2025-09-26 09:41:24,381 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 469
|
| 168 |
+
2025-09-26 09:41:24,921 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.4879
|
| 169 |
+
2025-09-26 09:41:40,797 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1554 | Val rms_score: 0.4893
|
| 170 |
+
2025-09-26 09:41:58,205 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0996 | Val rms_score: 0.4778
|
| 171 |
+
2025-09-26 09:41:58,368 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 603
|
| 172 |
+
2025-09-26 09:41:58,981 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.4778
|
| 173 |
+
2025-09-26 09:42:16,293 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1119 | Val rms_score: 0.4683
|
| 174 |
+
2025-09-26 09:42:16,485 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 670
|
| 175 |
+
2025-09-26 09:42:17,030 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.4683
|
| 176 |
+
2025-09-26 09:42:34,418 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1005 | Val rms_score: 0.4817
|
| 177 |
+
2025-09-26 09:42:52,528 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0884 | Val rms_score: 0.4854
|
| 178 |
+
2025-09-26 09:43:08,085 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0746 | Val rms_score: 0.4688
|
| 179 |
+
2025-09-26 09:43:25,624 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0715 | Val rms_score: 0.4784
|
| 180 |
+
2025-09-26 09:43:42,349 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0781 | Val rms_score: 0.4866
|
| 181 |
+
2025-09-26 09:43:59,175 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0583 | Val rms_score: 0.4792
|
| 182 |
+
2025-09-26 09:44:17,472 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0585 | Val rms_score: 0.4760
2025-09-26 09:44:33,896 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0651 | Val rms_score: 0.4811
2025-09-26 09:44:51,789 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0532 | Val rms_score: 0.4697
2025-09-26 09:45:09,129 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0482 | Val rms_score: 0.4655
2025-09-26 09:45:09,293 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 1340
2025-09-26 09:45:09,895 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 20 with val rms_score: 0.4655
2025-09-26 09:45:27,258 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0508 | Val rms_score: 0.4655
2025-09-26 09:45:45,522 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0473 | Val rms_score: 0.4678
2025-09-26 09:46:01,084 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0415 | Val rms_score: 0.4645
2025-09-26 09:46:01,234 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 1541
2025-09-26 09:46:01,797 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 23 with val rms_score: 0.4645
2025-09-26 09:46:19,804 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0381 | Val rms_score: 0.4612
2025-09-26 09:46:19,961 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 1608
2025-09-26 09:46:20,509 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 24 with val rms_score: 0.4612
2025-09-26 09:46:37,410 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0392 | Val rms_score: 0.4637
2025-09-26 09:46:54,990 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0352 | Val rms_score: 0.4743
2025-09-26 09:47:12,509 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0356 | Val rms_score: 0.4654
2025-09-26 09:47:30,189 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0392 | Val rms_score: 0.4698
2025-09-26 09:47:47,535 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0362 | Val rms_score: 0.4736
2025-09-26 09:48:05,790 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0311 | Val rms_score: 0.4655
2025-09-26 09:48:23,514 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0324 | Val rms_score: 0.4614
2025-09-26 09:48:42,145 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0320 | Val rms_score: 0.4689
2025-09-26 09:48:59,762 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0298 | Val rms_score: 0.4643
2025-09-26 09:49:17,325 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0331 | Val rms_score: 0.4657
2025-09-26 09:49:35,338 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0340 | Val rms_score: 0.4722
2025-09-26 09:49:53,366 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0366 | Val rms_score: 0.4664
2025-09-26 09:50:11,854 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0329 | Val rms_score: 0.4675
2025-09-26 09:50:30,168 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0329 | Val rms_score: 0.4642
2025-09-26 09:50:48,098 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0306 | Val rms_score: 0.4641
2025-09-26 09:51:06,028 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0317 | Val rms_score: 0.4664
2025-09-26 09:51:23,377 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0303 | Val rms_score: 0.4656
2025-09-26 09:51:41,896 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0391 | Val rms_score: 0.4743
2025-09-26 09:51:58,038 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0306 | Val rms_score: 0.4650
2025-09-26 09:52:15,530 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0304 | Val rms_score: 0.4673
2025-09-26 09:52:33,096 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0294 | Val rms_score: 0.4615
2025-09-26 09:52:50,805 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0279 | Val rms_score: 0.4603
2025-09-26 09:52:51,281 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 3082
2025-09-26 09:52:51,858 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 46 with val rms_score: 0.4603
2025-09-26 09:53:09,612 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0287 | Val rms_score: 0.4650
2025-09-26 09:53:28,122 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0273 | Val rms_score: 0.4618
2025-09-26 09:53:45,122 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0257 | Val rms_score: 0.4627
2025-09-26 09:54:02,928 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0258 | Val rms_score: 0.4635
2025-09-26 09:54:19,725 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0259 | Val rms_score: 0.4609
2025-09-26 09:54:38,070 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0261 | Val rms_score: 0.4623
2025-09-26 09:54:54,456 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0242 | Val rms_score: 0.4599
2025-09-26 09:54:54,613 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 3551
2025-09-26 09:54:55,234 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 53 with val rms_score: 0.4599
2025-09-26 09:55:13,066 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0263 | Val rms_score: 0.4616
2025-09-26 09:55:29,129 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0231 | Val rms_score: 0.4612
2025-09-26 09:55:46,618 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0246 | Val rms_score: 0.4637
2025-09-26 09:56:04,466 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0235 | Val rms_score: 0.4595
2025-09-26 09:56:04,641 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 3819
2025-09-26 09:56:05,231 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 57 with val rms_score: 0.4595
2025-09-26 09:56:23,308 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0246 | Val rms_score: 0.4647
2025-09-26 09:56:40,159 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0237 | Val rms_score: 0.4636
2025-09-26 09:56:58,734 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0277 | Val rms_score: 0.4590
2025-09-26 09:56:58,900 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 4020
2025-09-26 09:56:59,493 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 60 with val rms_score: 0.4590
2025-09-26 09:57:17,319 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0273 | Val rms_score: 0.4571
2025-09-26 09:57:17,834 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 4087
2025-09-26 09:57:18,404 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 61 with val rms_score: 0.4571
2025-09-26 09:57:36,385 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0229 | Val rms_score: 0.4557
2025-09-26 09:57:36,609 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 4154
2025-09-26 09:57:37,189 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 62 with val rms_score: 0.4557
2025-09-26 09:57:54,511 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0257 | Val rms_score: 0.4551
2025-09-26 09:57:54,701 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 4221
2025-09-26 09:57:55,258 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 63 with val rms_score: 0.4551
2025-09-26 09:58:12,165 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0243 | Val rms_score: 0.4559
2025-09-26 09:58:29,215 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0237 | Val rms_score: 0.4595
2025-09-26 09:58:45,435 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0229 | Val rms_score: 0.4604
2025-09-26 09:59:03,406 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0225 | Val rms_score: 0.4582
2025-09-26 09:59:21,028 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0226 | Val rms_score: 0.4574
2025-09-26 09:59:38,596 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0194 | Val rms_score: 0.4565
2025-09-26 09:59:55,226 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0224 | Val rms_score: 0.4589
2025-09-26 10:00:11,293 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0208 | Val rms_score: 0.4574
2025-09-26 10:00:29,163 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0239 | Val rms_score: 0.4594
2025-09-26 10:00:45,428 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0209 | Val rms_score: 0.4595
2025-09-26 10:01:02,859 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0207 | Val rms_score: 0.4555
2025-09-26 10:01:21,212 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0208 | Val rms_score: 0.4572
2025-09-26 10:01:38,875 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0216 | Val rms_score: 0.4527
2025-09-26 10:01:39,388 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 5092
2025-09-26 10:01:39,991 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 76 with val rms_score: 0.4527
2025-09-26 10:01:57,239 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0207 | Val rms_score: 0.4528
2025-09-26 10:02:14,848 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0218 | Val rms_score: 0.4511
2025-09-26 10:02:15,014 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 5226
2025-09-26 10:02:15,598 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 78 with val rms_score: 0.4511
2025-09-26 10:02:32,402 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0211 | Val rms_score: 0.4538
2025-09-26 10:02:50,120 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0195 | Val rms_score: 0.4569
2025-09-26 10:03:07,349 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0203 | Val rms_score: 0.4554
2025-09-26 10:03:25,581 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0212 | Val rms_score: 0.4593
2025-09-26 10:03:41,598 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0219 | Val rms_score: 0.4504
2025-09-26 10:03:41,772 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 5561
2025-09-26 10:03:42,333 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 83 with val rms_score: 0.4504
2025-09-26 10:03:58,972 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0197 | Val rms_score: 0.4566
2025-09-26 10:04:16,643 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0202 | Val rms_score: 0.4569
2025-09-26 10:04:33,944 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0200 | Val rms_score: 0.4587
2025-09-26 10:04:51,055 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0197 | Val rms_score: 0.4570
2025-09-26 10:05:07,044 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0215 | Val rms_score: 0.4528
2025-09-26 10:05:23,868 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0222 | Val rms_score: 0.4554
2025-09-26 10:05:42,386 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0207 | Val rms_score: 0.4553
2025-09-26 10:05:59,596 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0210 | Val rms_score: 0.4564
2025-09-26 10:06:17,513 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0203 | Val rms_score: 0.4561
2025-09-26 10:06:35,435 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0203 | Val rms_score: 0.4549
2025-09-26 10:06:52,622 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0196 | Val rms_score: 0.4566
2025-09-26 10:07:09,228 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0198 | Val rms_score: 0.4552
2025-09-26 10:07:26,312 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0194 | Val rms_score: 0.4578
2025-09-26 10:07:44,213 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0209 | Val rms_score: 0.4561
2025-09-26 10:08:01,613 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0200 | Val rms_score: 0.4537
2025-09-26 10:08:18,979 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0232 | Val rms_score: 0.4563
2025-09-26 10:08:34,985 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0241 | Val rms_score: 0.4529
2025-09-26 10:08:36,190 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Test rms_score: 0.5473
2025-09-26 10:08:36,547 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_permeability at 2025-09-26_10-08-36
2025-09-26 10:08:52,255 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.9440 | Val rms_score: 0.5537
2025-09-26 10:08:52,255 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 67
2025-09-26 10:08:52,867 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5537
2025-09-26 10:09:10,488 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5588 | Val rms_score: 0.5460
2025-09-26 10:09:10,687 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 134
2025-09-26 10:09:11,218 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5460
2025-09-26 10:09:27,066 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.0771 | Val rms_score: 0.5169
2025-09-26 10:09:27,247 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 201
2025-09-26 10:09:28,014 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.5169
2025-09-26 10:09:45,459 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4328 | Val rms_score: 0.4927
2025-09-26 10:09:45,635 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 268
2025-09-26 10:09:46,190 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.4927
2025-09-26 10:10:03,605 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3339 | Val rms_score: 0.4818
2025-09-26 10:10:03,909 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 335
2025-09-26 10:10:04,554 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.4818
2025-09-26 10:10:22,263 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.0898 | Val rms_score: 0.4723
2025-09-26 10:10:22,778 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 402
2025-09-26 10:10:23,353 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.4723
2025-09-26 10:10:39,908 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2006 | Val rms_score: 0.4609
2025-09-26 10:10:40,054 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 469
2025-09-26 10:10:40,619 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.4609
2025-09-26 10:10:58,298 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1736 | Val rms_score: 0.4841
2025-09-26 10:11:14,941 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1510 | Val rms_score: 0.4739
2025-09-26 10:11:32,458 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1101 | Val rms_score: 0.4620
2025-09-26 10:11:49,956 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1208 | Val rms_score: 0.5913
2025-09-26 10:12:06,738 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1055 | Val rms_score: 0.4641
2025-09-26 10:12:23,199 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1161 | Val rms_score: 0.4690
2025-09-26 10:12:39,652 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1032 | Val rms_score: 0.4545
2025-09-26 10:12:39,818 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 938
2025-09-26 10:12:40,389 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 14 with val rms_score: 0.4545
2025-09-26 10:12:59,669 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0797 | Val rms_score: 0.4584
2025-09-26 10:13:15,455 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0686 | Val rms_score: 0.4561
2025-09-26 10:13:33,466 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0757 | Val rms_score: 0.4668
2025-09-26 10:13:49,846 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1341 | Val rms_score: 0.5418
2025-09-26 10:14:06,466 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1651 | Val rms_score: 0.4612
2025-09-26 10:14:24,334 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0789 | Val rms_score: 0.4535
2025-09-26 10:14:24,487 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 1340
2025-09-26 10:14:25,048 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 20 with val rms_score: 0.4535
2025-09-26 10:14:40,614 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0703 | Val rms_score: 0.4570
2025-09-26 10:14:57,813 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0648 | Val rms_score: 0.4526
2025-09-26 10:14:57,999 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 1474
2025-09-26 10:14:58,567 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 22 with val rms_score: 0.4526
2025-09-26 10:15:14,923 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0534 | Val rms_score: 0.4487
2025-09-26 10:15:15,117 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 1541
2025-09-26 10:15:15,660 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 23 with val rms_score: 0.4487
2025-09-26 10:15:32,812 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0469 | Val rms_score: 0.4539
2025-09-26 10:15:50,913 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0480 | Val rms_score: 0.4465
2025-09-26 10:15:51,066 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 1675
2025-09-26 10:15:51,695 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 25 with val rms_score: 0.4465
2025-09-26 10:16:09,172 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0445 | Val rms_score: 0.4500
2025-09-26 10:16:27,289 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0471 | Val rms_score: 0.4531
2025-09-26 10:16:43,210 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0445 | Val rms_score: 0.4504
2025-09-26 10:17:00,079 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0412 | Val rms_score: 0.4526
2025-09-26 10:17:17,618 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0355 | Val rms_score: 0.4525
2025-09-26 10:17:34,784 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0394 | Val rms_score: 0.4524
2025-09-26 10:17:53,091 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0389 | Val rms_score: 0.4517
2025-09-26 10:18:10,742 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0380 | Val rms_score: 0.4540
2025-09-26 10:18:27,731 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0394 | Val rms_score: 0.4551
2025-09-26 10:18:44,306 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0339 | Val rms_score: 0.4545
2025-09-26 10:19:01,610 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0319 | Val rms_score: 0.4525
2025-09-26 10:19:19,847 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0312 | Val rms_score: 0.4498
2025-09-26 10:19:35,726 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0329 | Val rms_score: 0.4519
2025-09-26 10:19:52,537 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0349 | Val rms_score: 0.4520
2025-09-26 10:20:09,539 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0329 | Val rms_score: 0.4520
2025-09-26 10:20:26,842 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0309 | Val rms_score: 0.4534
2025-09-26 10:20:44,975 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0315 | Val rms_score: 0.4542
2025-09-26 10:21:01,979 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0319 | Val rms_score: 0.4538
2025-09-26 10:21:18,664 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0304 | Val rms_score: 0.4506
2025-09-26 10:21:36,245 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0293 | Val rms_score: 0.4506
2025-09-26 10:21:53,722 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0303 | Val rms_score: 0.4540
2025-09-26 10:22:11,506 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0284 | Val rms_score: 0.4518
2025-09-26 10:22:29,288 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0277 | Val rms_score: 0.4520
2025-09-26 10:22:46,588 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0268 | Val rms_score: 0.4497
2025-09-26 10:23:02,922 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0312 | Val rms_score: 0.4513
2025-09-26 10:23:20,563 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0255 | Val rms_score: 0.4540
2025-09-26 10:23:38,803 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0303 | Val rms_score: 0.4506
2025-09-26 10:23:56,929 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0276 | Val rms_score: 0.4521
2025-09-26 10:24:12,805 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0315 | Val rms_score: 0.4524
2025-09-26 10:24:30,119 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0279 | Val rms_score: 0.4512
2025-09-26 10:24:47,867 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0257 | Val rms_score: 0.4502
2025-09-26 10:25:05,735 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0294 | Val rms_score: 0.4516
2025-09-26 10:25:22,108 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0283 | Val rms_score: 0.4526
2025-09-26 10:25:39,065 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0264 | Val rms_score: 0.4522
2025-09-26 10:25:56,460 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0287 | Val rms_score: 0.4524
2025-09-26 10:26:13,512 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0283 | Val rms_score: 0.4508
2025-09-26 10:26:31,756 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0273 | Val rms_score: 0.4509
2025-09-26 10:26:49,181 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0240 | Val rms_score: 0.4512
2025-09-26 10:27:06,504 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0250 | Val rms_score: 0.4509
2025-09-26 10:27:22,079 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0249 | Val rms_score: 0.4496
2025-09-26 10:27:39,576 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0245 | Val rms_score: 0.4518
|
| 384 |
+
2025-09-26 10:27:57,816 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0239 | Val rms_score: 0.4480
|
| 385 |
+
2025-09-26 10:28:15,008 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0243 | Val rms_score: 0.4497
|
| 386 |
+
2025-09-26 10:28:31,391 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0263 | Val rms_score: 0.4522
|
| 387 |
+
2025-09-26 10:28:49,122 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0241 | Val rms_score: 0.4492
|
| 388 |
+
2025-09-26 10:29:06,150 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0236 | Val rms_score: 0.4509
|
| 389 |
+
2025-09-26 10:29:22,680 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0241 | Val rms_score: 0.4517
|
| 390 |
+
2025-09-26 10:29:39,032 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0238 | Val rms_score: 0.4503
|
| 391 |
+
2025-09-26 10:29:55,620 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0244 | Val rms_score: 0.4520
|
| 392 |
+
2025-09-26 10:30:14,939 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0234 | Val rms_score: 0.4529
|
| 393 |
+
2025-09-26 10:30:32,181 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0232 | Val rms_score: 0.4529
|
| 394 |
+
2025-09-26 10:30:49,699 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0213 | Val rms_score: 0.4539
|
| 395 |
+
2025-09-26 10:31:07,519 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0233 | Val rms_score: 0.4519
|
| 396 |
+
2025-09-26 10:31:24,165 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0225 | Val rms_score: 0.4513
|
| 397 |
+
2025-09-26 10:31:41,782 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0234 | Val rms_score: 0.4528
|
| 398 |
+
2025-09-26 10:31:59,339 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0242 | Val rms_score: 0.4532
|
| 399 |
+
2025-09-26 10:32:15,939 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0243 | Val rms_score: 0.4517
|
| 400 |
+
2025-09-26 10:32:33,763 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0239 | Val rms_score: 0.4528
|
| 401 |
+
2025-09-26 10:32:49,957 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0220 | Val rms_score: 0.4511
|
| 402 |
+
2025-09-26 10:33:07,681 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0217 | Val rms_score: 0.4512
|
| 403 |
+
2025-09-26 10:33:25,496 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0223 | Val rms_score: 0.4545
|
| 404 |
+
2025-09-26 10:33:44,597 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0224 | Val rms_score: 0.4525
|
| 405 |
+
2025-09-26 10:34:02,644 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0223 | Val rms_score: 0.4529
|
| 406 |
+
2025-09-26 10:34:18,618 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0218 | Val rms_score: 0.4516
|
| 407 |
+
2025-09-26 10:34:38,155 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0223 | Val rms_score: 0.4525
|
| 408 |
+
2025-09-26 10:34:55,066 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0222 | Val rms_score: 0.4511
|
| 409 |
+
2025-09-26 10:35:14,259 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0221 | Val rms_score: 0.4517
|
| 410 |
+
2025-09-26 10:35:31,865 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0215 | Val rms_score: 0.4517
|
| 411 |
+
2025-09-26 10:35:50,086 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0211 | Val rms_score: 0.4518
|
| 412 |
+
2025-09-26 10:36:08,522 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0222 | Val rms_score: 0.4516
|
| 413 |
+
2025-09-26 10:36:24,793 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0223 | Val rms_score: 0.4515
|
| 414 |
+
2025-09-26 10:36:43,379 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0211 | Val rms_score: 0.4512
|
| 415 |
+
2025-09-26 10:37:01,070 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0219 | Val rms_score: 0.4538
|
| 416 |
+
2025-09-26 10:37:16,364 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0219 | Val rms_score: 0.4502
|
| 417 |
+
2025-09-26 10:37:30,655 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0204 | Val rms_score: 0.4505
|
| 418 |
+
2025-09-26 10:37:31,869 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Test rms_score: 0.5479
|
| 419 |
+
2025-09-26 10:37:32,259 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.5423, Std Dev: 0.0076
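The "Final Triplicate Test Results" line aggregates the test rms_score over three independent runs. A minimal sketch of that aggregation, with placeholder per-run scores (only the last run's 0.5479 appears in this excerpt, and whether the pipeline uses population or sample standard deviation is an assumption):

```python
import statistics

# Hypothetical per-run test rms_scores chosen so the mean matches the
# reported 0.5423; only 0.5479 is actually shown in this log excerpt.
run_scores = [0.5479, 0.5350, 0.5440]

avg = statistics.mean(run_scores)
std = statistics.pstdev(run_scores)  # population std dev over the 3 runs

print(f"Final Triplicate Test Results - Avg rms_score: {avg:.4f}, Std Dev: {std:.4f}")
```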
logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_adme_ppb_h_epochs100_batch_size32_20250926_103732.log
ADDED
@@ -0,0 +1,333 @@
2025-09-26 10:37:32,271 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_ppb_h
2025-09-26 10:37:32,272 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - dataset: adme_ppb_h, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
2025-09-26 10:37:32,275 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_ppb_h at 2025-09-26_10-37-32
2025-09-26 10:37:34,791 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 2.3625 | Val rms_score: 0.9001
2025-09-26 10:37:34,792 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 5
2025-09-26 10:37:35,557 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.9001
2025-09-26 10:37:37,993 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.9250 | Val rms_score: 0.5595
2025-09-26 10:37:38,174 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 10
2025-09-26 10:37:38,729 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5595
2025-09-26 10:37:41,715 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.5250 | Val rms_score: 0.5249
2025-09-26 10:37:41,901 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 15
2025-09-26 10:37:42,451 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.5249
2025-09-26 10:37:44,806 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.5312 | Val rms_score: 0.5371
2025-09-26 10:37:46,931 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4625 | Val rms_score: 0.5566
2025-09-26 10:37:49,495 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.4031 | Val rms_score: 0.5827
2025-09-26 10:37:52,521 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3750 | Val rms_score: 0.5971
2025-09-26 10:37:54,531 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.3594 | Val rms_score: 0.5986
2025-09-26 10:37:57,152 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.3453 | Val rms_score: 0.5923
2025-09-26 10:37:59,714 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.3156 | Val rms_score: 0.5878
2025-09-26 10:38:01,939 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.3047 | Val rms_score: 0.5844
2025-09-26 10:38:04,650 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.2781 | Val rms_score: 0.5837
2025-09-26 10:38:06,977 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.2625 | Val rms_score: 0.5871
2025-09-26 10:38:09,337 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.2437 | Val rms_score: 0.5947
2025-09-26 10:38:11,641 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.2266 | Val rms_score: 0.6036
2025-09-26 10:38:13,619 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.2344 | Val rms_score: 0.6088
2025-09-26 10:38:16,785 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.2078 | Val rms_score: 0.6098
2025-09-26 10:38:19,428 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1953 | Val rms_score: 0.6109
2025-09-26 10:38:21,962 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1633 | Val rms_score: 0.6149
2025-09-26 10:38:24,527 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1656 | Val rms_score: 0.6229
2025-09-26 10:38:26,810 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.1492 | Val rms_score: 0.6245
2025-09-26 10:38:29,614 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1406 | Val rms_score: 0.6245
2025-09-26 10:38:31,828 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.1234 | Val rms_score: 0.6298
2025-09-26 10:38:34,202 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.1141 | Val rms_score: 0.6329
2025-09-26 10:38:36,486 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0992 | Val rms_score: 0.6433
2025-09-26 10:38:38,719 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0805 | Val rms_score: 0.6542
2025-09-26 10:38:41,331 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0805 | Val rms_score: 0.6587
2025-09-26 10:38:43,883 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0691 | Val rms_score: 0.6604
2025-09-26 10:38:46,223 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0660 | Val rms_score: 0.6573
2025-09-26 10:38:48,466 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0527 | Val rms_score: 0.6612
2025-09-26 10:38:50,284 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0563 | Val rms_score: 0.6616
2025-09-26 10:38:53,197 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0523 | Val rms_score: 0.6748
2025-09-26 10:38:55,663 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0512 | Val rms_score: 0.6792
2025-09-26 10:38:58,111 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0395 | Val rms_score: 0.6785
2025-09-26 10:39:00,381 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0373 | Val rms_score: 0.6729
2025-09-26 10:39:02,661 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0367 | Val rms_score: 0.6726
2025-09-26 10:39:05,352 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0273 | Val rms_score: 0.6726
2025-09-26 10:39:07,242 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0312 | Val rms_score: 0.6794
2025-09-26 10:39:09,055 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0295 | Val rms_score: 0.6780
2025-09-26 10:39:11,357 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0241 | Val rms_score: 0.6799
2025-09-26 10:39:13,692 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0225 | Val rms_score: 0.6771
2025-09-26 10:39:16,462 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0262 | Val rms_score: 0.6729
2025-09-26 10:39:19,092 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0239 | Val rms_score: 0.6730
2025-09-26 10:39:21,357 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0196 | Val rms_score: 0.6818
2025-09-26 10:39:23,719 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0213 | Val rms_score: 0.6848
2025-09-26 10:39:25,870 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0219 | Val rms_score: 0.6834
2025-09-26 10:39:28,365 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0229 | Val rms_score: 0.6797
2025-09-26 10:39:30,626 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0186 | Val rms_score: 0.6744
2025-09-26 10:39:33,027 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0225 | Val rms_score: 0.6737
2025-09-26 10:39:35,416 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0177 | Val rms_score: 0.6791
2025-09-26 10:39:37,769 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0168 | Val rms_score: 0.6830
2025-09-26 10:39:40,562 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0153 | Val rms_score: 0.6860
2025-09-26 10:39:42,907 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0163 | Val rms_score: 0.6832
2025-09-26 10:39:44,698 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0189 | Val rms_score: 0.6814
2025-09-26 10:39:46,450 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0147 | Val rms_score: 0.6805
2025-09-26 10:39:49,099 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0150 | Val rms_score: 0.6818
2025-09-26 10:39:52,192 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0130 | Val rms_score: 0.6830
2025-09-26 10:39:54,677 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0172 | Val rms_score: 0.6794
2025-09-26 10:39:56,982 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0122 | Val rms_score: 0.6776
2025-09-26 10:39:59,408 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0144 | Val rms_score: 0.6778
2025-09-26 10:40:01,728 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0154 | Val rms_score: 0.6805
2025-09-26 10:40:03,855 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0132 | Val rms_score: 0.6813
2025-09-26 10:40:05,887 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0112 | Val rms_score: 0.6865
2025-09-26 10:40:08,495 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0154 | Val rms_score: 0.6818
2025-09-26 10:40:10,980 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0145 | Val rms_score: 0.6814
2025-09-26 10:40:13,444 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0147 | Val rms_score: 0.6838
2025-09-26 10:40:16,174 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0136 | Val rms_score: 0.6905
2025-09-26 10:40:18,529 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0142 | Val rms_score: 0.6880
2025-09-26 10:40:20,947 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0121 | Val rms_score: 0.6871
2025-09-26 10:40:22,571 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0139 | Val rms_score: 0.6841
2025-09-26 10:40:25,168 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0138 | Val rms_score: 0.6833
2025-09-26 10:40:28,211 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0133 | Val rms_score: 0.6843
2025-09-26 10:40:30,917 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0116 | Val rms_score: 0.6864
2025-09-26 10:40:33,485 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0122 | Val rms_score: 0.6877
2025-09-26 10:40:35,890 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0117 | Val rms_score: 0.6855
2025-09-26 10:40:38,305 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0147 | Val rms_score: 0.6828
2025-09-26 10:40:40,808 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0135 | Val rms_score: 0.6825
2025-09-26 10:40:42,733 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0131 | Val rms_score: 0.6869
2025-09-26 10:40:45,313 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0141 | Val rms_score: 0.6853
2025-09-26 10:40:47,760 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0112 | Val rms_score: 0.6873
2025-09-26 10:40:50,118 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0117 | Val rms_score: 0.6911
2025-09-26 10:40:52,805 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0127 | Val rms_score: 0.6887
2025-09-26 10:40:55,151 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0093 | Val rms_score: 0.6895
2025-09-26 10:40:57,445 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0128 | Val rms_score: 0.6867
2025-09-26 10:40:59,165 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0123 | Val rms_score: 0.6922
2025-09-26 10:41:01,477 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0168 | Val rms_score: 0.6991
2025-09-26 10:41:04,137 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0125 | Val rms_score: 0.7008
2025-09-26 10:41:06,675 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0126 | Val rms_score: 0.6906
2025-09-26 10:41:08,986 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0123 | Val rms_score: 0.6851
2025-09-26 10:41:11,282 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0124 | Val rms_score: 0.6914
2025-09-26 10:41:14,061 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0123 | Val rms_score: 0.6904
2025-09-26 10:41:16,009 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0105 | Val rms_score: 0.6938
2025-09-26 10:41:18,682 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0143 | Val rms_score: 0.6902
2025-09-26 10:41:21,415 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0132 | Val rms_score: 0.6871
2025-09-26 10:41:23,724 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0112 | Val rms_score: 0.6867
2025-09-26 10:41:26,411 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0124 | Val rms_score: 0.6897
2025-09-26 10:41:29,202 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0135 | Val rms_score: 0.6894
2025-09-26 10:41:31,868 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0134 | Val rms_score: 0.6887
2025-09-26 10:41:33,772 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0123 | Val rms_score: 0.6862
2025-09-26 10:41:36,192 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0115 | Val rms_score: 0.6872
2025-09-26 10:41:36,594 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Test rms_score: 0.8550
2025-09-26 10:41:36,952 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_ppb_h at 2025-09-26_10-41-36
2025-09-26 10:41:39,380 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 2.7750 | Val rms_score: 0.9486
2025-09-26 10:41:39,380 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 5
2025-09-26 10:41:40,152 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.9486
2025-09-26 10:41:42,959 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.7688 | Val rms_score: 0.6508
2025-09-26 10:41:43,145 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 10
2025-09-26 10:41:43,717 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.6508
2025-09-26 10:41:46,431 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.6000 | Val rms_score: 0.7326
2025-09-26 10:41:48,158 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.5844 | Val rms_score: 0.6405
2025-09-26 10:41:48,343 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 20
2025-09-26 10:41:48,903 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.6405
2025-09-26 10:41:51,249 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3922 | Val rms_score: 0.6291
2025-09-26 10:41:51,443 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 25
2025-09-26 10:41:52,032 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.6291
2025-09-26 10:41:54,462 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3594 | Val rms_score: 0.6480
2025-09-26 10:41:57,303 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3828 | Val rms_score: 0.6372
2025-09-26 10:41:59,838 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.3156 | Val rms_score: 0.6209
2025-09-26 10:42:00,030 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 40
2025-09-26 10:42:00,613 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.6209
2025-09-26 10:42:03,610 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2906 | Val rms_score: 0.6231
2025-09-26 10:42:06,122 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2547 | Val rms_score: 0.6307
2025-09-26 10:42:08,684 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.2516 | Val rms_score: 0.6346
2025-09-26 10:42:11,797 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.2297 | Val rms_score: 0.6381
2025-09-26 10:42:14,575 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.2219 | Val rms_score: 0.6425
2025-09-26 10:42:17,155 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.2062 | Val rms_score: 0.6480
2025-09-26 10:42:19,796 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1867 | Val rms_score: 0.6563
2025-09-26 10:42:21,461 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1695 | Val rms_score: 0.6625
2025-09-26 10:42:24,057 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1523 | Val rms_score: 0.6665
2025-09-26 10:42:26,378 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1422 | Val rms_score: 0.6711
2025-09-26 10:42:28,873 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1297 | Val rms_score: 0.6739
2025-09-26 10:42:31,238 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1195 | Val rms_score: 0.6758
2025-09-26 10:42:33,577 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.1094 | Val rms_score: 0.6825
2025-09-26 10:42:36,325 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1094 | Val rms_score: 0.6831
2025-09-26 10:42:38,731 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0813 | Val rms_score: 0.6875
2025-09-26 10:42:40,404 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0797 | Val rms_score: 0.6904
2025-09-26 10:42:42,011 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0688 | Val rms_score: 0.6990
2025-09-26 10:42:44,521 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0656 | Val rms_score: 0.7039
2025-09-26 10:42:47,393 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0598 | Val rms_score: 0.7067
2025-09-26 10:42:49,870 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0508 | Val rms_score: 0.7075
2025-09-26 10:42:52,224 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0441 | Val rms_score: 0.7084
2025-09-26 10:42:54,627 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0445 | Val rms_score: 0.7112
2025-09-26 10:42:57,049 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0412 | Val rms_score: 0.7098
2025-09-26 10:42:59,537 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0379 | Val rms_score: 0.7115
2025-09-26 10:43:01,733 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0344 | Val rms_score: 0.7115
2025-09-26 10:43:04,135 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0305 | Val rms_score: 0.7108
2025-09-26 10:43:06,443 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0311 | Val rms_score: 0.7084
2025-09-26 10:43:08,808 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0309 | Val rms_score: 0.7072
2025-09-26 10:43:11,595 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0281 | Val rms_score: 0.7074
2025-09-26 10:43:13,962 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0229 | Val rms_score: 0.7079
2025-09-26 10:43:16,345 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0252 | Val rms_score: 0.7100
2025-09-26 10:43:18,157 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0213 | Val rms_score: 0.7135
2025-09-26 10:43:20,611 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0217 | Val rms_score: 0.7122
2025-09-26 10:43:23,320 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0191 | Val rms_score: 0.7130
2025-09-26 10:43:25,640 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0206 | Val rms_score: 0.7128
2025-09-26 10:43:28,092 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0233 | Val rms_score: 0.7106
2025-09-26 10:43:30,260 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0220 | Val rms_score: 0.7100
2025-09-26 10:43:32,630 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0191 | Val rms_score: 0.7153
2025-09-26 10:43:35,559 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0186 | Val rms_score: 0.7246
2025-09-26 10:43:37,237 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0169 | Val rms_score: 0.7280
2025-09-26 10:43:39,597 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0139 | Val rms_score: 0.7235
|
| 171 |
+
2025-09-26 10:43:41,965 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0177 | Val rms_score: 0.7183
|
| 172 |
+
2025-09-26 10:43:44,272 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0168 | Val rms_score: 0.7196
|
| 173 |
+
2025-09-26 10:43:47,002 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0178 | Val rms_score: 0.7206
|
| 174 |
+
2025-09-26 10:43:49,417 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0191 | Val rms_score: 0.7205
|
| 175 |
+
2025-09-26 10:43:51,876 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0139 | Val rms_score: 0.7203
|
| 176 |
+
2025-09-26 10:43:54,212 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0163 | Val rms_score: 0.7190
|
| 177 |
+
2025-09-26 10:43:56,013 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0137 | Val rms_score: 0.7212
|
| 178 |
+
2025-09-26 10:43:58,821 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0146 | Val rms_score: 0.7240
|
| 179 |
+
2025-09-26 10:44:01,281 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0102 | Val rms_score: 0.7237
|
| 180 |
+
2025-09-26 10:44:03,852 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0164 | Val rms_score: 0.7224
|
| 181 |
+
2025-09-26 10:44:06,235 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0172 | Val rms_score: 0.7207
|
| 182 |
+
2025-09-26 10:44:08,712 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0146 | Val rms_score: 0.7171
|
| 183 |
+
2025-09-26 10:44:11,389 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0117 | Val rms_score: 0.7113
|
| 184 |
+
2025-09-26 10:44:13,494 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0153 | Val rms_score: 0.7171
|
| 185 |
+
2025-09-26 10:44:15,115 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0135 | Val rms_score: 0.7212
|
| 186 |
+
2025-09-26 10:44:17,738 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0146 | Val rms_score: 0.7217
|
| 187 |
+
2025-09-26 10:44:20,472 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0154 | Val rms_score: 0.7226
|
| 188 |
+
2025-09-26 10:44:23,070 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0129 | Val rms_score: 0.7215
|
| 189 |
+
2025-09-26 10:44:25,451 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0136 | Val rms_score: 0.7217
|
| 190 |
+
2025-09-26 10:44:27,929 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0112 | Val rms_score: 0.7193
|
| 191 |
+
2025-09-26 10:44:30,233 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0119 | Val rms_score: 0.7196
|
| 192 |
+
2025-09-26 10:44:32,365 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0104 | Val rms_score: 0.7205
|
| 193 |
+
2025-09-26 10:44:35,003 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0145 | Val rms_score: 0.7215
|
| 194 |
+
2025-09-26 10:44:37,439 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0117 | Val rms_score: 0.7224
|
| 195 |
+
2025-09-26 10:44:40,099 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0128 | Val rms_score: 0.7249
|
| 196 |
+
2025-09-26 10:44:42,487 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0154 | Val rms_score: 0.7260
|
| 197 |
+
2025-09-26 10:44:44,797 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0164 | Val rms_score: 0.7290
|
| 198 |
+
2025-09-26 10:44:47,481 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0125 | Val rms_score: 0.7272
|
| 199 |
+
2025-09-26 10:44:49,850 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0102 | Val rms_score: 0.7238
|
| 200 |
+
2025-09-26 10:44:51,741 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0139 | Val rms_score: 0.7227
|
| 201 |
+
2025-09-26 10:44:54,299 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0126 | Val rms_score: 0.7243
|
| 202 |
+
2025-09-26 10:44:56,612 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0145 | Val rms_score: 0.7234
|
| 203 |
+
2025-09-26 10:44:59,254 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0113 | Val rms_score: 0.7213
|
| 204 |
+
2025-09-26 10:45:01,685 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0122 | Val rms_score: 0.7212
|
| 205 |
+
2025-09-26 10:45:04,074 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0111 | Val rms_score: 0.7243
|
| 206 |
+
2025-09-26 10:45:06,391 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0120 | Val rms_score: 0.7205
|
| 207 |
+
2025-09-26 10:45:08,573 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0108 | Val rms_score: 0.7200
|
| 208 |
+
2025-09-26 10:45:11,217 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0109 | Val rms_score: 0.7200
|
| 209 |
+
2025-09-26 10:45:13,555 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0117 | Val rms_score: 0.7240
|
| 210 |
+
2025-09-26 10:45:15,998 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0117 | Val rms_score: 0.7263
|
| 211 |
+
2025-09-26 10:45:18,563 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0130 | Val rms_score: 0.7245
|
| 212 |
+
2025-09-26 10:45:21,298 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0118 | Val rms_score: 0.7260
|
| 213 |
+
2025-09-26 10:45:24,281 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0121 | Val rms_score: 0.7219
|
| 214 |
+
2025-09-26 10:45:26,263 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0087 | Val rms_score: 0.7236
|
| 215 |
+
2025-09-26 10:45:28,541 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0129 | Val rms_score: 0.7277
|
| 216 |
+
2025-09-26 10:45:30,823 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0109 | Val rms_score: 0.7263
|
| 217 |
+
2025-09-26 10:45:33,134 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0120 | Val rms_score: 0.7245
|
| 218 |
+
2025-09-26 10:45:35,968 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0125 | Val rms_score: 0.7235
|
| 219 |
+
2025-09-26 10:45:38,508 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0108 | Val rms_score: 0.7221
|
| 220 |
+
2025-09-26 10:45:40,931 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0109 | Val rms_score: 0.7219
|
| 221 |
+
2025-09-26 10:45:42,946 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0103 | Val rms_score: 0.7254
|
| 222 |
+
2025-09-26 10:45:43,220 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Test rms_score: 0.7644
|
| 223 |
+
2025-09-26 10:45:43,571 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_ppb_h at 2025-09-26_10-45-43
|
| 224 |
+
2025-09-26 10:45:45,731 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 3.2250 | Val rms_score: 1.0991
|
| 225 |
+
2025-09-26 10:45:45,731 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 5
|
| 226 |
+
2025-09-26 10:45:46,356 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 1.0991
|
| 227 |
+
2025-09-26 10:45:48,921 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.9375 | Val rms_score: 0.6430
|
| 228 |
+
2025-09-26 10:45:49,106 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 10
|
| 229 |
+
2025-09-26 10:45:49,669 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.6430
|
| 230 |
+
2025-09-26 10:45:52,230 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.6750 | Val rms_score: 0.7089
|
| 231 |
+
2025-09-26 10:45:54,510 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.5875 | Val rms_score: 0.5912
|
| 232 |
+
2025-09-26 10:45:54,702 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 20
|
| 233 |
+
2025-09-26 10:45:55,269 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.5912
|
| 234 |
+
2025-09-26 10:45:57,994 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4313 | Val rms_score: 0.5587
|
| 235 |
+
2025-09-26 10:45:58,184 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 25
|
| 236 |
+
2025-09-26 10:45:58,767 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.5587
|
| 237 |
+
2025-09-26 10:46:00,893 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3609 | Val rms_score: 0.5935
|
| 238 |
+
2025-09-26 10:46:03,660 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3531 | Val rms_score: 0.6131
|
| 239 |
+
2025-09-26 10:46:06,347 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.3641 | Val rms_score: 0.6099
|
| 240 |
+
2025-09-26 10:46:08,541 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.3109 | Val rms_score: 0.6059
|
| 241 |
+
2025-09-26 10:46:11,024 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2781 | Val rms_score: 0.6105
|
| 242 |
+
2025-09-26 10:46:13,399 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.2703 | Val rms_score: 0.6135
|
| 243 |
+
2025-09-26 10:46:16,195 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.2484 | Val rms_score: 0.6137
|
| 244 |
+
2025-09-26 10:46:17,942 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.2344 | Val rms_score: 0.6164
|
| 245 |
+
2025-09-26 10:46:20,020 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.2141 | Val rms_score: 0.6181
|
| 246 |
+
2025-09-26 10:46:22,221 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1984 | Val rms_score: 0.6198
|
| 247 |
+
2025-09-26 10:46:24,530 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.2094 | Val rms_score: 0.6228
|
| 248 |
+
2025-09-26 10:46:27,270 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1883 | Val rms_score: 0.6272
|
| 249 |
+
2025-09-26 10:46:29,980 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1891 | Val rms_score: 0.6306
|
| 250 |
+
2025-09-26 10:46:32,353 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1516 | Val rms_score: 0.6334
|
| 251 |
+
2025-09-26 10:46:34,841 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1422 | Val rms_score: 0.6392
|
| 252 |
+
2025-09-26 10:46:36,845 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.1328 | Val rms_score: 0.6433
|
| 253 |
+
2025-09-26 10:46:38,841 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1156 | Val rms_score: 0.6518
|
| 254 |
+
2025-09-26 10:46:41,156 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.1023 | Val rms_score: 0.6586
|
| 255 |
+
2025-09-26 10:46:43,471 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0949 | Val rms_score: 0.6672
|
| 256 |
+
2025-09-26 10:46:45,792 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0914 | Val rms_score: 0.6687
|
| 257 |
+
2025-09-26 10:46:48,184 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0891 | Val rms_score: 0.6745
|
| 258 |
+
2025-09-26 10:46:50,893 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0691 | Val rms_score: 0.6770
|
| 259 |
+
2025-09-26 10:46:53,318 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0703 | Val rms_score: 0.6826
|
| 260 |
+
2025-09-26 10:46:55,655 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0664 | Val rms_score: 0.6858
|
| 261 |
+
2025-09-26 10:46:57,686 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0586 | Val rms_score: 0.6883
|
| 262 |
+
2025-09-26 10:46:59,981 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0484 | Val rms_score: 0.6892
|
| 263 |
+
2025-09-26 10:47:02,759 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0488 | Val rms_score: 0.6933
|
| 264 |
+
2025-09-26 10:47:05,160 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0457 | Val rms_score: 0.6959
|
| 265 |
+
2025-09-26 10:47:07,447 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0484 | Val rms_score: 0.6976
|
| 266 |
+
2025-09-26 10:47:09,800 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0469 | Val rms_score: 0.6946
|
| 267 |
+
2025-09-26 10:47:12,108 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0330 | Val rms_score: 0.6952
|
| 268 |
+
2025-09-26 10:47:14,346 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0340 | Val rms_score: 0.7032
|
| 269 |
+
2025-09-26 10:47:16,349 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0396 | Val rms_score: 0.7038
|
| 270 |
+
2025-09-26 10:47:18,987 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0348 | Val rms_score: 0.7002
|
| 271 |
+
2025-09-26 10:47:21,661 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0340 | Val rms_score: 0.6960
|
| 272 |
+
2025-09-26 10:47:24,188 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0268 | Val rms_score: 0.6974
|
| 273 |
+
2025-09-26 10:47:27,089 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0297 | Val rms_score: 0.6977
|
| 274 |
+
2025-09-26 10:47:29,533 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0254 | Val rms_score: 0.6957
|
| 275 |
+
2025-09-26 10:47:31,993 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0283 | Val rms_score: 0.6963
|
| 276 |
+
2025-09-26 10:47:33,850 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0309 | Val rms_score: 0.6977
|
| 277 |
+
2025-09-26 10:47:36,140 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0254 | Val rms_score: 0.6997
|
| 278 |
+
2025-09-26 10:47:38,782 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0303 | Val rms_score: 0.6973
|
| 279 |
+
2025-09-26 10:47:41,097 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0168 | Val rms_score: 0.6922
|
| 280 |
+
2025-09-26 10:47:43,488 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0228 | Val rms_score: 0.6870
|
| 281 |
+
2025-09-26 10:47:45,955 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0218 | Val rms_score: 0.6957
|
| 282 |
+
2025-09-26 10:47:48,494 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0264 | Val rms_score: 0.7023
|
| 283 |
+
2025-09-26 10:47:50,592 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0252 | Val rms_score: 0.6942
|
| 284 |
+
2025-09-26 10:47:52,451 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0246 | Val rms_score: 0.6857
|
| 285 |
+
2025-09-26 10:47:55,158 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0244 | Val rms_score: 0.6885
|
| 286 |
+
2025-09-26 10:47:57,634 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0204 | Val rms_score: 0.7016
|
| 287 |
+
2025-09-26 10:48:00,012 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0209 | Val rms_score: 0.7072
|
| 288 |
+
2025-09-26 10:48:02,887 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0198 | Val rms_score: 0.7025
|
| 289 |
+
2025-09-26 10:48:05,368 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0215 | Val rms_score: 0.6978
|
| 290 |
+
2025-09-26 10:48:07,789 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0187 | Val rms_score: 0.6952
|
| 291 |
+
2025-09-26 10:48:09,769 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0187 | Val rms_score: 0.6901
|
| 292 |
+
2025-09-26 10:48:11,902 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0237 | Val rms_score: 0.6925
|
| 293 |
+
2025-09-26 10:48:14,706 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0227 | Val rms_score: 0.6940
|
| 294 |
+
2025-09-26 10:48:17,368 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0168 | Val rms_score: 0.6927
|
| 295 |
+
2025-09-26 10:48:19,629 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0162 | Val rms_score: 0.6906
|
| 296 |
+
2025-09-26 10:48:22,038 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0219 | Val rms_score: 0.6882
|
| 297 |
+
2025-09-26 10:48:24,528 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0139 | Val rms_score: 0.6912
|
| 298 |
+
2025-09-26 10:48:27,147 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0174 | Val rms_score: 0.6908
|
| 299 |
+
2025-09-26 10:48:28,764 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0143 | Val rms_score: 0.6912
|
| 300 |
+
2025-09-26 10:48:30,977 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0179 | Val rms_score: 0.6904
|
| 301 |
+
2025-09-26 10:48:33,474 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0186 | Val rms_score: 0.6945
|
| 302 |
+
2025-09-26 10:48:35,832 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0167 | Val rms_score: 0.6966
|
| 303 |
+
2025-09-26 10:48:38,597 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0176 | Val rms_score: 0.7028
|
| 304 |
+
2025-09-26 10:48:41,015 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0187 | Val rms_score: 0.7017
|
| 305 |
+
2025-09-26 10:48:43,345 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0164 | Val rms_score: 0.6918
|
| 306 |
+
2025-09-26 10:48:45,684 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0168 | Val rms_score: 0.6852
|
| 307 |
+
2025-09-26 10:48:47,414 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0144 | Val rms_score: 0.6911
|
| 308 |
+
2025-09-26 10:48:50,059 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0174 | Val rms_score: 0.6953
|
| 309 |
+
2025-09-26 10:48:52,449 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0136 | Val rms_score: 0.6922
|
| 310 |
+
2025-09-26 10:48:54,729 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0176 | Val rms_score: 0.6919
|
| 311 |
+
2025-09-26 10:48:57,030 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0119 | Val rms_score: 0.6949
|
| 312 |
+
2025-09-26 10:48:59,286 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0133 | Val rms_score: 0.7000
|
| 313 |
+
2025-09-26 10:49:02,012 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0174 | Val rms_score: 0.6992
|
| 314 |
+
2025-09-26 10:49:04,271 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0171 | Val rms_score: 0.6968
|
| 315 |
+
2025-09-26 10:49:06,167 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0127 | Val rms_score: 0.6947
|
| 316 |
+
2025-09-26 10:49:08,578 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0183 | Val rms_score: 0.6949
|
| 317 |
+
2025-09-26 10:49:10,946 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0140 | Val rms_score: 0.6968
|
| 318 |
+
2025-09-26 10:49:13,821 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0146 | Val rms_score: 0.6940
|
| 319 |
+
2025-09-26 10:49:16,171 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0176 | Val rms_score: 0.6941
|
| 320 |
+
2025-09-26 10:49:17,953 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0160 | Val rms_score: 0.6904
|
| 321 |
+
2025-09-26 10:49:20,628 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0137 | Val rms_score: 0.6873
|
| 322 |
+
2025-09-26 10:49:23,433 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0152 | Val rms_score: 0.6868
|
| 323 |
+
2025-09-26 10:49:25,999 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0153 | Val rms_score: 0.6868
|
| 324 |
+
2025-09-26 10:49:28,374 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0121 | Val rms_score: 0.6884
|
| 325 |
+
2025-09-26 10:49:30,695 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0097 | Val rms_score: 0.6901
|
| 326 |
+
2025-09-26 10:49:33,065 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0125 | Val rms_score: 0.6889
|
| 327 |
+
2025-09-26 10:49:35,338 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0130 | Val rms_score: 0.6875
|
| 328 |
+
2025-09-26 10:49:38,063 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0134 | Val rms_score: 0.6926
|
| 329 |
+
2025-09-26 10:49:40,491 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0129 | Val rms_score: 0.6978
|
| 330 |
+
2025-09-26 10:49:42,803 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0129 | Val rms_score: 0.6952
|
| 331 |
+
2025-09-26 10:49:44,577 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0125 | Val rms_score: 0.6896
|
| 332 |
+
2025-09-26 10:49:44,890 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Test rms_score: 0.7927
|
| 333 |
+
2025-09-26 10:49:45,269 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.8041, Std Dev: 0.0378
|
logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_adme_ppb_r_epochs100_batch_size32_20250926_104945.log
ADDED
@@ -0,0 +1,389 @@
2025-09-26 10:49:45,278 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_ppb_r
|
| 2 |
+
2025-09-26 10:49:45,278 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - dataset: adme_ppb_r, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
|
| 3 |
+
2025-09-26 10:49:45,291 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_ppb_r at 2025-09-26_10-49-45
|
| 4 |
+
2025-09-26 10:49:47,460 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 2.9375 | Val rms_score: 0.6280
|
| 5 |
+
2025-09-26 10:49:47,460 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 5
|
| 6 |
+
2025-09-26 10:49:48,180 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.6280
|
| 7 |
+
2025-09-26 10:49:50,847 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.9250 | Val rms_score: 0.5080
|
| 8 |
+
2025-09-26 10:49:51,025 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 10
|
| 9 |
+
2025-09-26 10:49:51,583 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5080
|
| 10 |
+
2025-09-26 10:49:54,472 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.5938 | Val rms_score: 0.5055
|
| 11 |
+
2025-09-26 10:49:54,670 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 15
|
| 12 |
+
2025-09-26 10:49:55,266 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.5055
|
| 13 |
+
2025-09-26 10:49:57,604 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.5781 | Val rms_score: 0.5046
|
| 14 |
+
2025-09-26 10:49:57,791 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 20
|
| 15 |
+
2025-09-26 10:49:58,436 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.5046
2025-09-26 10:50:00,575 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.5687 | Val rms_score: 0.4752
2025-09-26 10:50:00,767 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 25
2025-09-26 10:50:01,336 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.4752
2025-09-26 10:50:04,175 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.5437 | Val rms_score: 0.4251
2025-09-26 10:50:04,657 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 30
2025-09-26 10:50:05,253 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.4251
2025-09-26 10:50:07,659 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.4813 | Val rms_score: 0.3954
2025-09-26 10:50:07,855 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 35
2025-09-26 10:50:08,422 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.3954
2025-09-26 10:50:11,198 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.4437 | Val rms_score: 0.3885
2025-09-26 10:50:11,386 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 40
2025-09-26 10:50:11,980 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.3885
2025-09-26 10:50:14,520 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.4344 | Val rms_score: 0.3802
2025-09-26 10:50:14,780 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 45
2025-09-26 10:50:15,383 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.3802
2025-09-26 10:50:17,812 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.4125 | Val rms_score: 0.3770
2025-09-26 10:50:18,003 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 50
2025-09-26 10:50:18,554 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.3770
2025-09-26 10:50:21,352 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.4219 | Val rms_score: 0.3791
2025-09-26 10:50:24,347 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.4656 | Val rms_score: 0.3798
2025-09-26 10:50:26,943 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.4313 | Val rms_score: 0.3864
2025-09-26 10:50:29,556 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.3344 | Val rms_score: 0.3919
2025-09-26 10:50:31,859 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.3000 | Val rms_score: 0.3584
2025-09-26 10:50:32,060 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 75
2025-09-26 10:50:32,686 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 15 with val rms_score: 0.3584
2025-09-26 10:50:35,528 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.2891 | Val rms_score: 0.3423
2025-09-26 10:50:36,032 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 80
2025-09-26 10:50:36,606 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 16 with val rms_score: 0.3423
2025-09-26 10:50:38,947 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.2547 | Val rms_score: 0.3473
2025-09-26 10:50:41,211 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.2562 | Val rms_score: 0.3600
2025-09-26 10:50:43,543 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.2234 | Val rms_score: 0.3868
2025-09-26 10:50:46,029 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1891 | Val rms_score: 0.4114
2025-09-26 10:50:48,301 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.2875 | Val rms_score: 0.3908
2025-09-26 10:50:50,216 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1297 | Val rms_score: 0.3833
2025-09-26 10:50:52,652 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.1336 | Val rms_score: 0.4090
2025-09-26 10:50:54,987 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.1055 | Val rms_score: 0.4612
2025-09-26 10:50:57,344 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.1062 | Val rms_score: 0.4710
2025-09-26 10:50:59,722 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0895 | Val rms_score: 0.4372
2025-09-26 10:51:02,301 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0875 | Val rms_score: 0.4281
2025-09-26 10:51:04,666 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0594 | Val rms_score: 0.4336
2025-09-26 10:51:07,055 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0527 | Val rms_score: 0.4425
2025-09-26 10:51:08,936 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0852 | Val rms_score: 0.4457
2025-09-26 10:51:10,653 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0443 | Val rms_score: 0.4465
2025-09-26 10:51:13,632 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0375 | Val rms_score: 0.4488
2025-09-26 10:51:16,106 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0346 | Val rms_score: 0.4476
2025-09-26 10:51:18,594 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0305 | Val rms_score: 0.4502
2025-09-26 10:51:20,990 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0283 | Val rms_score: 0.4452
2025-09-26 10:51:23,335 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0258 | Val rms_score: 0.4340
2025-09-26 10:51:26,112 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0206 | Val rms_score: 0.4283
2025-09-26 10:51:27,803 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0256 | Val rms_score: 0.4252
2025-09-26 10:51:29,466 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0301 | Val rms_score: 0.4258
2025-09-26 10:51:31,942 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0191 | Val rms_score: 0.4269
2025-09-26 10:51:34,191 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0223 | Val rms_score: 0.4317
2025-09-26 10:51:36,908 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0227 | Val rms_score: 0.4335
2025-09-26 10:51:39,281 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0180 | Val rms_score: 0.4331
2025-09-26 10:51:41,647 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0210 | Val rms_score: 0.4321
2025-09-26 10:51:43,972 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0212 | Val rms_score: 0.4339
2025-09-26 10:51:46,302 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0183 | Val rms_score: 0.4358
2025-09-26 10:51:48,654 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0169 | Val rms_score: 0.4360
2025-09-26 10:51:51,004 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0201 | Val rms_score: 0.4317
2025-09-26 10:51:53,233 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0167 | Val rms_score: 0.4293
2025-09-26 10:51:55,632 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0187 | Val rms_score: 0.4276
2025-09-26 10:51:58,030 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0185 | Val rms_score: 0.4241
2025-09-26 10:52:00,716 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0169 | Val rms_score: 0.4226
2025-09-26 10:52:03,062 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0188 | Val rms_score: 0.4289
2025-09-26 10:52:05,404 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0203 | Val rms_score: 0.4352
2025-09-26 10:52:07,107 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0147 | Val rms_score: 0.4327
2025-09-26 10:52:09,385 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0200 | Val rms_score: 0.4234
2025-09-26 10:52:11,529 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0153 | Val rms_score: 0.4171
2025-09-26 10:52:13,962 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0196 | Val rms_score: 0.4156
2025-09-26 10:52:16,343 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0178 | Val rms_score: 0.4199
2025-09-26 10:52:18,770 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0173 | Val rms_score: 0.4295
2025-09-26 10:52:21,087 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0203 | Val rms_score: 0.4245
2025-09-26 10:52:23,839 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0166 | Val rms_score: 0.4184
2025-09-26 10:52:25,645 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0174 | Val rms_score: 0.4189
2025-09-26 10:52:27,665 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0170 | Val rms_score: 0.4259
2025-09-26 10:52:30,165 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0199 | Val rms_score: 0.4289
2025-09-26 10:52:32,650 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0157 | Val rms_score: 0.4259
2025-09-26 10:52:35,400 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0135 | Val rms_score: 0.4205
2025-09-26 10:52:37,732 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0138 | Val rms_score: 0.4190
2025-09-26 10:52:40,030 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0166 | Val rms_score: 0.4225
2025-09-26 10:52:42,396 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0155 | Val rms_score: 0.4246
2025-09-26 10:52:44,309 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0146 | Val rms_score: 0.4281
2025-09-26 10:52:46,913 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0135 | Val rms_score: 0.4291
2025-09-26 10:52:49,236 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0146 | Val rms_score: 0.4253
2025-09-26 10:52:51,531 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0154 | Val rms_score: 0.4242
2025-09-26 10:52:53,969 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0123 | Val rms_score: 0.4234
2025-09-26 10:52:56,467 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0118 | Val rms_score: 0.4219
2025-09-26 10:52:59,118 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0154 | Val rms_score: 0.4259
2025-09-26 10:53:01,540 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0143 | Val rms_score: 0.4331
2025-09-26 10:53:03,271 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0156 | Val rms_score: 0.4354
2025-09-26 10:53:05,580 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0136 | Val rms_score: 0.4320
2025-09-26 10:53:07,886 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0150 | Val rms_score: 0.4279
2025-09-26 10:53:10,596 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0143 | Val rms_score: 0.4246
2025-09-26 10:53:12,886 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0136 | Val rms_score: 0.4251
2025-09-26 10:53:15,261 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0129 | Val rms_score: 0.4244
2025-09-26 10:53:17,575 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0160 | Val rms_score: 0.4261
2025-09-26 10:53:19,860 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0126 | Val rms_score: 0.4276
2025-09-26 10:53:22,347 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0133 | Val rms_score: 0.4297
2025-09-26 10:53:24,775 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0145 | Val rms_score: 0.4279
2025-09-26 10:53:27,864 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0151 | Val rms_score: 0.4286
2025-09-26 10:53:30,168 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0157 | Val rms_score: 0.4280
2025-09-26 10:53:32,580 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0160 | Val rms_score: 0.4249
2025-09-26 10:53:35,393 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0129 | Val rms_score: 0.4244
2025-09-26 10:53:37,395 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0119 | Val rms_score: 0.4255
2025-09-26 10:53:39,690 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0162 | Val rms_score: 0.4261
2025-09-26 10:53:42,069 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0127 | Val rms_score: 0.4245
2025-09-26 10:53:44,316 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0123 | Val rms_score: 0.4233
2025-09-26 10:53:46,903 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0141 | Val rms_score: 0.4206
2025-09-26 10:53:49,221 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0137 | Val rms_score: 0.4204
2025-09-26 10:53:51,669 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0134 | Val rms_score: 0.4229
2025-09-26 10:53:54,203 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0136 | Val rms_score: 0.4214
2025-09-26 10:53:54,539 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Test rms_score: 0.7305
2025-09-26 10:53:54,921 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_ppb_r at 2025-09-26_10-53-54
2025-09-26 10:53:57,319 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 4.0250 | Val rms_score: 0.6272
2025-09-26 10:53:57,319 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 5
2025-09-26 10:53:57,964 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.6272
2025-09-26 10:54:00,974 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 1.0125 | Val rms_score: 0.5343
2025-09-26 10:54:01,162 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 10
2025-09-26 10:54:01,734 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5343
2025-09-26 10:54:04,094 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.6500 | Val rms_score: 0.5753
2025-09-26 10:54:06,568 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.5844 | Val rms_score: 0.5682
2025-09-26 10:54:08,854 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.6875 | Val rms_score: 0.4641
2025-09-26 10:54:09,042 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 25
2025-09-26 10:54:09,627 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.4641
2025-09-26 10:54:11,257 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.6406 | Val rms_score: 0.3789
2025-09-26 10:54:11,835 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 30
2025-09-26 10:54:12,432 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.3789
2025-09-26 10:54:15,105 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.5750 | Val rms_score: 0.3845
2025-09-26 10:54:17,936 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.5375 | Val rms_score: 0.3791
2025-09-26 10:54:20,584 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.4531 | Val rms_score: 0.3738
2025-09-26 10:54:20,840 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 45
2025-09-26 10:54:21,403 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.3738
2025-09-26 10:54:23,782 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.6125 | Val rms_score: 0.3726
2025-09-26 10:54:23,986 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 50
2025-09-26 10:54:24,577 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.3726
2025-09-26 10:54:26,727 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.4031 | Val rms_score: 0.3703
2025-09-26 10:54:27,217 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 55
2025-09-26 10:54:27,793 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 11 with val rms_score: 0.3703
2025-09-26 10:54:30,545 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.4094 | Val rms_score: 0.3605
2025-09-26 10:54:30,827 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 60
2025-09-26 10:54:31,431 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 12 with val rms_score: 0.3605
2025-09-26 10:54:34,048 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.3125 | Val rms_score: 0.3482
2025-09-26 10:54:34,258 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 65
2025-09-26 10:54:34,897 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val rms_score: 0.3482
2025-09-26 10:54:37,632 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.3109 | Val rms_score: 0.3402
2025-09-26 10:54:37,826 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 70
2025-09-26 10:54:38,447 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 14 with val rms_score: 0.3402
2025-09-26 10:54:40,911 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.2750 | Val rms_score: 0.3390
2025-09-26 10:54:41,106 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 75
2025-09-26 10:54:41,673 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 15 with val rms_score: 0.3390
2025-09-26 10:54:44,178 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.3094 | Val rms_score: 0.3375
2025-09-26 10:54:44,689 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 80
2025-09-26 10:54:45,248 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 16 with val rms_score: 0.3375
2025-09-26 10:54:47,908 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.2297 | Val rms_score: 0.3382
2025-09-26 10:54:50,256 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.2078 | Val rms_score: 0.3352
2025-09-26 10:54:50,453 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 90
2025-09-26 10:54:51,041 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 18 with val rms_score: 0.3352
2025-09-26 10:54:52,905 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1883 | Val rms_score: 0.3363
2025-09-26 10:54:55,346 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1734 | Val rms_score: 0.3387
2025-09-26 10:54:57,599 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.2016 | Val rms_score: 0.3426
2025-09-26 10:54:59,500 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1492 | Val rms_score: 0.3467
2025-09-26 10:55:01,756 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.1375 | Val rms_score: 0.3535
2025-09-26 10:55:04,021 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.1242 | Val rms_score: 0.3530
2025-09-26 10:55:06,461 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.1102 | Val rms_score: 0.3564
2025-09-26 10:55:08,954 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.1172 | Val rms_score: 0.3582
2025-09-26 10:55:11,578 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.1141 | Val rms_score: 0.3582
2025-09-26 10:55:13,876 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0906 | Val rms_score: 0.3600
2025-09-26 10:55:16,097 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0754 | Val rms_score: 0.3726
2025-09-26 10:55:17,908 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0754 | Val rms_score: 0.3687
2025-09-26 10:55:20,391 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0605 | Val rms_score: 0.3725
2025-09-26 10:55:23,086 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0578 | Val rms_score: 0.3781
2025-09-26 10:55:25,402 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0586 | Val rms_score: 0.3881
2025-09-26 10:55:27,782 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0484 | Val rms_score: 0.4007
2025-09-26 10:55:30,435 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0410 | Val rms_score: 0.3992
2025-09-26 10:55:32,908 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0400 | Val rms_score: 0.3975
2025-09-26 10:55:34,912 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0512 | Val rms_score: 0.4010
2025-09-26 10:55:37,022 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0387 | Val rms_score: 0.4046
2025-09-26 10:55:39,531 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0287 | Val rms_score: 0.4111
2025-09-26 10:55:42,158 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0279 | Val rms_score: 0.4167
2025-09-26 10:55:44,791 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0316 | Val rms_score: 0.4140
2025-09-26 10:55:47,566 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0383 | Val rms_score: 0.4107
2025-09-26 10:55:49,952 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0271 | Val rms_score: 0.4104
2025-09-26 10:55:52,346 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0285 | Val rms_score: 0.4169
2025-09-26 10:55:54,132 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0299 | Val rms_score: 0.4156
2025-09-26 10:55:56,513 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0264 | Val rms_score: 0.4142
2025-09-26 10:55:59,494 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0285 | Val rms_score: 0.4081
2025-09-26 10:56:01,918 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0219 | Val rms_score: 0.4045
2025-09-26 10:56:04,215 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0209 | Val rms_score: 0.4084
2025-09-26 10:56:06,925 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0239 | Val rms_score: 0.4111
2025-09-26 10:56:09,173 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0194 | Val rms_score: 0.4118
2025-09-26 10:56:11,091 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0229 | Val rms_score: 0.4126
2025-09-26 10:56:13,084 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0215 | Val rms_score: 0.4154
2025-09-26 10:56:15,602 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0205 | Val rms_score: 0.4127
2025-09-26 10:56:18,090 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0247 | Val rms_score: 0.4124
2025-09-26 10:56:20,429 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0242 | Val rms_score: 0.4114
2025-09-26 10:56:23,100 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0214 | Val rms_score: 0.4081
2025-09-26 10:56:25,390 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0183 | Val rms_score: 0.4072
2025-09-26 10:56:27,751 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0196 | Val rms_score: 0.4063
2025-09-26 10:56:29,620 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0214 | Val rms_score: 0.4074
2025-09-26 10:56:32,113 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0246 | Val rms_score: 0.4098
2025-09-26 10:56:34,956 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0199 | Val rms_score: 0.4093
2025-09-26 10:56:37,178 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0156 | Val rms_score: 0.4065
2025-09-26 10:56:39,547 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0160 | Val rms_score: 0.4085
2025-09-26 10:56:41,973 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0229 | Val rms_score: 0.4073
2025-09-26 10:56:44,367 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0172 | Val rms_score: 0.4077
2025-09-26 10:56:47,051 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0172 | Val rms_score: 0.4075
2025-09-26 10:56:48,961 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0182 | Val rms_score: 0.4075
2025-09-26 10:56:51,309 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0195 | Val rms_score: 0.4040
2025-09-26 10:56:53,658 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0193 | Val rms_score: 0.4024
2025-09-26 10:56:56,004 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0155 | Val rms_score: 0.4027
2025-09-26 10:56:58,703 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0194 | Val rms_score: 0.4014
2025-09-26 10:57:01,052 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0135 | Val rms_score: 0.4017
2025-09-26 10:57:03,348 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0223 | Val rms_score: 0.4028
2025-09-26 10:57:05,845 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0163 | Val rms_score: 0.4018
2025-09-26 10:57:07,808 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0139 | Val rms_score: 0.4005
2025-09-26 10:57:10,675 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0119 | Val rms_score: 0.3992
2025-09-26 10:57:12,934 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0174 | Val rms_score: 0.3987
2025-09-26 10:57:15,357 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0173 | Val rms_score: 0.3981
2025-09-26 10:57:17,716 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0228 | Val rms_score: 0.3994
2025-09-26 10:57:20,056 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0184 | Val rms_score: 0.4018
2025-09-26 10:57:22,772 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0151 | Val rms_score: 0.4023
2025-09-26 10:57:24,445 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0155 | Val rms_score: 0.4034
2025-09-26 10:57:27,230 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0149 | Val rms_score: 0.4061
2025-09-26 10:57:29,799 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0121 | Val rms_score: 0.4089
2025-09-26 10:57:31,910 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0113 | Val rms_score: 0.4112
2025-09-26 10:57:34,535 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0139 | Val rms_score: 0.4111
2025-09-26 10:57:36,945 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0184 | Val rms_score: 0.4086
2025-09-26 10:57:39,839 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0152 | Val rms_score: 0.4069
2025-09-26 10:57:41,595 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0155 | Val rms_score: 0.4087
2025-09-26 10:57:44,225 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0168 | Val rms_score: 0.4104
2025-09-26 10:57:47,345 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0183 | Val rms_score: 0.4128
2025-09-26 10:57:49,806 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0204 | Val rms_score: 0.4123
2025-09-26 10:57:52,119 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0150 | Val rms_score: 0.4083
2025-09-26 10:57:54,491 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0131 | Val rms_score: 0.4080
2025-09-26 10:57:56,921 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0161 | Val rms_score: 0.4070
2025-09-26 10:57:59,613 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0145 | Val rms_score: 0.4068
2025-09-26 10:58:01,532 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0138 | Val rms_score: 0.4065
2025-09-26 10:58:03,891 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0138 | Val rms_score: 0.4069
2025-09-26 10:58:06,160 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0128 | Val rms_score: 0.4070
2025-09-26 10:58:06,623 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Test rms_score: 0.8018
2025-09-26 10:58:06,973 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_ppb_r at 2025-09-26_10-58-06
2025-09-26 10:58:09,156 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 3.0500 | Val rms_score: 0.5530
2025-09-26 10:58:09,157 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 5
2025-09-26 10:58:09,881 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5530
2025-09-26 10:58:12,466 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5687 | Val rms_score: 0.6406
2025-09-26 10:58:15,002 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.7750 | Val rms_score: 0.5458
2025-09-26 10:58:15,183 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 15
2025-09-26 10:58:15,743 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.5458
2025-09-26 10:58:18,187 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.5125 | Val rms_score: 0.4641
2025-09-26 10:58:18,374 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 20
2025-09-26 10:58:18,983 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.4641
2025-09-26 10:58:21,569 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.5031 | Val rms_score: 0.4565
2025-09-26 10:58:21,754 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 25
2025-09-26 10:58:22,352 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.4565
2025-09-26 10:58:24,769 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.5062 | Val rms_score: 0.4280
2025-09-26 10:58:25,290 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 30
2025-09-26 10:58:25,831 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.4280
2025-09-26 10:58:28,634 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.4125 | Val rms_score: 0.3994
2025-09-26 10:58:28,821 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 35
2025-09-26 10:58:29,434 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.3994
2025-09-26 10:58:31,838 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.4281 | Val rms_score: 0.3781
2025-09-26 10:58:32,069 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 40
2025-09-26 10:58:32,661 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.3781
2025-09-26 10:58:35,525 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.3516 | Val rms_score: 0.3675
2025-09-26 10:58:35,711 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 45
2025-09-26 10:58:36,356 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.3675
2025-09-26 10:58:38,873 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.3484 | Val rms_score: 0.3642
2025-09-26 10:58:39,067 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 50
2025-09-26 10:58:39,626 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.3642
2025-09-26 10:58:42,370 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.3359 | Val rms_score: 0.3591
2025-09-26 10:58:42,876 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 55
2025-09-26 10:58:43,485 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 11 with val rms_score: 0.3591
2025-09-26 10:58:46,162 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.5750 | Val rms_score: 0.3566
2025-09-26 10:58:46,354 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 60
2025-09-26 10:58:46,920 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 12 with val rms_score: 0.3566
2025-09-26 10:58:48,531 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.2969 | Val rms_score: 0.3521
2025-09-26 10:58:48,727 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 65
2025-09-26 10:58:49,323 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val rms_score: 0.3521
2025-09-26 10:58:52,140 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.3187 | Val rms_score: 0.3368
2025-09-26 10:58:52,329 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 70
2025-09-26 10:58:52,898 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 14 with val rms_score: 0.3368
2025-09-26 10:58:55,457 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.2250 | Val rms_score: 0.3287
2025-09-26 10:58:55,651 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 75
2025-09-26 10:58:56,358 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 15 with val rms_score: 0.3287
2025-09-26 10:58:58,953 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.2203 | Val rms_score: 0.3252
2025-09-26 10:58:59,482 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 80
2025-09-26 10:59:00,104 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 16 with val rms_score: 0.3252
2025-09-26 10:59:02,619 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1891 | Val rms_score: 0.3274
2025-09-26 10:59:04,449 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1602 | Val rms_score: 0.3281
2025-09-26 10:59:06,270 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1656 | Val rms_score: 0.3306
2025-09-26 10:59:09,277 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1352 | Val rms_score: 0.3315
2025-09-26 10:59:11,889 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.1359 | Val rms_score: 0.3356
2025-09-26 10:59:14,529 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1375 | Val rms_score: 0.3417
2025-09-26 10:59:16,859 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.1297 | Val rms_score: 0.3452
2025-09-26 10:59:19,207 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.1125 | Val rms_score: 0.3498
2025-09-26 10:59:21,797 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.1008 | Val rms_score: 0.3541
2025-09-26 10:59:23,536 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0984 | Val rms_score: 0.3618
2025-09-26 10:59:26,219 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0867 | Val rms_score: 0.3688
2025-09-26 10:59:28,485 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0855 | Val rms_score: 0.3705
2025-09-26 10:59:30,943 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.1313 | Val rms_score: 0.3715
2025-09-26 10:59:33,395 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0742 | Val rms_score: 0.3725
2025-09-26 10:59:35,736 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0684 | Val rms_score: 0.3710
2025-09-26 10:59:38,479 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0680 | Val rms_score: 0.3757
2025-09-26 10:59:40,157 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0625 | Val rms_score: 0.3826
2025-09-26 10:59:41,641 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.1406 | Val rms_score: 0.3856
2025-09-26 10:59:44,500 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0559 | Val rms_score: 0.3925
2025-09-26 10:59:47,441 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0484 | Val rms_score: 0.3940
2025-09-26 10:59:50,692 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0820 | Val rms_score: 0.3958
2025-09-26 10:59:53,176 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0461 | Val rms_score: 0.4026
2025-09-26 10:59:55,567 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0459 | Val rms_score: 0.4052
2025-09-26 10:59:57,640 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0328 | Val rms_score: 0.4123
2025-09-26 11:00:00,089 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0350 | Val rms_score: 0.4123
2025-09-26 11:00:02,717 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0336 | Val rms_score: 0.4121
2025-09-26 11:00:05,029 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0301 | Val rms_score: 0.4126
2025-09-26 11:00:07,567 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0346 | Val rms_score: 0.4111
2025-09-26 11:00:10,021 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0398 | Val rms_score: 0.4092
2025-09-26 11:00:12,402 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0258 | Val rms_score: 0.4080
2025-09-26 11:00:15,033 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0277 | Val rms_score: 0.4049
2025-09-26 11:00:16,681 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0258 | Val rms_score: 0.4055
2025-09-26 11:00:18,937 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0254 | Val rms_score: 0.4064
2025-09-26 11:00:21,305 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0246 | Val rms_score: 0.4044
2025-09-26 11:00:23,921 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0279 | Val rms_score: 0.4039
2025-09-26 11:00:26,645 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0250 | Val rms_score: 0.4047
2025-09-26 11:00:28,920 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0240 | Val rms_score: 0.4026
2025-09-26 11:00:31,157 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0229 | Val rms_score: 0.4002
2025-09-26 11:00:33,487 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0200 | Val rms_score: 0.4021
2025-09-26 11:00:35,213 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0235 | Val rms_score: 0.4026
2025-09-26 11:00:37,210 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0201 | Val rms_score: 0.4098
2025-09-26 11:00:39,754 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0266 | Val rms_score: 0.4073
2025-09-26 11:00:42,167 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0217 | Val rms_score: 0.4055
2025-09-26 11:00:44,659 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0226 | Val rms_score: 0.4067
2025-09-26 11:00:47,100 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0243 | Val rms_score: 0.4091
2025-09-26 11:00:49,769 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0221 | Val rms_score: 0.4089
2025-09-26 11:00:52,086 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0235 | Val rms_score: 0.4106
2025-09-26 11:00:53,863 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0179 | Val rms_score: 0.4102
2025-09-26 11:00:56,603 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0198 | Val rms_score: 0.4066
2025-09-26 11:00:59,308 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0196 | Val rms_score: 0.4064
2025-09-26 11:01:02,157 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0167 | Val rms_score: 0.4044
2025-09-26 11:01:04,597 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0152 | Val rms_score: 0.4049
2025-09-26 11:01:06,963 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0184 | Val rms_score: 0.4048
2025-09-26 11:01:09,427 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0209 | Val rms_score: 0.4031
2025-09-26 11:01:11,702 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0148 | Val rms_score: 0.4048
2025-09-26 11:01:14,728 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0177 | Val rms_score: 0.4040
2025-09-26 11:01:17,152 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0191 | Val rms_score: 0.4030
2025-09-26 11:01:19,410 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0219 | Val rms_score: 0.4048
2025-09-26 11:01:21,701 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0191 | Val rms_score: 0.4037
2025-09-26 11:01:24,052 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0173 | Val rms_score: 0.4054
2025-09-26 11:01:26,783 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0193 | Val rms_score: 0.4094
2025-09-26 11:01:29,095 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0187 | Val rms_score: 0.4084
2025-09-26 11:01:32,003 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0168 | Val rms_score: 0.4081
2025-09-26 11:01:35,049 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0199 | Val rms_score: 0.4055
2025-09-26 11:01:38,022 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0177 | Val rms_score: 0.4046
2025-09-26 11:01:41,099 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0197 | Val rms_score: 0.4057
2025-09-26 11:01:43,982 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0170 | Val rms_score: 0.4116
2025-09-26 11:01:47,011 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0186 | Val rms_score: 0.4181
2025-09-26 11:01:50,081 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0188 | Val rms_score: 0.4177
2025-09-26 11:01:53,176 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0186 | Val rms_score: 0.4186
2025-09-26 11:01:56,219 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0207 | Val rms_score: 0.4168
2025-09-26 11:01:59,220 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0163 | Val rms_score: 0.4137
2025-09-26 11:02:02,127 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0145 | Val rms_score: 0.4119
2025-09-26 11:02:05,084 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0144 | Val rms_score: 0.4098
2025-09-26 11:02:07,473 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0132 | Val rms_score: 0.4068
2025-09-26 11:02:10,863 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0126 | Val rms_score: 0.4049
2025-09-26 11:02:13,774 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0170 | Val rms_score: 0.4033
2025-09-26 11:02:16,668 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0170 | Val rms_score: 0.4003
2025-09-26 11:02:19,233 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0186 | Val rms_score: 0.3980
2025-09-26 11:02:21,593 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0195 | Val rms_score: 0.3993
2025-09-26 11:02:24,961 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0177 | Val rms_score: 0.3973
2025-09-26 11:02:27,916 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0160 | Val rms_score: 0.3955
2025-09-26 11:02:30,579 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0169 | Val rms_score: 0.3950
2025-09-26 11:02:33,194 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0139 | Val rms_score: 0.3943
2025-09-26 11:02:33,708 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Test rms_score: 0.8225
2025-09-26 11:02:34,176 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.7849, Std Dev: 0.0394
logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_adme_solubility_epochs100_batch_size32_20250926_110234.log ADDED
@@ -0,0 +1,357 @@
2025-09-26 11:02:34,178 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_solubility
2025-09-26 11:02:34,178 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - dataset: adme_solubility, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
2025-09-26 11:02:34,182 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_solubility at 2025-09-26_11-02-34
2025-09-26 11:02:47,317 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.1091 | Val rms_score: 0.4554
2025-09-26 11:02:47,317 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 55
2025-09-26 11:02:47,916 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4554
2025-09-26 11:03:02,820 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.6000 | Val rms_score: 0.4460
2025-09-26 11:03:03,012 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 110
2025-09-26 11:03:03,580 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4460
2025-09-26 11:03:18,727 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.6318 | Val rms_score: 0.4932
2025-09-26 11:03:33,622 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.5250 | Val rms_score: 0.4972
2025-09-26 11:03:48,740 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4250 | Val rms_score: 0.4993
2025-09-26 11:04:03,137 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2958 | Val rms_score: 0.5062
2025-09-26 11:04:18,741 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2057 | Val rms_score: 0.4519
2025-09-26 11:04:32,866 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1477 | Val rms_score: 0.4412
2025-09-26 11:04:33,020 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 440
2025-09-26 11:04:33,620 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.4412
2025-09-26 11:04:49,150 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1159 | Val rms_score: 0.4349
2025-09-26 11:04:49,366 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 495
2025-09-26 11:04:49,949 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.4349
2025-09-26 11:05:03,212 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1031 | Val rms_score: 0.4616
2025-09-26 11:05:18,451 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0781 | Val rms_score: 0.4471
2025-09-26 11:05:33,434 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0795 | Val rms_score: 0.4732
2025-09-26 11:05:48,096 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0734 | Val rms_score: 0.4725
2025-09-26 11:06:02,621 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0634 | Val rms_score: 0.4612
2025-09-26 11:06:18,009 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0631 | Val rms_score: 0.4470
2025-09-26 11:06:32,081 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0616 | Val rms_score: 0.4521
2025-09-26 11:06:47,249 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0589 | Val rms_score: 0.4469
2025-09-26 11:06:59,985 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0577 | Val rms_score: 0.4574
2025-09-26 11:07:16,923 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0497 | Val rms_score: 0.4485
2025-09-26 11:07:30,937 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0554 | Val rms_score: 0.4563
2025-09-26 11:07:45,494 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0523 | Val rms_score: 0.4581
2025-09-26 11:07:59,954 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0371 | Val rms_score: 0.4472
2025-09-26 11:08:15,406 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0392 | Val rms_score: 0.4489
2025-09-26 11:08:29,925 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0424 | Val rms_score: 0.4425
2025-09-26 11:08:44,122 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0423 | Val rms_score: 0.4478
2025-09-26 11:08:59,343 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0427 | Val rms_score: 0.4372
2025-09-26 11:09:13,846 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0392 | Val rms_score: 0.4354
2025-09-26 11:09:28,896 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0418 | Val rms_score: 0.4551
2025-09-26 11:09:42,443 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0381 | Val rms_score: 0.4481
2025-09-26 11:09:57,721 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0353 | Val rms_score: 0.4353
2025-09-26 11:10:11,933 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0303 | Val rms_score: 0.4391
2025-09-26 11:10:26,946 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0334 | Val rms_score: 0.4508
2025-09-26 11:10:41,503 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0365 | Val rms_score: 0.4424
2025-09-26 11:10:56,995 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0334 | Val rms_score: 0.4413
2025-09-26 11:11:11,424 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0336 | Val rms_score: 0.4303
2025-09-26 11:11:11,597 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1925
2025-09-26 11:11:12,321 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 35 with val rms_score: 0.4303
2025-09-26 11:11:27,604 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0312 | Val rms_score: 0.4350
2025-09-26 11:11:43,353 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0295 | Val rms_score: 0.4358
2025-09-26 11:11:56,605 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0331 | Val rms_score: 0.4271
2025-09-26 11:11:56,765 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 2090
2025-09-26 11:11:57,323 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 38 with val rms_score: 0.4271
2025-09-26 11:12:10,292 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0300 | Val rms_score: 0.4464
2025-09-26 11:12:25,643 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0277 | Val rms_score: 0.4356
2025-09-26 11:12:39,809 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0283 | Val rms_score: 0.4273
2025-09-26 11:12:55,512 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0293 | Val rms_score: 0.4318
2025-09-26 11:13:10,872 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0270 | Val rms_score: 0.4353
2025-09-26 11:13:26,028 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0275 | Val rms_score: 0.4324
2025-09-26 11:13:41,066 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0264 | Val rms_score: 0.4320
2025-09-26 11:13:55,160 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0257 | Val rms_score: 0.4333
2025-09-26 11:14:10,682 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0270 | Val rms_score: 0.4283
2025-09-26 11:14:25,096 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0262 | Val rms_score: 0.4278
2025-09-26 11:14:40,218 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0253 | Val rms_score: 0.4365
2025-09-26 11:14:53,134 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0256 | Val rms_score: 0.4373
2025-09-26 11:15:08,480 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0273 | Val rms_score: 0.4232
2025-09-26 11:15:08,984 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 2805
2025-09-26 11:15:09,570 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 51 with val rms_score: 0.4232
2025-09-26 11:15:21,948 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0234 | Val rms_score: 0.4322
2025-09-26 11:15:37,236 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0234 | Val rms_score: 0.4254
2025-09-26 11:15:51,095 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0249 | Val rms_score: 0.4296
2025-09-26 11:16:07,075 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0231 | Val rms_score: 0.4299
2025-09-26 11:16:21,462 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0243 | Val rms_score: 0.4290
2025-09-26 11:16:36,845 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0231 | Val rms_score: 0.4311
2025-09-26 11:16:51,591 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0229 | Val rms_score: 0.4268
2025-09-26 11:17:06,394 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0226 | Val rms_score: 0.4274
2025-09-26 11:17:21,143 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0200 | Val rms_score: 0.4312
2025-09-26 11:17:35,580 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0224 | Val rms_score: 0.4429
2025-09-26 11:17:51,294 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0213 | Val rms_score: 0.4283
2025-09-26 11:18:05,367 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0223 | Val rms_score: 0.4226
2025-09-26 11:18:05,529 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 3465
2025-09-26 11:18:06,194 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 63 with val rms_score: 0.4226
2025-09-26 11:18:21,581 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0222 | Val rms_score: 0.4255
2025-09-26 11:18:36,021 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0217 | Val rms_score: 0.4242
2025-09-26 11:18:51,308 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0245 | Val rms_score: 0.4352
2025-09-26 11:19:05,930 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0210 | Val rms_score: 0.4229
2025-09-26 11:19:20,974 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0204 | Val rms_score: 0.4156
2025-09-26 11:19:21,128 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 3740
2025-09-26 11:19:21,702 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 68 with val rms_score: 0.4156
2025-09-26 11:19:36,285 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0209 | Val rms_score: 0.4299
2025-09-26 11:19:51,683 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0228 | Val rms_score: 0.4230
2025-09-26 11:20:05,479 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0297 | Val rms_score: 0.4306
2025-09-26 11:20:21,150 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0207 | Val rms_score: 0.4260
2025-09-26 11:20:35,717 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0206 | Val rms_score: 0.4316
2025-09-26 11:20:50,994 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0190 | Val rms_score: 0.4253
2025-09-26 11:21:05,153 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0208 | Val rms_score: 0.4226
2025-09-26 11:21:20,322 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0189 | Val rms_score: 0.4174
2025-09-26 11:21:33,538 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0187 | Val rms_score: 0.4320
2025-09-26 11:21:48,603 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0193 | Val rms_score: 0.4305
2025-09-26 11:22:02,754 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0201 | Val rms_score: 0.4263
2025-09-26 11:22:18,263 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0202 | Val rms_score: 0.4236
2025-09-26 11:22:33,357 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0192 | Val rms_score: 0.4285
2025-09-26 11:22:48,660 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0187 | Val rms_score: 0.4239
2025-09-26 11:23:03,761 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0199 | Val rms_score: 0.4328
2025-09-26 11:23:17,882 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0183 | Val rms_score: 0.4273
2025-09-26 11:23:33,098 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0187 | Val rms_score: 0.4269
2025-09-26 11:23:46,265 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0178 | Val rms_score: 0.4289
2025-09-26 11:24:01,995 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0186 | Val rms_score: 0.4281
2025-09-26 11:24:16,667 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0182 | Val rms_score: 0.4369
2025-09-26 11:24:32,017 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0175 | Val rms_score: 0.4348
2025-09-26 11:24:46,357 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0169 | Val rms_score: 0.4276
2025-09-26 11:25:02,664 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0222 | Val rms_score: 0.4222
2025-09-26 11:25:17,608 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0180 | Val rms_score: 0.4269
2025-09-26 11:25:32,949 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0191 | Val rms_score: 0.4233
2025-09-26 11:25:45,796 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0181 | Val rms_score: 0.4251
2025-09-26 11:26:00,801 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0157 | Val rms_score: 0.4279
2025-09-26 11:26:16,352 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0171 | Val rms_score: 0.4238
2025-09-26 11:26:32,227 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0174 | Val rms_score: 0.4319
2025-09-26 11:26:46,145 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0187 | Val rms_score: 0.4269
2025-09-26 11:27:01,754 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0172 | Val rms_score: 0.4263
2025-09-26 11:27:16,452 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0165 | Val rms_score: 0.4214
2025-09-26 11:27:17,257 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.4984
2025-09-26 11:27:17,636 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_solubility at 2025-09-26_11-27-17
2025-09-26 11:27:32,645 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7909 | Val rms_score: 0.4880
2025-09-26 11:27:32,645 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 55
2025-09-26 11:27:33,262 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4880
2025-09-26 11:27:47,453 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.6562 | Val rms_score: 0.4351
2025-09-26 11:27:47,675 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 110
2025-09-26 11:27:48,230 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4351
2025-09-26 11:28:03,890 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.5091 | Val rms_score: 0.4351
2025-09-26 11:28:04,078 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 165
2025-09-26 11:28:04,645 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4351
2025-09-26 11:28:20,014 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4188 | Val rms_score: 0.5232
2025-09-26 11:28:35,744 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2898 | Val rms_score: 0.4219
2025-09-26 11:28:35,906 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 275
2025-09-26 11:28:36,487 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.4219
2025-09-26 11:28:51,832 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1812 | Val rms_score: 0.4418
2025-09-26 11:29:07,221 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1364 | Val rms_score: 0.4543
2025-09-26 11:29:22,375 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1031 | Val rms_score: 0.4366
2025-09-26 11:29:37,344 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0898 | Val rms_score: 0.4363
2025-09-26 11:29:52,173 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0734 | Val rms_score: 0.4293
2025-09-26 11:30:07,365 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0855 | Val rms_score: 0.4184
2025-09-26 11:30:07,851 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 605
2025-09-26 11:30:08,485 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 11 with val rms_score: 0.4184
2025-09-26 11:30:23,238 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0619 | Val rms_score: 0.4207
2025-09-26 11:30:37,839 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0518 | Val rms_score: 0.4280
2025-09-26 11:30:52,173 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0509 | Val rms_score: 0.4163
2025-09-26 11:30:52,386 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 770
2025-09-26 11:30:52,947 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 14 with val rms_score: 0.4163
2025-09-26 11:31:08,405 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0553 | Val rms_score: 0.4333
2025-09-26 11:31:22,606 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0523 | Val rms_score: 0.4220
2025-09-26 11:31:35,934 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0538 | Val rms_score: 0.4334
2025-09-26 11:31:50,315 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0506 | Val rms_score: 0.4310
2025-09-26 11:32:06,719 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0420 | Val rms_score: 0.4151
2025-09-26 11:32:06,872 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1045
2025-09-26 11:32:07,612 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 19 with val rms_score: 0.4151
2025-09-26 11:32:21,949 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0435 | Val rms_score: 0.4268
2025-09-26 11:32:36,950 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0418 | Val rms_score: 0.4232
2025-09-26 11:32:51,498 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0369 | Val rms_score: 0.4226
2025-09-26 11:33:06,959 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0375 | Val rms_score: 0.4165
2025-09-26 11:33:20,109 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0375 | Val rms_score: 0.4211
2025-09-26 11:33:35,391 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0362 | Val rms_score: 0.4173
2025-09-26 11:33:49,612 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0359 | Val rms_score: 0.4141
2025-09-26 11:33:50,119 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1430
2025-09-26 11:33:50,699 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 26 with val rms_score: 0.4141
2025-09-26 11:34:06,062 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0331 | Val rms_score: 0.4303
2025-09-26 11:34:20,699 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0338 | Val rms_score: 0.4186
2025-09-26 11:34:35,883 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0294 | Val rms_score: 0.4118
2025-09-26 11:34:36,041 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1595
2025-09-26 11:34:36,634 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 29 with val rms_score: 0.4118
2025-09-26 11:34:51,585 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0308 | Val rms_score: 0.4152
2025-09-26 11:35:06,632 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0260 | Val rms_score: 0.4131
2025-09-26 11:35:21,802 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0310 | Val rms_score: 0.4244
2025-09-26 11:35:37,117 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0316 | Val rms_score: 0.4126
2025-09-26 11:35:51,696 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0288 | Val rms_score: 0.4119
2025-09-26 11:36:05,844 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0308 | Val rms_score: 0.4155
2025-09-26 11:36:20,930 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0263 | Val rms_score: 0.4252
2025-09-26 11:36:35,089 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0279 | Val rms_score: 0.4173
2025-09-26 11:36:50,336 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0283 | Val rms_score: 0.4315
2025-09-26 11:37:04,760 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0278 | Val rms_score: 0.4142
2025-09-26 11:37:20,064 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0280 | Val rms_score: 0.4252
2025-09-26 11:37:34,013 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0270 | Val rms_score: 0.4238
2025-09-26 11:37:49,911 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0225 | Val rms_score: 0.4274
2025-09-26 11:38:03,937 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0239 | Val rms_score: 0.4169
2025-09-26 11:38:19,394 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0223 | Val rms_score: 0.4247
2025-09-26 11:38:32,194 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0244 | Val rms_score: 0.4280
2025-09-26 11:38:47,448 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0223 | Val rms_score: 0.4200
2025-09-26 11:39:01,973 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0223 | Val rms_score: 0.4134
2025-09-26 11:39:17,366 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0245 | Val rms_score: 0.4160
2025-09-26 11:39:32,606 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0224 | Val rms_score: 0.4211
2025-09-26 11:39:45,996 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0216 | Val rms_score: 0.4221
2025-09-26 11:40:01,391 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0299 | Val rms_score: 0.4211
2025-09-26 11:40:16,220 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0232 | Val rms_score: 0.4201
2025-09-26 11:40:31,944 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0245 | Val rms_score: 0.4182
2025-09-26 11:40:46,199 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0223 | Val rms_score: 0.4220
2025-09-26 11:41:02,388 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0223 | Val rms_score: 0.4270
2025-09-26 11:41:16,403 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0223 | Val rms_score: 0.4232
2025-09-26 11:41:31,681 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0211 | Val rms_score: 0.4191
2025-09-26 11:41:45,695 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0206 | Val rms_score: 0.4204
2025-09-26 11:42:01,115 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0203 | Val rms_score: 0.4199
2025-09-26 11:42:15,025 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0196 | Val rms_score: 0.4148
2025-09-26 11:42:30,129 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0193 | Val rms_score: 0.4221
2025-09-26 11:42:44,395 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0207 | Val rms_score: 0.4132
2025-09-26 11:42:59,689 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0199 | Val rms_score: 0.4204
2025-09-26 11:43:14,190 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0208 | Val rms_score: 0.4157
2025-09-26 11:43:29,770 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0183 | Val rms_score: 0.4259
2025-09-26 11:43:43,968 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0198 | Val rms_score: 0.4135
2025-09-26 11:43:59,980 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0175 | Val rms_score: 0.4210
2025-09-26 11:44:14,467 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0204 | Val rms_score: 0.4293
2025-09-26 11:44:29,961 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0192 | Val rms_score: 0.4165
2025-09-26 11:44:45,019 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0187 | Val rms_score: 0.4181
2025-09-26 11:45:00,850 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0209 | Val rms_score: 0.4180
2025-09-26 11:45:16,017 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0197 | Val rms_score: 0.4199
2025-09-26 11:45:32,424 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0201 | Val rms_score: 0.4214
2025-09-26 11:45:47,178 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0189 | Val rms_score: 0.4151
2025-09-26 11:46:02,666 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0162 | Val rms_score: 0.4171
2025-09-26 11:46:16,522 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0170 | Val rms_score: 0.4277
2025-09-26 11:46:31,871 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0186 | Val rms_score: 0.4280
2025-09-26 11:46:46,489 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0164 | Val rms_score: 0.4173
2025-09-26 11:47:02,154 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0191 | Val rms_score: 0.4234
2025-09-26 11:47:16,826 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0164 | Val rms_score: 0.4171
2025-09-26 11:47:32,221 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0186 | Val rms_score: 0.4165
2025-09-26 11:47:47,006 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0154 | Val rms_score: 0.4150
2025-09-26 11:48:02,434 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0176 | Val rms_score: 0.4210
2025-09-26 11:48:16,750 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0155 | Val rms_score: 0.4187
2025-09-26 11:48:32,357 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0165 | Val rms_score: 0.4205
2025-09-26 11:48:45,315 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0182 | Val rms_score: 0.4172
2025-09-26 11:49:01,221 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0170 | Val rms_score: 0.4181
2025-09-26 11:49:15,856 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0151 | Val rms_score: 0.4193
2025-09-26 11:49:31,417 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0161 | Val rms_score: 0.4198
2025-09-26 11:49:46,114 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0161 | Val rms_score: 0.4245
2025-09-26 11:50:02,607 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0127 | Val rms_score: 0.4176
|
| 233 |
+
2025-09-26 11:50:17,693 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0162 | Val rms_score: 0.4152
|
| 234 |
+
2025-09-26 11:50:33,457 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0165 | Val rms_score: 0.4155
|
| 235 |
+
2025-09-26 11:50:47,998 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0158 | Val rms_score: 0.4204
|
| 236 |
+
2025-09-26 11:51:03,466 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0159 | Val rms_score: 0.4217
|
| 237 |
+
2025-09-26 11:51:18,114 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0167 | Val rms_score: 0.4192
|
| 238 |
+
2025-09-26 11:51:34,001 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0160 | Val rms_score: 0.4167
|
| 239 |
+
2025-09-26 11:51:48,305 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0141 | Val rms_score: 0.4242
|
| 240 |
+
2025-09-26 11:52:03,583 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0148 | Val rms_score: 0.4235
|
| 241 |
+
2025-09-26 11:52:18,061 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0155 | Val rms_score: 0.4245
|
| 242 |
+
2025-09-26 11:52:18,889 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.5304
|
| 243 |
+
2025-09-26 11:52:19,291 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_solubility at 2025-09-26_11-52-19
|
| 244 |
+
2025-09-26 11:52:34,077 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.0591 | Val rms_score: 0.4891
|
| 245 |
+
2025-09-26 11:52:34,077 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 55
|
| 246 |
+
2025-09-26 11:52:34,694 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4891
|
| 247 |
+
2025-09-26 11:52:49,212 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.8125 | Val rms_score: 0.4967
|
| 248 |
+
2025-09-26 11:53:04,107 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.6409 | Val rms_score: 0.4433
|
| 249 |
+
2025-09-26 11:53:04,291 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 165
|
| 250 |
+
2025-09-26 11:53:04,870 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4433
|
| 251 |
+
2025-09-26 11:53:19,635 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.5031 | Val rms_score: 0.4758
|
| 252 |
+
2025-09-26 11:53:35,063 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4091 | Val rms_score: 0.4632
|
| 253 |
+
2025-09-26 11:53:49,322 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2687 | Val rms_score: 0.4821
|
| 254 |
+
2025-09-26 11:54:04,676 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2045 | Val rms_score: 0.4357
|
| 255 |
+
2025-09-26 11:54:04,830 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 385
|
| 256 |
+
2025-09-26 11:54:05,380 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.4357
|
| 257 |
+
2025-09-26 11:54:19,879 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1617 | Val rms_score: 0.4827
|
| 258 |
+
2025-09-26 11:54:35,042 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1284 | Val rms_score: 0.4621
|
| 259 |
+
2025-09-26 11:54:49,618 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1087 | Val rms_score: 0.5110
|
| 260 |
+
2025-09-26 11:55:05,117 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0953 | Val rms_score: 0.4542
|
| 261 |
+
2025-09-26 11:55:19,790 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0898 | Val rms_score: 0.4525
|
| 262 |
+
2025-09-26 11:55:34,954 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0682 | Val rms_score: 0.4563
|
| 263 |
+
2025-09-26 11:55:49,131 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0648 | Val rms_score: 0.4645
|
| 264 |
+
2025-09-26 11:56:03,805 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0666 | Val rms_score: 0.4628
|
| 265 |
+
2025-09-26 11:56:18,555 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0605 | Val rms_score: 0.4617
|
| 266 |
+
2025-09-26 11:56:34,014 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0603 | Val rms_score: 0.4493
|
| 267 |
+
2025-09-26 11:56:48,983 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0500 | Val rms_score: 0.4274
|
| 268 |
+
2025-09-26 11:56:49,142 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 990
|
| 269 |
+
2025-09-26 11:56:49,908 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 18 with val rms_score: 0.4274
|
| 270 |
+
2025-09-26 11:57:06,309 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0479 | Val rms_score: 0.4427
|
| 271 |
+
2025-09-26 11:57:20,567 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0472 | Val rms_score: 0.4336
|
| 272 |
+
2025-09-26 11:57:36,417 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0432 | Val rms_score: 0.4373
|
| 273 |
+
2025-09-26 11:57:51,464 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0463 | Val rms_score: 0.4295
|
| 274 |
+
2025-09-26 11:58:06,868 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0446 | Val rms_score: 0.4389
|
| 275 |
+
2025-09-26 11:58:20,867 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0420 | Val rms_score: 0.4515
|
| 276 |
+
2025-09-26 11:58:36,793 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0406 | Val rms_score: 0.4346
|
| 277 |
+
2025-09-26 11:58:51,466 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0370 | Val rms_score: 0.4318
|
| 278 |
+
2025-09-26 11:59:07,226 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0352 | Val rms_score: 0.4201
|
| 279 |
+
2025-09-26 11:59:07,380 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1485
|
| 280 |
+
2025-09-26 11:59:08,097 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 27 with val rms_score: 0.4201
|
| 281 |
+
2025-09-26 11:59:22,823 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0389 | Val rms_score: 0.4235
|
| 282 |
+
2025-09-26 11:59:38,316 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0359 | Val rms_score: 0.4408
|
| 283 |
+
2025-09-26 11:59:52,606 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0362 | Val rms_score: 0.4082
|
| 284 |
+
2025-09-26 11:59:52,809 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1650
|
| 285 |
+
2025-09-26 11:59:53,414 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 30 with val rms_score: 0.4082
|
| 286 |
+
2025-09-26 12:00:09,251 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0295 | Val rms_score: 0.4209
|
| 287 |
+
2025-09-26 12:00:23,940 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0327 | Val rms_score: 0.4298
|
| 288 |
+
2025-09-26 12:00:39,566 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0314 | Val rms_score: 0.4312
|
| 289 |
+
2025-09-26 12:00:53,397 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0318 | Val rms_score: 0.4290
|
| 290 |
+
2025-09-26 12:01:08,841 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0327 | Val rms_score: 0.4310
|
| 291 |
+
2025-09-26 12:01:23,230 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0267 | Val rms_score: 0.4171
|
| 292 |
+
2025-09-26 12:01:40,063 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0308 | Val rms_score: 0.4325
|
| 293 |
+
2025-09-26 12:01:55,075 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0307 | Val rms_score: 0.4247
|
| 294 |
+
2025-09-26 12:02:10,723 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0307 | Val rms_score: 0.4263
|
| 295 |
+
2025-09-26 12:02:25,679 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0304 | Val rms_score: 0.4275
|
| 296 |
+
2025-09-26 12:02:41,178 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0281 | Val rms_score: 0.4316
|
| 297 |
+
2025-09-26 12:02:55,491 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0273 | Val rms_score: 0.4287
|
| 298 |
+
2025-09-26 12:03:10,269 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0284 | Val rms_score: 0.4148
|
| 299 |
+
2025-09-26 12:03:24,406 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0252 | Val rms_score: 0.4198
|
| 300 |
+
2025-09-26 12:03:39,927 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0261 | Val rms_score: 0.4265
|
| 301 |
+
2025-09-26 12:03:54,161 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0290 | Val rms_score: 0.4135
|
| 302 |
+
2025-09-26 12:04:09,600 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0278 | Val rms_score: 0.4225
|
| 303 |
+
2025-09-26 12:04:24,104 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0256 | Val rms_score: 0.4223
|
| 304 |
+
2025-09-26 12:04:39,683 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0277 | Val rms_score: 0.4215
|
| 305 |
+
2025-09-26 12:04:52,520 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0241 | Val rms_score: 0.4319
|
| 306 |
+
2025-09-26 12:05:07,831 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0211 | Val rms_score: 0.4176
|
| 307 |
+
2025-09-26 12:05:22,583 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0254 | Val rms_score: 0.4376
|
| 308 |
+
2025-09-26 12:05:38,022 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0224 | Val rms_score: 0.4240
|
| 309 |
+
2025-09-26 12:05:52,789 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0246 | Val rms_score: 0.4244
|
| 310 |
+
2025-09-26 12:06:08,389 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0275 | Val rms_score: 0.4195
|
| 311 |
+
2025-09-26 12:06:23,209 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0240 | Val rms_score: 0.4204
|
| 312 |
+
2025-09-26 12:06:37,849 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0246 | Val rms_score: 0.4256
|
| 313 |
+
2025-09-26 12:06:53,433 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0243 | Val rms_score: 0.4138
|
| 314 |
+
2025-09-26 12:07:06,731 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0236 | Val rms_score: 0.4180
|
| 315 |
+
2025-09-26 12:07:22,084 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0212 | Val rms_score: 0.4248
|
| 316 |
+
2025-09-26 12:07:36,008 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0209 | Val rms_score: 0.4169
|
| 317 |
+
2025-09-26 12:07:51,361 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0177 | Val rms_score: 0.4163
|
| 318 |
+
2025-09-26 12:08:04,350 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0227 | Val rms_score: 0.4225
|
| 319 |
+
2025-09-26 12:08:19,893 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0229 | Val rms_score: 0.4324
|
| 320 |
+
2025-09-26 12:08:34,733 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0214 | Val rms_score: 0.4285
|
| 321 |
+
2025-09-26 12:08:50,061 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0201 | Val rms_score: 0.4287
|
| 322 |
+
2025-09-26 12:09:04,507 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0202 | Val rms_score: 0.4284
|
| 323 |
+
2025-09-26 12:09:19,697 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0214 | Val rms_score: 0.4259
|
| 324 |
+
2025-09-26 12:09:34,162 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0210 | Val rms_score: 0.4239
|
| 325 |
+
2025-09-26 12:09:49,677 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0211 | Val rms_score: 0.4201
|
| 326 |
+
2025-09-26 12:10:04,820 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0209 | Val rms_score: 0.4237
|
| 327 |
+
2025-09-26 12:10:20,031 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0213 | Val rms_score: 0.4170
|
| 328 |
+
2025-09-26 12:10:35,582 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0210 | Val rms_score: 0.4185
|
| 329 |
+
2025-09-26 12:10:50,768 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0206 | Val rms_score: 0.4224
|
| 330 |
+
2025-09-26 12:11:05,838 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0234 | Val rms_score: 0.4267
|
| 331 |
+
2025-09-26 12:11:21,337 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0223 | Val rms_score: 0.4221
|
| 332 |
+
2025-09-26 12:11:36,271 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0173 | Val rms_score: 0.4126
|
| 333 |
+
2025-09-26 12:11:49,714 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0196 | Val rms_score: 0.4176
|
| 334 |
+
2025-09-26 12:12:05,052 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0188 | Val rms_score: 0.4214
|
| 335 |
+
2025-09-26 12:12:18,506 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0195 | Val rms_score: 0.4252
|
| 336 |
+
2025-09-26 12:12:33,911 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0196 | Val rms_score: 0.4228
|
| 337 |
+
2025-09-26 12:12:47,415 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0167 | Val rms_score: 0.4251
|
| 338 |
+
2025-09-26 12:13:02,207 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0205 | Val rms_score: 0.4224
|
| 339 |
+
2025-09-26 12:13:16,773 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0160 | Val rms_score: 0.4212
|
| 340 |
+
2025-09-26 12:13:32,249 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0210 | Val rms_score: 0.4315
|
| 341 |
+
2025-09-26 12:13:46,752 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0184 | Val rms_score: 0.4270
|
| 342 |
+
2025-09-26 12:14:01,487 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0179 | Val rms_score: 0.4234
|
| 343 |
+
2025-09-26 12:14:16,235 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0209 | Val rms_score: 0.4239
|
| 344 |
+
2025-09-26 12:14:31,560 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0190 | Val rms_score: 0.4222
|
| 345 |
+
2025-09-26 12:14:46,290 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0184 | Val rms_score: 0.4183
|
| 346 |
+
2025-09-26 12:15:02,580 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0137 | Val rms_score: 0.4263
|
| 347 |
+
2025-09-26 12:15:17,226 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0186 | Val rms_score: 0.4221
|
| 348 |
+
2025-09-26 12:15:32,537 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0180 | Val rms_score: 0.4179
|
| 349 |
+
2025-09-26 12:15:47,831 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0181 | Val rms_score: 0.4229
|
| 350 |
+
2025-09-26 12:16:02,903 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0169 | Val rms_score: 0.4223
|
| 351 |
+
2025-09-26 12:16:18,046 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0182 | Val rms_score: 0.4194
|
| 352 |
+
2025-09-26 12:16:32,845 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0165 | Val rms_score: 0.4127
|
| 353 |
+
2025-09-26 12:16:47,829 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0178 | Val rms_score: 0.4195
|
| 354 |
+
2025-09-26 12:17:02,407 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0175 | Val rms_score: 0.4197
|
| 355 |
+
2025-09-26 12:17:18,056 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0185 | Val rms_score: 0.4246
|
| 356 |
+
2025-09-26 12:17:19,102 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.5286
|
| 357 |
+
2025-09-26 12:17:19,651 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.5191, Std Dev: 0.0147
|
logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_astrazeneca_cl_epochs100_batch_size32_20250926_121719.log
ADDED
@@ -0,0 +1,327 @@
| 1 |
+
2025-09-26 12:17:19,653 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Running benchmark for dataset: astrazeneca_cl
|
| 2 |
+
2025-09-26 12:17:19,653 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - dataset: astrazeneca_cl, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
|
| 3 |
+
2025-09-26 12:17:19,689 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset astrazeneca_cl at 2025-09-26_12-17-19
|
| 4 |
+
2025-09-26 12:17:30,800 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.4583 | Val rms_score: 0.5043
|
| 5 |
+
2025-09-26 12:17:30,800 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 36
|
| 6 |
+
2025-09-26 12:17:31,427 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5043
|
| 7 |
+
2025-09-26 12:17:44,574 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.8264 | Val rms_score: 0.4859
|
| 8 |
+
2025-09-26 12:17:44,759 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 72
|
| 9 |
+
2025-09-26 12:17:45,315 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4859
|
| 10 |
+
2025-09-26 12:17:57,466 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.6562 | Val rms_score: 0.4734
|
| 11 |
+
2025-09-26 12:17:57,659 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 108
|
| 12 |
+
2025-09-26 12:17:58,466 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4734
|
| 13 |
+
2025-09-26 12:18:11,506 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.6250 | Val rms_score: 0.4744
|
| 14 |
+
2025-09-26 12:18:24,832 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.5104 | Val rms_score: 0.4757
|
| 15 |
+
2025-09-26 12:18:34,827 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.4082 | Val rms_score: 0.4835
|
| 16 |
+
2025-09-26 12:18:48,274 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3212 | Val rms_score: 0.4814
|
| 17 |
+
2025-09-26 12:19:00,699 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2743 | Val rms_score: 0.5080
|
| 18 |
+
2025-09-26 12:19:13,774 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2214 | Val rms_score: 0.4917
|
| 19 |
+
2025-09-26 12:19:26,767 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1997 | Val rms_score: 0.5002
|
| 20 |
+
2025-09-26 12:19:37,668 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1684 | Val rms_score: 0.4890
|
| 21 |
+
2025-09-26 12:19:51,074 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1582 | Val rms_score: 0.5040
|
| 22 |
+
2025-09-26 12:20:03,405 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1441 | Val rms_score: 0.5099
|
| 23 |
+
2025-09-26 12:20:15,979 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1426 | Val rms_score: 0.5025
|
| 24 |
+
2025-09-26 12:20:28,777 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1198 | Val rms_score: 0.5215
|
| 25 |
+
2025-09-26 12:20:40,116 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1224 | Val rms_score: 0.5273
|
| 26 |
+
2025-09-26 12:20:53,661 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1191 | Val rms_score: 0.5211
|
| 27 |
+
2025-09-26 12:21:06,507 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1072 | Val rms_score: 0.5115
|
| 28 |
+
2025-09-26 12:21:19,061 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1003 | Val rms_score: 0.5098
|
| 29 |
+
2025-09-26 12:21:32,307 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1109 | Val rms_score: 0.5151
|
| 30 |
+
2025-09-26 12:21:44,263 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0968 | Val rms_score: 0.5334
|
| 31 |
+
2025-09-26 12:21:57,733 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0924 | Val rms_score: 0.5288
|
| 32 |
+
2025-09-26 12:22:10,137 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0904 | Val rms_score: 0.5141
|
| 33 |
+
2025-09-26 12:22:22,592 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0885 | Val rms_score: 0.5102
|
| 34 |
+
2025-09-26 12:22:35,433 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0812 | Val rms_score: 0.5059
|
| 35 |
+
2025-09-26 12:22:47,250 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0716 | Val rms_score: 0.5156
|
| 36 |
+
2025-09-26 12:23:00,404 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0768 | Val rms_score: 0.5112
|
| 37 |
+
2025-09-26 12:23:13,875 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0688 | Val rms_score: 0.5108
|
| 38 |
+
2025-09-26 12:23:25,348 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0668 | Val rms_score: 0.5153
|
| 39 |
+
2025-09-26 12:23:37,941 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0729 | Val rms_score: 0.5118
|
| 40 |
+
2025-09-26 12:23:49,873 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0635 | Val rms_score: 0.5171
|
| 41 |
+
2025-09-26 12:24:03,003 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0664 | Val rms_score: 0.5012
|
| 42 |
+
2025-09-26 12:24:16,019 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0642 | Val rms_score: 0.5075
|
| 43 |
+
2025-09-26 12:24:27,013 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0602 | Val rms_score: 0.5063
|
| 44 |
+
2025-09-26 12:24:40,156 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0647 | Val rms_score: 0.5066
|
| 45 |
+
2025-09-26 12:24:52,745 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0595 | Val rms_score: 0.5063
|
| 46 |
+
2025-09-26 12:25:05,761 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0659 | Val rms_score: 0.5148
|
| 47 |
+
2025-09-26 12:25:18,531 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0595 | Val rms_score: 0.5062
|
| 48 |
+
2025-09-26 12:25:30,664 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0603 | Val rms_score: 0.4995
|
| 49 |
+
2025-09-26 12:25:43,714 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0577 | Val rms_score: 0.5079
|
| 50 |
+
2025-09-26 12:25:56,298 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0536 | Val rms_score: 0.5025
|
| 51 |
+
2025-09-26 12:26:09,574 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0485 | Val rms_score: 0.5062
|
| 52 |
+
2025-09-26 12:26:22,437 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0480 | Val rms_score: 0.5065
|
| 53 |
+
2025-09-26 12:26:34,434 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0503 | Val rms_score: 0.4977
|
| 54 |
+
2025-09-26 12:26:47,317 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0520 | Val rms_score: 0.5051
|
| 55 |
+
2025-09-26 12:26:59,782 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0523 | Val rms_score: 0.5042
|
| 56 |
+
2025-09-26 12:27:13,334 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0534 | Val rms_score: 0.4994
|
| 57 |
+
2025-09-26 12:27:26,526 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0474 | Val rms_score: 0.5004
|
| 58 |
+
2025-09-26 12:27:38,540 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0475 | Val rms_score: 0.4931
|
| 59 |
+
2025-09-26 12:27:50,945 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0462 | Val rms_score: 0.5022
|
| 60 |
+
2025-09-26 12:28:03,111 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0525 | Val rms_score: 0.5003
|
| 61 |
+
2025-09-26 12:28:15,670 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0441 | Val rms_score: 0.4972
|
| 62 |
+
2025-09-26 12:28:28,611 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0476 | Val rms_score: 0.4995
|
| 63 |
+
2025-09-26 12:28:40,625 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0410 | Val rms_score: 0.5063
|
| 64 |
+
2025-09-26 12:28:53,495 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0412 | Val rms_score: 0.5031
2025-09-26 12:29:06,897 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0400 | Val rms_score: 0.5071
2025-09-26 12:29:20,202 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0404 | Val rms_score: 0.5015
2025-09-26 12:29:33,119 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0404 | Val rms_score: 0.4989
2025-09-26 12:29:45,159 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0369 | Val rms_score: 0.5103
2025-09-26 12:29:58,126 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0438 | Val rms_score: 0.5048
2025-09-26 12:30:10,171 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0393 | Val rms_score: 0.4988
2025-09-26 12:30:23,863 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0400 | Val rms_score: 0.5003
2025-09-26 12:30:36,941 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0386 | Val rms_score: 0.5047
2025-09-26 12:30:48,490 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0447 | Val rms_score: 0.5019
2025-09-26 12:31:01,547 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0380 | Val rms_score: 0.4971
2025-09-26 12:31:13,958 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0386 | Val rms_score: 0.4947
2025-09-26 12:31:27,439 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0327 | Val rms_score: 0.5034
2025-09-26 12:31:40,364 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0369 | Val rms_score: 0.5046
2025-09-26 12:31:51,773 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0365 | Val rms_score: 0.5008
2025-09-26 12:32:04,748 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0371 | Val rms_score: 0.5051
2025-09-26 12:32:16,980 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0404 | Val rms_score: 0.5008
2025-09-26 12:32:30,522 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0349 | Val rms_score: 0.5048
2025-09-26 12:32:43,671 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0385 | Val rms_score: 0.5017
2025-09-26 12:32:54,507 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0336 | Val rms_score: 0.4961
2025-09-26 12:33:07,590 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0323 | Val rms_score: 0.4997
2025-09-26 12:33:18,579 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0317 | Val rms_score: 0.5033
2025-09-26 12:33:30,533 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0339 | Val rms_score: 0.5052
2025-09-26 12:33:42,923 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0354 | Val rms_score: 0.5054
2025-09-26 12:33:55,145 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0308 | Val rms_score: 0.4994
2025-09-26 12:34:08,194 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0341 | Val rms_score: 0.5030
2025-09-26 12:34:20,943 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0349 | Val rms_score: 0.5059
2025-09-26 12:34:33,460 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0312 | Val rms_score: 0.5003
2025-09-26 12:34:46,439 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0299 | Val rms_score: 0.5019
2025-09-26 12:34:59,602 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0339 | Val rms_score: 0.5013
2025-09-26 12:35:12,593 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0308 | Val rms_score: 0.4992
2025-09-26 12:35:25,169 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0310 | Val rms_score: 0.5068
2025-09-26 12:35:37,706 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0337 | Val rms_score: 0.5007
2025-09-26 12:35:50,761 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0304 | Val rms_score: 0.5007
2025-09-26 12:36:02,794 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0437 | Val rms_score: 0.4976
2025-09-26 12:36:15,836 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0312 | Val rms_score: 0.5072
2025-09-26 12:36:28,599 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0339 | Val rms_score: 0.5063
2025-09-26 12:36:40,816 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0301 | Val rms_score: 0.4976
2025-09-26 12:36:54,078 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0297 | Val rms_score: 0.5060
2025-09-26 12:37:06,092 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0323 | Val rms_score: 0.5011
2025-09-26 12:37:18,544 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0305 | Val rms_score: 0.5019
2025-09-26 12:37:31,167 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0302 | Val rms_score: 0.5077
2025-09-26 12:37:43,622 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0291 | Val rms_score: 0.5003
2025-09-26 12:37:56,646 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0280 | Val rms_score: 0.5084
2025-09-26 12:38:08,483 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0286 | Val rms_score: 0.5071
2025-09-26 12:38:19,316 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0275 | Val rms_score: 0.5111
2025-09-26 12:38:20,248 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Test rms_score: 0.4848
2025-09-26 12:38:20,616 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset astrazeneca_cl at 2025-09-26_12-38-20
2025-09-26 12:38:32,683 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.2986 | Val rms_score: 0.5465
2025-09-26 12:38:32,683 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 36
2025-09-26 12:38:33,269 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5465
2025-09-26 12:38:44,868 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.8229 | Val rms_score: 0.5037
2025-09-26 12:38:45,016 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 72
2025-09-26 12:38:45,573 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5037
2025-09-26 12:38:58,595 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.6953 | Val rms_score: 0.4960
2025-09-26 12:38:58,783 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 108
2025-09-26 12:38:59,364 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4960
2025-09-26 12:39:11,413 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.6007 | Val rms_score: 0.4953
2025-09-26 12:39:11,606 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 144
2025-09-26 12:39:12,190 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.4953
2025-09-26 12:39:25,269 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.5174 | Val rms_score: 0.5050
2025-09-26 12:39:37,862 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.4277 | Val rms_score: 0.5073
2025-09-26 12:39:49,939 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3733 | Val rms_score: 0.5220
2025-09-26 12:40:03,005 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2934 | Val rms_score: 0.5160
2025-09-26 12:40:15,280 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2227 | Val rms_score: 0.5163
2025-09-26 12:40:28,189 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2049 | Val rms_score: 0.5246
2025-09-26 12:40:40,845 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1814 | Val rms_score: 0.5232
2025-09-26 12:40:50,127 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1484 | Val rms_score: 0.5285
2025-09-26 12:41:03,007 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1432 | Val rms_score: 0.5270
2025-09-26 12:41:15,770 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1465 | Val rms_score: 0.5224
2025-09-26 12:41:28,273 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1146 | Val rms_score: 0.5223
2025-09-26 12:41:40,762 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1137 | Val rms_score: 0.5258
2025-09-26 12:41:51,249 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1009 | Val rms_score: 0.5274
2025-09-26 12:42:04,062 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1050 | Val rms_score: 0.5328
2025-09-26 12:42:16,921 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1033 | Val rms_score: 0.5290
2025-09-26 12:42:28,873 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1062 | Val rms_score: 0.5410
2025-09-26 12:42:41,693 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0890 | Val rms_score: 0.5330
2025-09-26 12:42:54,019 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0907 | Val rms_score: 0.5252
2025-09-26 12:43:06,728 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0882 | Val rms_score: 0.5246
2025-09-26 12:43:18,955 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0833 | Val rms_score: 0.5272
2025-09-26 12:43:31,006 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0786 | Val rms_score: 0.5288
2025-09-26 12:43:44,315 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0799 | Val rms_score: 0.5203
2025-09-26 12:43:56,876 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0720 | Val rms_score: 0.5280
2025-09-26 12:44:11,127 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.1035 | Val rms_score: 0.5255
2025-09-26 12:44:24,118 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0725 | Val rms_score: 0.5185
2025-09-26 12:44:36,383 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0621 | Val rms_score: 0.5285
2025-09-26 12:44:49,182 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0620 | Val rms_score: 0.5229
2025-09-26 12:45:00,302 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0616 | Val rms_score: 0.5147
2025-09-26 12:45:13,357 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0612 | Val rms_score: 0.5249
2025-09-26 12:45:26,340 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0596 | Val rms_score: 0.5257
2025-09-26 12:45:38,429 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0595 | Val rms_score: 0.5164
2025-09-26 12:45:51,287 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0582 | Val rms_score: 0.5212
2025-09-26 12:46:02,806 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0574 | Val rms_score: 0.5232
2025-09-26 12:46:15,175 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0503 | Val rms_score: 0.5185
2025-09-26 12:46:27,982 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0605 | Val rms_score: 0.5204
2025-09-26 12:46:40,197 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0564 | Val rms_score: 0.5216
2025-09-26 12:46:53,154 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0545 | Val rms_score: 0.5136
2025-09-26 12:47:06,299 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0485 | Val rms_score: 0.5130
2025-09-26 12:47:18,884 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0490 | Val rms_score: 0.5223
2025-09-26 12:47:31,633 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0477 | Val rms_score: 0.5252
2025-09-26 12:47:43,278 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0504 | Val rms_score: 0.5211
2025-09-26 12:47:56,226 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0464 | Val rms_score: 0.5202
2025-09-26 12:48:09,684 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0475 | Val rms_score: 0.5196
2025-09-26 12:48:21,756 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0505 | Val rms_score: 0.5205
2025-09-26 12:48:34,764 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0451 | Val rms_score: 0.5252
2025-09-26 12:48:45,533 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0451 | Val rms_score: 0.5300
2025-09-26 12:48:57,868 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0473 | Val rms_score: 0.5231
2025-09-26 12:49:11,311 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0447 | Val rms_score: 0.5210
2025-09-26 12:49:23,349 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0488 | Val rms_score: 0.5224
2025-09-26 12:49:36,085 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0395 | Val rms_score: 0.5178
2025-09-26 12:49:47,934 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0386 | Val rms_score: 0.5259
2025-09-26 12:50:01,836 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0378 | Val rms_score: 0.5244
2025-09-26 12:50:15,281 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0369 | Val rms_score: 0.5246
2025-09-26 12:50:27,365 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0388 | Val rms_score: 0.5228
2025-09-26 12:50:40,161 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0400 | Val rms_score: 0.5230
2025-09-26 12:50:51,664 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0354 | Val rms_score: 0.5248
2025-09-26 12:51:04,731 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0388 | Val rms_score: 0.5207
2025-09-26 12:51:17,274 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0366 | Val rms_score: 0.5183
2025-09-26 12:51:27,631 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0378 | Val rms_score: 0.5200
2025-09-26 12:51:39,936 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0315 | Val rms_score: 0.5188
2025-09-26 12:51:52,720 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0354 | Val rms_score: 0.5231
2025-09-26 12:52:05,334 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0367 | Val rms_score: 0.5248
2025-09-26 12:52:18,096 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0329 | Val rms_score: 0.5233
2025-09-26 12:52:28,956 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0358 | Val rms_score: 0.5237
2025-09-26 12:52:40,488 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0343 | Val rms_score: 0.5214
2025-09-26 12:52:53,229 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0439 | Val rms_score: 0.5213
2025-09-26 12:53:05,880 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0419 | Val rms_score: 0.5207
2025-09-26 12:53:18,609 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0345 | Val rms_score: 0.5256
2025-09-26 12:53:30,587 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0342 | Val rms_score: 0.5210
2025-09-26 12:53:41,499 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0358 | Val rms_score: 0.5205
2025-09-26 12:53:54,369 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0352 | Val rms_score: 0.5175
2025-09-26 12:54:06,590 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0347 | Val rms_score: 0.5241
2025-09-26 12:54:19,190 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0352 | Val rms_score: 0.5221
2025-09-26 12:54:31,161 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0317 | Val rms_score: 0.5208
2025-09-26 12:54:42,467 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0282 | Val rms_score: 0.5179
2025-09-26 12:54:55,595 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0270 | Val rms_score: 0.5169
2025-09-26 12:55:08,075 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0332 | Val rms_score: 0.5209
2025-09-26 12:55:20,719 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0291 | Val rms_score: 0.5187
2025-09-26 12:55:33,209 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0295 | Val rms_score: 0.5229
2025-09-26 12:55:45,775 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0273 | Val rms_score: 0.5214
2025-09-26 12:55:57,858 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0349 | Val rms_score: 0.5153
2025-09-26 12:56:10,662 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0323 | Val rms_score: 0.5140
2025-09-26 12:56:23,578 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0344 | Val rms_score: 0.5169
2025-09-26 12:56:35,013 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0295 | Val rms_score: 0.5210
2025-09-26 12:56:45,885 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0408 | Val rms_score: 0.5184
2025-09-26 12:56:59,208 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0302 | Val rms_score: 0.5207
2025-09-26 12:57:12,079 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0280 | Val rms_score: 0.5161
2025-09-26 12:57:24,803 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0285 | Val rms_score: 0.5204
2025-09-26 12:57:35,256 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0269 | Val rms_score: 0.5192
2025-09-26 12:57:47,134 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0286 | Val rms_score: 0.5161
2025-09-26 12:57:59,871 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0262 | Val rms_score: 0.5249
2025-09-26 12:58:12,751 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0262 | Val rms_score: 0.5183
2025-09-26 12:58:24,967 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0282 | Val rms_score: 0.5159
2025-09-26 12:58:37,339 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0264 | Val rms_score: 0.5200
2025-09-26 12:58:47,970 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0271 | Val rms_score: 0.5178
2025-09-26 12:59:01,190 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0255 | Val rms_score: 0.5165
2025-09-26 12:59:01,874 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Test rms_score: 0.4711
2025-09-26 12:59:02,261 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset astrazeneca_cl at 2025-09-26_12-59-02
2025-09-26 12:59:14,551 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.3333 | Val rms_score: 0.5254
2025-09-26 12:59:14,551 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 36
2025-09-26 12:59:15,166 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5254
2025-09-26 12:59:27,653 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.8125 | Val rms_score: 0.5001
2025-09-26 12:59:27,832 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 72
2025-09-26 12:59:28,413 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5001
2025-09-26 12:59:40,154 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.6953 | Val rms_score: 0.5003
2025-09-26 12:59:51,196 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.6111 | Val rms_score: 0.5044
2025-09-26 13:00:03,452 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.5208 | Val rms_score: 0.5062
2025-09-26 13:00:16,239 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.4688 | Val rms_score: 0.5070
2025-09-26 13:00:29,110 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3646 | Val rms_score: 0.5157
2025-09-26 13:00:41,109 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2986 | Val rms_score: 0.5157
2025-09-26 13:00:51,897 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2591 | Val rms_score: 0.5400
2025-09-26 13:01:04,526 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2196 | Val rms_score: 0.5301
2025-09-26 13:01:17,400 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.2049 | Val rms_score: 0.5368
2025-09-26 13:01:30,268 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1826 | Val rms_score: 0.5357
2025-09-26 13:01:42,043 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1753 | Val rms_score: 0.5321
2025-09-26 13:01:53,411 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1348 | Val rms_score: 0.5441
2025-09-26 13:02:05,925 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1241 | Val rms_score: 0.5511
2025-09-26 13:02:16,382 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1319 | Val rms_score: 0.5370
2025-09-26 13:02:29,279 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1387 | Val rms_score: 0.5284
2025-09-26 13:02:40,273 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1137 | Val rms_score: 0.5419
2025-09-26 13:02:52,569 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1072 | Val rms_score: 0.5446
2025-09-26 13:03:05,550 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1078 | Val rms_score: 0.5405
2025-09-26 13:03:17,572 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0968 | Val rms_score: 0.5522
2025-09-26 13:03:30,019 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0959 | Val rms_score: 0.5534
2025-09-26 13:03:40,648 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0982 | Val rms_score: 0.5476
2025-09-26 13:03:53,220 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0964 | Val rms_score: 0.5391
2025-09-26 13:04:06,584 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0907 | Val rms_score: 0.5471
2025-09-26 13:04:20,018 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0755 | Val rms_score: 0.5503
2025-09-26 13:04:33,390 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0786 | Val rms_score: 0.5530
2025-09-26 13:04:46,030 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0791 | Val rms_score: 0.5526
2025-09-26 13:04:58,606 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0790 | Val rms_score: 0.5489
2025-09-26 13:05:10,769 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0707 | Val rms_score: 0.5383
2025-09-26 13:05:23,202 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0693 | Val rms_score: 0.5470
2025-09-26 13:05:36,144 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0651 | Val rms_score: 0.5475
2025-09-26 13:05:48,656 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0673 | Val rms_score: 0.5447
2025-09-26 13:06:01,452 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0625 | Val rms_score: 0.5396
2025-09-26 13:06:14,373 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0590 | Val rms_score: 0.5333
2025-09-26 13:06:24,831 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0629 | Val rms_score: 0.5414
2025-09-26 13:06:37,446 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0591 | Val rms_score: 0.5393
2025-09-26 13:06:49,789 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0629 | Val rms_score: 0.5302
2025-09-26 13:07:02,491 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0498 | Val rms_score: 0.5290
2025-09-26 13:07:15,604 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0625 | Val rms_score: 0.5295
2025-09-26 13:07:26,837 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0642 | Val rms_score: 0.5401
|
| 267 |
+
2025-09-26 13:07:38,250 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0505 | Val rms_score: 0.5394
|
| 268 |
+
2025-09-26 13:07:48,626 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0634 | Val rms_score: 0.5404
|
| 269 |
+
2025-09-26 13:08:01,444 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0590 | Val rms_score: 0.5393
|
| 270 |
+
2025-09-26 13:08:13,703 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0574 | Val rms_score: 0.5398
|
| 271 |
+
2025-09-26 13:08:24,273 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0508 | Val rms_score: 0.5413
|
| 272 |
+
2025-09-26 13:08:36,702 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0477 | Val rms_score: 0.5362
|
| 273 |
+
2025-09-26 13:08:49,740 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0494 | Val rms_score: 0.5275
|
| 274 |
+
2025-09-26 13:09:02,477 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0477 | Val rms_score: 0.5291
|
| 275 |
+
2025-09-26 13:09:14,273 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0473 | Val rms_score: 0.5382
|
| 276 |
+
2025-09-26 13:09:25,224 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0499 | Val rms_score: 0.5384
|
| 277 |
+
2025-09-26 13:09:36,879 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0438 | Val rms_score: 0.5364
|
| 278 |
+
2025-09-26 13:09:49,876 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0420 | Val rms_score: 0.5363
|
| 279 |
+
2025-09-26 13:10:02,359 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0475 | Val rms_score: 0.5340
|
| 280 |
+
2025-09-26 13:10:14,401 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0414 | Val rms_score: 0.5351
|
| 281 |
+
2025-09-26 13:10:28,212 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0527 | Val rms_score: 0.5326
|
| 282 |
+
2025-09-26 13:10:39,362 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0421 | Val rms_score: 0.5294
|
| 283 |
+
2025-09-26 13:10:52,404 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0410 | Val rms_score: 0.5344
|
| 284 |
+
2025-09-26 13:11:05,129 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0443 | Val rms_score: 0.5300
|
| 285 |
+
2025-09-26 13:11:17,414 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0436 | Val rms_score: 0.5334
|
| 286 |
+
2025-09-26 13:11:27,972 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0391 | Val rms_score: 0.5299
|
| 287 |
+
2025-09-26 13:11:40,138 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0425 | Val rms_score: 0.5320
|
| 288 |
+
2025-09-26 13:11:52,936 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0404 | Val rms_score: 0.5289
|
| 289 |
+
2025-09-26 13:12:05,445 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0354 | Val rms_score: 0.5289
|
| 290 |
+
2025-09-26 13:12:17,709 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0373 | Val rms_score: 0.5276
|
| 291 |
+
2025-09-26 13:12:29,041 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0412 | Val rms_score: 0.5330
|
| 292 |
+
2025-09-26 13:12:40,416 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0400 | Val rms_score: 0.5239
|
| 293 |
+
2025-09-26 13:12:53,239 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0384 | Val rms_score: 0.5280
|
| 294 |
+
2025-09-26 13:13:05,615 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0362 | Val rms_score: 0.5247
|
| 295 |
+
2025-09-26 13:13:18,039 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0340 | Val rms_score: 0.5251
|
| 296 |
+
2025-09-26 13:13:29,135 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0373 | Val rms_score: 0.5249
|
| 297 |
+
2025-09-26 13:13:41,817 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0386 | Val rms_score: 0.5250
|
| 298 |
+
2025-09-26 13:13:54,874 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0391 | Val rms_score: 0.5257
|
| 299 |
+
2025-09-26 13:14:07,441 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0367 | Val rms_score: 0.5216
|
| 300 |
+
2025-09-26 13:14:19,892 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0408 | Val rms_score: 0.5245
|
| 301 |
+
2025-09-26 13:14:32,489 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0360 | Val rms_score: 0.5228
|
| 302 |
+
2025-09-26 13:14:43,999 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0362 | Val rms_score: 0.5256
|
| 303 |
+
2025-09-26 13:14:56,369 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0417 | Val rms_score: 0.5231
|
| 304 |
+
2025-09-26 13:15:09,511 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0326 | Val rms_score: 0.5268
|
| 305 |
+
2025-09-26 13:15:20,575 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0352 | Val rms_score: 0.5243
|
| 306 |
+
2025-09-26 13:15:33,380 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0325 | Val rms_score: 0.5250
|
| 307 |
+
2025-09-26 13:15:46,061 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0323 | Val rms_score: 0.5238
|
| 308 |
+
2025-09-26 13:15:59,670 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0310 | Val rms_score: 0.5231
|
| 309 |
+
2025-09-26 13:16:13,209 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0322 | Val rms_score: 0.5254
|
| 310 |
+
2025-09-26 13:16:24,153 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0326 | Val rms_score: 0.5229
|
| 311 |
+
2025-09-26 13:16:36,492 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0323 | Val rms_score: 0.5269
|
| 312 |
+
2025-09-26 13:16:49,497 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0327 | Val rms_score: 0.5272
|
| 313 |
+
2025-09-26 13:17:01,609 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0315 | Val rms_score: 0.5254
|
| 314 |
+
2025-09-26 13:17:13,012 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0222 | Val rms_score: 0.5237
|
| 315 |
+
2025-09-26 13:17:24,993 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0326 | Val rms_score: 0.5219
|
| 316 |
+
2025-09-26 13:17:37,897 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0326 | Val rms_score: 0.5270
|
| 317 |
+
2025-09-26 13:17:50,501 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0288 | Val rms_score: 0.5282
|
| 318 |
+
2025-09-26 13:18:02,819 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0336 | Val rms_score: 0.5267
|
| 319 |
+
2025-09-26 13:18:14,472 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0278 | Val rms_score: 0.5257
|
| 320 |
+
2025-09-26 13:18:26,232 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0344 | Val rms_score: 0.5265
|
| 321 |
+
2025-09-26 13:18:39,024 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0291 | Val rms_score: 0.5229
|
| 322 |
+
2025-09-26 13:18:53,318 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0336 | Val rms_score: 0.5251
|
| 323 |
+
2025-09-26 13:19:06,295 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0329 | Val rms_score: 0.5216
|
| 324 |
+
2025-09-26 13:19:18,662 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0312 | Val rms_score: 0.5230
|
| 325 |
+
2025-09-26 13:19:31,594 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0299 | Val rms_score: 0.5229
|
| 326 |
+
2025-09-26 13:19:32,578 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Test rms_score: 0.4878
|
| 327 |
+
2025-09-26 13:19:32,966 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.4812, Std Dev: 0.0073
|
logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_astrazeneca_logd74_epochs100_batch_size32_20250926_131932.log
ADDED
@@ -0,0 +1,391 @@
2025-09-26 13:19:32,967 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Running benchmark for dataset: astrazeneca_logd74
|
| 2 |
+
2025-09-26 13:19:32,967 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - dataset: astrazeneca_logd74, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
|
| 3 |
+
2025-09-26 13:19:32,971 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset astrazeneca_logd74 at 2025-09-26_13-19-32
|
| 4 |
+
2025-09-26 13:20:02,347 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.6219 | Val rms_score: 0.8734
|
| 5 |
+
2025-09-26 13:20:02,347 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 105
|
| 6 |
+
2025-09-26 13:20:02,925 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.8734
|
| 7 |
+
2025-09-26 13:20:33,275 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.3906 | Val rms_score: 0.7758
|
| 8 |
+
2025-09-26 13:20:33,460 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 210
|
| 9 |
+
2025-09-26 13:20:34,023 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.7758
|
| 10 |
+
2025-09-26 13:21:04,804 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.2635 | Val rms_score: 0.7247
|
| 11 |
+
2025-09-26 13:21:05,004 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 315
|
| 12 |
+
2025-09-26 13:21:05,568 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.7247
|
| 13 |
+
2025-09-26 13:21:38,464 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2141 | Val rms_score: 0.7279
|
| 14 |
+
2025-09-26 13:22:10,300 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.1825 | Val rms_score: 0.7406
|
| 15 |
+
2025-09-26 13:22:43,517 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1615 | Val rms_score: 0.7530
|
| 16 |
+
2025-09-26 13:23:16,181 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1295 | Val rms_score: 0.7322
|
| 17 |
+
2025-09-26 13:23:49,407 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1187 | Val rms_score: 0.7280
|
| 18 |
+
2025-09-26 13:24:21,781 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1076 | Val rms_score: 0.7461
|
| 19 |
+
2025-09-26 13:24:55,518 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0912 | Val rms_score: 0.7169
|
| 20 |
+
2025-09-26 13:24:55,693 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1050
|
| 21 |
+
2025-09-26 13:24:56,301 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.7169
|
| 22 |
+
2025-09-26 13:25:29,082 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0847 | Val rms_score: 0.7236
|
| 23 |
+
2025-09-26 13:26:01,759 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0792 | Val rms_score: 0.7302
|
| 24 |
+
2025-09-26 13:26:33,205 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0788 | Val rms_score: 0.7259
|
| 25 |
+
2025-09-26 13:27:05,203 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0701 | Val rms_score: 0.7361
|
| 26 |
+
2025-09-26 13:27:37,007 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0667 | Val rms_score: 0.7112
|
| 27 |
+
2025-09-26 13:27:37,185 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1575
|
| 28 |
+
2025-09-26 13:27:37,804 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 15 with val rms_score: 0.7112
|
| 29 |
+
2025-09-26 13:28:10,485 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0633 | Val rms_score: 0.7223
|
| 30 |
+
2025-09-26 13:28:43,745 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0607 | Val rms_score: 0.7267
|
| 31 |
+
2025-09-26 13:29:16,552 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0576 | Val rms_score: 0.7248
|
| 32 |
+
2025-09-26 13:29:49,075 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0510 | Val rms_score: 0.7263
|
| 33 |
+
2025-09-26 13:30:23,237 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0522 | Val rms_score: 0.7268
|
| 34 |
+
2025-09-26 13:30:56,670 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0520 | Val rms_score: 0.7232
|
| 35 |
+
2025-09-26 13:31:30,143 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0555 | Val rms_score: 0.7180
|
| 36 |
+
2025-09-26 13:32:03,322 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0461 | Val rms_score: 0.7205
|
| 37 |
+
2025-09-26 13:32:36,418 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0447 | Val rms_score: 0.7238
|
| 38 |
+
2025-09-26 13:33:08,768 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0409 | Val rms_score: 0.7143
|
| 39 |
+
2025-09-26 13:33:40,781 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0474 | Val rms_score: 0.7139
|
| 40 |
+
2025-09-26 13:34:13,638 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0420 | Val rms_score: 0.7141
|
| 41 |
+
2025-09-26 13:34:43,714 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0430 | Val rms_score: 0.7150
|
| 42 |
+
2025-09-26 13:35:15,836 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0403 | Val rms_score: 0.7156
|
| 43 |
+
2025-09-26 13:35:48,472 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0422 | Val rms_score: 0.7205
|
| 44 |
+
2025-09-26 13:36:21,663 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0403 | Val rms_score: 0.7138
|
| 45 |
+
2025-09-26 13:36:55,698 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0393 | Val rms_score: 0.7216
|
| 46 |
+
2025-09-26 13:37:28,571 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0377 | Val rms_score: 0.7155
|
| 47 |
+
2025-09-26 13:38:01,404 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0373 | Val rms_score: 0.7156
|
| 48 |
+
2025-09-26 13:38:33,030 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0348 | Val rms_score: 0.7117
|
| 49 |
+
2025-09-26 13:39:04,061 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0340 | Val rms_score: 0.7093
|
| 50 |
+
2025-09-26 13:39:04,568 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 3780
|
| 51 |
+
2025-09-26 13:39:05,145 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 36 with val rms_score: 0.7093
|
| 52 |
+
2025-09-26 13:39:38,082 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0360 | Val rms_score: 0.7109
|
| 53 |
+
2025-09-26 13:40:10,120 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0332 | Val rms_score: 0.7076
|
| 54 |
+
2025-09-26 13:40:10,281 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 3990
|
| 55 |
+
2025-09-26 13:40:10,858 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 38 with val rms_score: 0.7076
|
| 56 |
+
2025-09-26 13:40:44,660 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0336 | Val rms_score: 0.7040
|
| 57 |
+
2025-09-26 13:40:44,833 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 4095
|
| 58 |
+
2025-09-26 13:40:45,388 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 39 with val rms_score: 0.7040
|
| 59 |
+
2025-09-26 13:41:18,540 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0325 | Val rms_score: 0.7113
|
| 60 |
+
2025-09-26 13:41:51,015 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0293 | Val rms_score: 0.7078
|
| 61 |
+
2025-09-26 13:42:24,744 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0334 | Val rms_score: 0.7108
|
| 62 |
+
2025-09-26 13:42:57,616 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0290 | Val rms_score: 0.7046
|
| 63 |
+
2025-09-26 13:43:29,616 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0307 | Val rms_score: 0.7098
|
| 64 |
+
2025-09-26 13:44:02,416 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0322 | Val rms_score: 0.7033
|
| 65 |
+
2025-09-26 13:44:02,579 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 4725
|
| 66 |
+
2025-09-26 13:44:03,205 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 45 with val rms_score: 0.7033
|
| 67 |
+
2025-09-26 13:44:35,621 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0303 | Val rms_score: 0.7023
|
| 68 |
+
2025-09-26 13:44:36,227 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 4830
|
| 69 |
+
2025-09-26 13:44:36,824 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 46 with val rms_score: 0.7023
|
| 70 |
+
2025-09-26 13:45:08,693 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0308 | Val rms_score: 0.7034
|
| 71 |
+
2025-09-26 13:45:40,185 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0305 | Val rms_score: 0.7013
|
| 72 |
+
2025-09-26 13:45:40,357 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 5040
|
| 73 |
+
2025-09-26 13:45:40,977 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 48 with val rms_score: 0.7013
|
| 74 |
+
2025-09-26 13:46:13,251 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0311 | Val rms_score: 0.7062
|
| 75 |
+
2025-09-26 13:46:45,610 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0309 | Val rms_score: 0.7015
|
| 76 |
+
2025-09-26 13:47:17,632 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0297 | Val rms_score: 0.7083
|
| 77 |
+
2025-09-26 13:47:51,089 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0323 | Val rms_score: 0.7096
|
| 78 |
+
2025-09-26 13:48:24,270 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0303 | Val rms_score: 0.7049
|
| 79 |
+
2025-09-26 13:48:55,710 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0275 | Val rms_score: 0.7043
|
| 80 |
+
2025-09-26 13:49:28,281 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0288 | Val rms_score: 0.7073
|
| 81 |
+
2025-09-26 13:50:01,553 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0309 | Val rms_score: 0.7059
|
| 82 |
+
2025-09-26 13:50:35,145 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0285 | Val rms_score: 0.7004
|
| 83 |
+
2025-09-26 13:50:35,302 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 5985
|
| 84 |
+
2025-09-26 13:50:35,871 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 57 with val rms_score: 0.7004
|
| 85 |
+
2025-09-26 13:51:09,617 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0278 | Val rms_score: 0.7025
|
| 86 |
+
2025-09-26 13:51:42,968 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0273 | Val rms_score: 0.7036
|
| 87 |
+
2025-09-26 13:52:15,435 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0264 | Val rms_score: 0.6990
|
| 88 |
+
2025-09-26 13:52:15,591 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 6300
|
| 89 |
+
2025-09-26 13:52:16,178 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 60 with val rms_score: 0.6990
|
| 90 |
+
2025-09-26 13:52:48,473 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0238 | Val rms_score: 0.6974
|
| 91 |
+
2025-09-26 13:52:49,103 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 6405
|
| 92 |
+
2025-09-26 13:52:49,700 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 61 with val rms_score: 0.6974
|
| 93 |
+
2025-09-26 13:53:23,933 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0275 | Val rms_score: 0.7063
|
| 94 |
+
2025-09-26 13:53:56,995 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0260 | Val rms_score: 0.7026
|
| 95 |
+
2025-09-26 13:54:29,450 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0293 | Val rms_score: 0.7036
|
| 96 |
+
2025-09-26 13:55:02,532 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0248 | Val rms_score: 0.7020
|
| 97 |
+
2025-09-26 13:55:35,022 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0296 | Val rms_score: 0.7059
|
| 98 |
+
2025-09-26 13:56:09,404 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0258 | Val rms_score: 0.7016
|
| 99 |
+
2025-09-26 13:56:42,653 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0240 | Val rms_score: 0.7010
|
| 100 |
+
2025-09-26 13:57:15,631 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0273 | Val rms_score: 0.7033
|
| 101 |
+
2025-09-26 13:57:48,032 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0244 | Val rms_score: 0.7010
|
| 102 |
+
2025-09-26 13:58:19,698 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0241 | Val rms_score: 0.7018
|
| 103 |
+
2025-09-26 13:58:52,014 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0262 | Val rms_score: 0.7001
|
| 104 |
+
2025-09-26 13:59:25,288 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0251 | Val rms_score: 0.6989
|
| 105 |
+
2025-09-26 13:59:58,330 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0258 | Val rms_score: 0.6985
|
| 106 |
+
2025-09-26 14:00:30,875 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0247 | Val rms_score: 0.7018
|
| 107 |
+
2025-09-26 14:01:03,504 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0238 | Val rms_score: 0.7027
|
| 108 |
+
2025-09-26 14:01:38,393 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0252 | Val rms_score: 0.7036
|
| 109 |
+
2025-09-26 14:02:11,842 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0243 | Val rms_score: 0.6990
|
| 110 |
+
2025-09-26 14:02:44,939 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0243 | Val rms_score: 0.6974
|
| 111 |
+
2025-09-26 14:03:18,123 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0247 | Val rms_score: 0.6980
|
| 112 |
+
2025-09-26 14:03:51,188 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0213 | Val rms_score: 0.6999
|
| 113 |
+
2025-09-26 14:04:24,643 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0252 | Val rms_score: 0.6978
|
| 114 |
+
2025-09-26 14:04:56,165 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0260 | Val rms_score: 0.6943
|
| 115 |
+
2025-09-26 14:04:56,321 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 8715
|
| 116 |
+
2025-09-26 14:04:56,884 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 83 with val rms_score: 0.6943
|
| 117 |
+
2025-09-26 14:05:29,692 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0244 | Val rms_score: 0.6969
|
| 118 |
+
2025-09-26 14:06:01,670 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0233 | Val rms_score: 0.7006
|
| 119 |
+
2025-09-26 14:06:35,531 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0242 | Val rms_score: 0.6950
|
| 120 |
+
2025-09-26 14:07:08,989 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0239 | Val rms_score: 0.7005
|
| 121 |
+
2025-09-26 14:07:41,707 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0226 | Val rms_score: 0.6967
|
| 122 |
+
2025-09-26 14:08:14,284 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0226 | Val rms_score: 0.6961
|
| 123 |
+
2025-09-26 14:08:47,118 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0227 | Val rms_score: 0.6962
|
| 124 |
+
2025-09-26 14:09:19,573 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0240 | Val rms_score: 0.6945
2025-09-26 14:09:52,833 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0238 | Val rms_score: 0.6963
2025-09-26 14:10:25,504 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0237 | Val rms_score: 0.6941
2025-09-26 14:10:25,661 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 9765
2025-09-26 14:10:26,381 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 93 with val rms_score: 0.6941
2025-09-26 14:10:58,530 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0219 | Val rms_score: 0.6923
2025-09-26 14:10:58,693 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 9870
2025-09-26 14:10:59,264 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 94 with val rms_score: 0.6923
2025-09-26 14:11:31,036 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0218 | Val rms_score: 0.6954
2025-09-26 14:12:04,018 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0227 | Val rms_score: 0.6984
2025-09-26 14:12:37,401 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0212 | Val rms_score: 0.6957
2025-09-26 14:13:08,220 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0226 | Val rms_score: 0.6959
2025-09-26 14:13:38,333 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0222 | Val rms_score: 0.6947
2025-09-26 14:14:10,224 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0225 | Val rms_score: 0.6947
2025-09-26 14:14:12,094 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Test rms_score: 0.8117
2025-09-26 14:14:12,481 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset astrazeneca_logd74 at 2025-09-26_14-14-12
2025-09-26 14:14:43,317 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.4344 | Val rms_score: 0.8715
2025-09-26 14:14:43,317 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 105
2025-09-26 14:14:43,890 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.8715
2025-09-26 14:15:15,420 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.3422 | Val rms_score: 0.7523
2025-09-26 14:15:15,617 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 210
2025-09-26 14:15:16,202 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.7523
2025-09-26 14:15:47,780 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.2552 | Val rms_score: 0.7332
2025-09-26 14:15:47,929 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 315
2025-09-26 14:15:48,485 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.7332
2025-09-26 14:16:20,537 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2109 | Val rms_score: 0.7584
2025-09-26 14:16:52,605 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.1775 | Val rms_score: 0.7377
2025-09-26 14:17:23,164 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1437 | Val rms_score: 0.7409
2025-09-26 14:17:54,910 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1232 | Val rms_score: 0.7371
2025-09-26 14:18:26,094 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1031 | Val rms_score: 0.7271
2025-09-26 14:18:26,245 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 840
2025-09-26 14:18:26,787 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.7271
2025-09-26 14:18:56,820 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0993 | Val rms_score: 0.7296
2025-09-26 14:19:27,541 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0975 | Val rms_score: 0.7220
2025-09-26 14:19:27,717 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1050
2025-09-26 14:19:28,334 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.7220
2025-09-26 14:19:58,688 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0801 | Val rms_score: 0.7328
2025-09-26 14:20:29,632 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0729 | Val rms_score: 0.7251
2025-09-26 14:21:00,525 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0659 | Val rms_score: 0.7135
2025-09-26 14:21:00,678 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1365
2025-09-26 14:21:01,219 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val rms_score: 0.7135
2025-09-26 14:21:32,685 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0612 | Val rms_score: 0.7208
2025-09-26 14:22:03,779 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0642 | Val rms_score: 0.7153
2025-09-26 14:22:34,601 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0586 | Val rms_score: 0.7184
2025-09-26 14:23:06,535 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0555 | Val rms_score: 0.7118
2025-09-26 14:23:06,688 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1785
2025-09-26 14:23:07,222 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 17 with val rms_score: 0.7118
2025-09-26 14:23:38,988 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0493 | Val rms_score: 0.7132
2025-09-26 14:24:07,969 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0503 | Val rms_score: 0.7111
2025-09-26 14:24:08,123 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1995
2025-09-26 14:24:08,688 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 19 with val rms_score: 0.7111
2025-09-26 14:24:40,579 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0512 | Val rms_score: 0.7079
2025-09-26 14:24:40,732 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 2100
2025-09-26 14:24:41,289 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 20 with val rms_score: 0.7079
2025-09-26 14:25:12,462 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0494 | Val rms_score: 0.7155
2025-09-26 14:25:44,637 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0498 | Val rms_score: 0.7118
2025-09-26 14:26:15,632 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0404 | Val rms_score: 0.7110
2025-09-26 14:26:47,310 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0418 | Val rms_score: 0.7059
2025-09-26 14:26:47,464 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 2520
2025-09-26 14:26:48,126 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 24 with val rms_score: 0.7059
2025-09-26 14:27:20,025 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0400 | Val rms_score: 0.7032
2025-09-26 14:27:20,191 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 2625
2025-09-26 14:27:20,745 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 25 with val rms_score: 0.7032
2025-09-26 14:27:52,221 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0383 | Val rms_score: 0.7054
2025-09-26 14:28:23,956 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0350 | Val rms_score: 0.7033
2025-09-26 14:28:54,734 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0404 | Val rms_score: 0.7060
2025-09-26 14:29:26,862 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0372 | Val rms_score: 0.7146
2025-09-26 14:29:58,347 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0369 | Val rms_score: 0.7089
2025-09-26 14:30:29,778 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0347 | Val rms_score: 0.7040
2025-09-26 14:31:01,888 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0339 | Val rms_score: 0.7022
2025-09-26 14:31:02,043 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 3360
2025-09-26 14:31:02,592 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 32 with val rms_score: 0.7022
2025-09-26 14:31:33,847 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0358 | Val rms_score: 0.7029
2025-09-26 14:32:05,179 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0362 | Val rms_score: 0.7034
2025-09-26 14:32:37,196 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0312 | Val rms_score: 0.7068
2025-09-26 14:33:07,544 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0324 | Val rms_score: 0.7069
2025-09-26 14:33:37,489 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0329 | Val rms_score: 0.7030
2025-09-26 14:34:08,867 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0328 | Val rms_score: 0.7078
2025-09-26 14:34:41,288 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0327 | Val rms_score: 0.6998
2025-09-26 14:34:41,450 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 4095
2025-09-26 14:34:41,999 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 39 with val rms_score: 0.6998
2025-09-26 14:35:13,166 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0322 | Val rms_score: 0.6973
2025-09-26 14:35:13,323 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 4200
2025-09-26 14:35:13,877 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 40 with val rms_score: 0.6973
2025-09-26 14:35:45,571 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0328 | Val rms_score: 0.7051
2025-09-26 14:36:17,964 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0311 | Val rms_score: 0.7026
2025-09-26 14:36:49,782 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0311 | Val rms_score: 0.7083
2025-09-26 14:37:21,444 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0314 | Val rms_score: 0.7069
2025-09-26 14:37:53,018 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0291 | Val rms_score: 0.6976
2025-09-26 14:38:22,941 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0298 | Val rms_score: 0.7014
2025-09-26 14:38:54,555 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0275 | Val rms_score: 0.6976
2025-09-26 14:39:27,780 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0314 | Val rms_score: 0.6911
2025-09-26 14:39:27,940 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 5040
2025-09-26 14:39:28,531 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 48 with val rms_score: 0.6911
2025-09-26 14:39:59,981 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0288 | Val rms_score: 0.7023
2025-09-26 14:40:31,546 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0273 | Val rms_score: 0.6929
2025-09-26 14:41:02,494 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0270 | Val rms_score: 0.6993
2025-09-26 14:41:34,653 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0268 | Val rms_score: 0.6954
2025-09-26 14:42:06,231 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0302 | Val rms_score: 0.6935
2025-09-26 14:42:37,622 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0281 | Val rms_score: 0.7001
2025-09-26 14:43:08,574 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0273 | Val rms_score: 0.6972
2025-09-26 14:43:40,134 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0271 | Val rms_score: 0.6966
2025-09-26 14:44:12,195 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0279 | Val rms_score: 0.6987
2025-09-26 14:44:44,522 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0266 | Val rms_score: 0.6985
2025-09-26 14:45:16,351 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0262 | Val rms_score: 0.6999
2025-09-26 14:45:46,726 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0256 | Val rms_score: 0.7008
2025-09-26 14:46:18,431 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0226 | Val rms_score: 0.6959
2025-09-26 14:46:50,405 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0200 | Val rms_score: 0.6948
2025-09-26 14:47:21,812 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0229 | Val rms_score: 0.6955
2025-09-26 14:47:52,361 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0236 | Val rms_score: 0.6990
2025-09-26 14:48:23,049 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0245 | Val rms_score: 0.6949
2025-09-26 14:48:54,727 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0267 | Val rms_score: 0.6978
2025-09-26 14:49:27,388 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0259 | Val rms_score: 0.6991
2025-09-26 14:49:59,417 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0258 | Val rms_score: 0.7000
2025-09-26 14:50:31,464 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0238 | Val rms_score: 0.6976
2025-09-26 14:51:02,528 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0255 | Val rms_score: 0.6941
2025-09-26 14:51:33,497 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0270 | Val rms_score: 0.6953
2025-09-26 14:52:04,200 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0250 | Val rms_score: 0.6915
2025-09-26 14:52:35,495 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0238 | Val rms_score: 0.6964
2025-09-26 14:53:07,162 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0243 | Val rms_score: 0.6903
2025-09-26 14:53:07,314 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 7770
2025-09-26 14:53:07,855 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 74 with val rms_score: 0.6903
2025-09-26 14:53:39,466 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0231 | Val rms_score: 0.6941
2025-09-26 14:54:09,926 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0220 | Val rms_score: 0.6927
2025-09-26 14:54:42,064 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0233 | Val rms_score: 0.6987
2025-09-26 14:55:13,903 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0227 | Val rms_score: 0.6984
2025-09-26 14:55:45,614 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0211 | Val rms_score: 0.6954
2025-09-26 14:56:17,311 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0234 | Val rms_score: 0.6965
2025-09-26 14:56:47,916 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0273 | Val rms_score: 0.7012
2025-09-26 14:57:19,538 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0203 | Val rms_score: 0.6948
2025-09-26 14:57:50,162 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0254 | Val rms_score: 0.6974
2025-09-26 14:58:21,566 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0229 | Val rms_score: 0.6976
2025-09-26 14:58:52,963 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0214 | Val rms_score: 0.6937
2025-09-26 14:59:24,854 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0211 | Val rms_score: 0.6966
2025-09-26 14:59:56,110 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0198 | Val rms_score: 0.6955
2025-09-26 15:00:27,306 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0218 | Val rms_score: 0.6972
2025-09-26 15:00:58,214 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0224 | Val rms_score: 0.6960
2025-09-26 15:01:27,892 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0214 | Val rms_score: 0.6947
2025-09-26 15:01:57,100 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0226 | Val rms_score: 0.6937
2025-09-26 15:02:28,603 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0227 | Val rms_score: 0.6985
2025-09-26 15:02:59,751 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0221 | Val rms_score: 0.6924
2025-09-26 15:03:30,675 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0214 | Val rms_score: 0.6991
2025-09-26 15:04:00,785 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0220 | Val rms_score: 0.6933
2025-09-26 15:04:32,192 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0218 | Val rms_score: 0.6945
2025-09-26 15:05:03,758 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0224 | Val rms_score: 0.6917
2025-09-26 15:05:34,773 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0209 | Val rms_score: 0.6900
2025-09-26 15:05:34,925 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 10290
2025-09-26 15:05:35,455 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 98 with val rms_score: 0.6900
2025-09-26 15:06:06,444 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0211 | Val rms_score: 0.6916
2025-09-26 15:06:36,763 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0216 | Val rms_score: 0.6957
2025-09-26 15:06:38,246 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Test rms_score: 0.8206
2025-09-26 15:06:38,696 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset astrazeneca_logd74 at 2025-09-26_15-06-38
2025-09-26 15:07:08,150 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5531 | Val rms_score: 0.8598
2025-09-26 15:07:08,150 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 105
2025-09-26 15:07:08,722 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.8598
2025-09-26 15:07:39,569 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.3609 | Val rms_score: 0.7799
2025-09-26 15:07:39,766 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 210
2025-09-26 15:07:40,505 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.7799
2025-09-26 15:08:11,169 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.2635 | Val rms_score: 0.7369
2025-09-26 15:08:11,324 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 315
2025-09-26 15:08:11,869 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.7369
2025-09-26 15:08:42,436 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2172 | Val rms_score: 0.7406
2025-09-26 15:09:13,839 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.1913 | Val rms_score: 0.7567
2025-09-26 15:09:44,496 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1500 | Val rms_score: 0.7624
2025-09-26 15:10:13,600 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1330 | Val rms_score: 0.7731
2025-09-26 15:10:43,330 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1094 | Val rms_score: 0.7657
2025-09-26 15:11:12,970 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1062 | Val rms_score: 0.7594
2025-09-26 15:11:44,375 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0875 | Val rms_score: 0.7521
2025-09-26 15:12:14,398 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0835 | Val rms_score: 0.7623
2025-09-26 15:12:47,065 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0802 | Val rms_score: 0.7624
2025-09-26 15:13:11,334 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0668 | Val rms_score: 0.7658
2025-09-26 15:13:24,942 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0652 | Val rms_score: 0.7613
2025-09-26 15:13:38,825 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0675 | Val rms_score: 0.7560
2025-09-26 15:13:53,082 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0598 | Val rms_score: 0.7569
2025-09-26 15:14:08,314 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0563 | Val rms_score: 0.7557
2025-09-26 15:14:21,942 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0542 | Val rms_score: 0.7482
2025-09-26 15:14:35,253 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0516 | Val rms_score: 0.7393
2025-09-26 15:14:50,185 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0516 | Val rms_score: 0.7548
2025-09-26 15:15:03,723 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0436 | Val rms_score: 0.7397
2025-09-26 15:15:17,244 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0406 | Val rms_score: 0.7599
2025-09-26 15:15:30,254 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0435 | Val rms_score: 0.7457
2025-09-26 15:15:43,002 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0479 | Val rms_score: 0.7474
2025-09-26 15:15:56,074 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0409 | Val rms_score: 0.7579
2025-09-26 15:16:08,556 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0448 | Val rms_score: 0.7459
2025-09-26 15:16:21,247 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0393 | Val rms_score: 0.7429
2025-09-26 15:16:33,969 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0412 | Val rms_score: 0.7409
2025-09-26 15:16:47,569 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0408 | Val rms_score: 0.7397
2025-09-26 15:16:59,989 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0431 | Val rms_score: 0.7407
2025-09-26 15:17:12,931 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0384 | Val rms_score: 0.7338
2025-09-26 15:17:13,575 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 3255
2025-09-26 15:17:14,245 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 31 with val rms_score: 0.7338
2025-09-26 15:17:27,051 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0367 | Val rms_score: 0.7407
2025-09-26 15:17:40,513 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0387 | Val rms_score: 0.7422
2025-09-26 15:17:53,448 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0362 | Val rms_score: 0.7411
2025-09-26 15:18:06,593 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0354 | Val rms_score: 0.7400
2025-09-26 15:18:17,183 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0371 | Val rms_score: 0.7496
2025-09-26 15:18:27,820 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0368 | Val rms_score: 0.7414
2025-09-26 15:18:38,389 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0340 | Val rms_score: 0.7453
2025-09-26 15:18:49,147 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0332 | Val rms_score: 0.7440
2025-09-26 15:18:59,677 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0323 | Val rms_score: 0.7387
2025-09-26 15:19:09,396 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0354 | Val rms_score: 0.7436
|
| 325 |
+
2025-09-26 15:19:19,590 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0303 | Val rms_score: 0.7427
|
| 326 |
+
2025-09-26 15:19:29,841 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0329 | Val rms_score: 0.7416
|
| 327 |
+
2025-09-26 15:19:39,611 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0309 | Val rms_score: 0.7389
|
| 328 |
+
2025-09-26 15:19:49,428 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0297 | Val rms_score: 0.7359
|
| 329 |
+
2025-09-26 15:19:58,655 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0297 | Val rms_score: 0.7428
|
| 330 |
+
2025-09-26 15:20:08,566 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0275 | Val rms_score: 0.7363
|
| 331 |
+
2025-09-26 15:20:19,188 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0297 | Val rms_score: 0.7408
|
| 332 |
+
2025-09-26 15:20:28,417 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0304 | Val rms_score: 0.7306
|
| 333 |
+
2025-09-26 15:20:28,577 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 5145
|
| 334 |
+
2025-09-26 15:20:29,167 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 49 with val rms_score: 0.7306
|
| 335 |
+
2025-09-26 15:20:38,811 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0308 | Val rms_score: 0.7410
|
| 336 |
+
2025-09-26 15:20:49,435 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0303 | Val rms_score: 0.7373
|
| 337 |
+
2025-09-26 15:21:00,497 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0267 | Val rms_score: 0.7365
|
| 338 |
+
2025-09-26 15:21:11,130 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0288 | Val rms_score: 0.7462
|
| 339 |
+
2025-09-26 15:21:21,495 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0290 | Val rms_score: 0.7339
|
| 340 |
+
2025-09-26 15:21:31,709 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0279 | Val rms_score: 0.7385
|
| 341 |
+
2025-09-26 15:21:41,891 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0289 | Val rms_score: 0.7362
|
| 342 |
+
2025-09-26 15:21:52,345 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0268 | Val rms_score: 0.7352
|
| 343 |
+
2025-09-26 15:22:03,277 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0262 | Val rms_score: 0.7390
|
| 344 |
+
2025-09-26 15:22:13,681 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0262 | Val rms_score: 0.7339
|
| 345 |
+
2025-09-26 15:22:24,004 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0262 | Val rms_score: 0.7353
|
| 346 |
+
2025-09-26 15:22:34,784 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0219 | Val rms_score: 0.7380
|
| 347 |
+
2025-09-26 15:22:45,435 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0279 | Val rms_score: 0.7325
|
| 348 |
+
2025-09-26 15:22:55,803 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0326 | Val rms_score: 0.7327
|
| 349 |
+
2025-09-26 15:23:06,106 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0260 | Val rms_score: 0.7343
|
| 350 |
+
2025-09-26 15:23:16,565 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0252 | Val rms_score: 0.7360
|
| 351 |
+
2025-09-26 15:23:26,864 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0247 | Val rms_score: 0.7342
|
| 352 |
+
2025-09-26 15:23:38,637 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0270 | Val rms_score: 0.7338
|
| 353 |
+
2025-09-26 15:23:48,607 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0238 | Val rms_score: 0.7351
|
| 354 |
+
2025-09-26 15:23:59,252 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0248 | Val rms_score: 0.7327
|
| 355 |
+
2025-09-26 15:24:10,176 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0262 | Val rms_score: 0.7339
|
| 356 |
+
2025-09-26 15:24:21,044 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0244 | Val rms_score: 0.7331
|
| 357 |
+
2025-09-26 15:24:31,934 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0246 | Val rms_score: 0.7389
|
| 358 |
+
2025-09-26 15:24:42,669 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0250 | Val rms_score: 0.7378
|
| 359 |
+
2025-09-26 15:24:52,969 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0249 | Val rms_score: 0.7369
|
| 360 |
+
2025-09-26 15:25:03,074 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0240 | Val rms_score: 0.7347
|
| 361 |
+
2025-09-26 15:25:13,226 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0245 | Val rms_score: 0.7315
|
| 362 |
+
2025-09-26 15:25:24,374 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0241 | Val rms_score: 0.7322
|
| 363 |
+
2025-09-26 15:25:34,795 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0240 | Val rms_score: 0.7334
|
| 364 |
+
2025-09-26 15:25:45,219 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0242 | Val rms_score: 0.7330
|
| 365 |
+
2025-09-26 15:25:55,479 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0239 | Val rms_score: 0.7339
|
| 366 |
+
2025-09-26 15:26:05,729 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0262 | Val rms_score: 0.7323
|
| 367 |
+
2025-09-26 15:26:16,887 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0207 | Val rms_score: 0.7323
|
| 368 |
+
2025-09-26 15:26:27,254 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0204 | Val rms_score: 0.7322
|
| 369 |
+
2025-09-26 15:26:37,625 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0224 | Val rms_score: 0.7330
|
| 370 |
+
2025-09-26 15:26:48,662 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0261 | Val rms_score: 0.7349
|
| 371 |
+
2025-09-26 15:27:00,296 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0237 | Val rms_score: 0.7363
|
| 372 |
+
2025-09-26 15:27:11,976 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0214 | Val rms_score: 0.7279
|
| 373 |
+
2025-09-26 15:27:12,144 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 9135
|
| 374 |
+
2025-09-26 15:27:12,975 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 87 with val rms_score: 0.7279
|
| 375 |
+
2025-09-26 15:27:25,219 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0235 | Val rms_score: 0.7350
|
| 376 |
+
2025-09-26 15:27:36,747 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0219 | Val rms_score: 0.7326
|
| 377 |
+
2025-09-26 15:27:47,915 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0241 | Val rms_score: 0.7301
|
| 378 |
+
2025-09-26 15:27:59,045 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0233 | Val rms_score: 0.7279
|
| 379 |
+
2025-09-26 15:28:10,742 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0224 | Val rms_score: 0.7335
|
| 380 |
+
2025-09-26 15:28:22,042 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0208 | Val rms_score: 0.7349
|
| 381 |
+
2025-09-26 15:28:32,963 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0214 | Val rms_score: 0.7400
|
| 382 |
+
2025-09-26 15:28:44,019 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0218 | Val rms_score: 0.7360
|
| 383 |
+
2025-09-26 15:28:55,681 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0224 | Val rms_score: 0.7319
|
| 384 |
+
2025-09-26 15:29:06,423 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0221 | Val rms_score: 0.7333
|
| 385 |
+
2025-09-26 15:29:16,919 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0226 | Val rms_score: 0.7325
|
| 386 |
+
2025-09-26 15:29:27,847 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0232 | Val rms_score: 0.7262
|
| 387 |
+
2025-09-26 15:29:28,022 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 10395
|
| 388 |
+
2025-09-26 15:29:28,692 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 99 with val rms_score: 0.7262
|
| 389 |
+
2025-09-26 15:29:39,293 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0227 | Val rms_score: 0.7412
|
| 390 |
+
2025-09-26 15:29:39,945 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Test rms_score: 0.8290
|
| 391 |
+
2025-09-26 15:29:40,489 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.8204, Std Dev: 0.0070
|
logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_astrazeneca_ppb_epochs100_batch_size32_20250926_152940.log
ADDED
| 1 |
+
2025-09-26 15:29:40,491 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Running benchmark for dataset: astrazeneca_ppb
|
| 2 |
+
2025-09-26 15:29:40,492 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - dataset: astrazeneca_ppb, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
|
| 3 |
+
2025-09-26 15:29:40,497 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset astrazeneca_ppb at 2025-09-26_15-29-40
|
| 4 |
+
2025-09-26 15:29:45,581 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.0778 | Val rms_score: 0.1269
|
| 5 |
+
2025-09-26 15:29:45,581 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 45
|
| 6 |
+
2025-09-26 15:29:47,573 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.1269
|
| 7 |
+
2025-09-26 15:29:52,235 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.6389 | Val rms_score: 0.1183
|
| 8 |
+
2025-09-26 15:29:52,417 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 90
|
| 9 |
+
2025-09-26 15:29:53,033 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.1183
|
| 10 |
+
2025-09-26 15:29:58,386 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4321 | Val rms_score: 0.1196
|
| 11 |
+
2025-09-26 15:30:03,416 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4056 | Val rms_score: 0.1282
|
| 12 |
+
2025-09-26 15:30:09,013 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3200 | Val rms_score: 0.1254
|
| 13 |
+
2025-09-26 15:30:14,342 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2472 | Val rms_score: 0.1282
|
| 14 |
+
2025-09-26 15:30:20,398 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2146 | Val rms_score: 0.1178
|
| 15 |
+
2025-09-26 15:30:20,600 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 315
|
| 16 |
+
2025-09-26 15:30:21,224 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.1178
|
| 17 |
+
2025-09-26 15:30:26,461 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1778 | Val rms_score: 0.1216
|
| 18 |
+
2025-09-26 15:30:32,259 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1656 | Val rms_score: 0.1252
|
| 19 |
+
2025-09-26 15:30:37,957 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1271 | Val rms_score: 0.1237
|
| 20 |
+
2025-09-26 15:30:43,482 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1222 | Val rms_score: 0.1204
|
| 21 |
+
2025-09-26 15:30:49,322 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1156 | Val rms_score: 0.1226
|
| 22 |
+
2025-09-26 15:30:54,404 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0993 | Val rms_score: 0.1199
|
| 23 |
+
2025-09-26 15:30:59,967 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0948 | Val rms_score: 0.1230
|
| 24 |
+
2025-09-26 15:31:05,582 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0844 | Val rms_score: 0.1204
|
| 25 |
+
2025-09-26 15:31:11,146 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0832 | Val rms_score: 0.1229
|
| 26 |
+
2025-09-26 15:31:16,728 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0844 | Val rms_score: 0.1236
|
| 27 |
+
2025-09-26 15:31:21,862 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0898 | Val rms_score: 0.1206
|
| 28 |
+
2025-09-26 15:31:27,580 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0719 | Val rms_score: 0.1239
|
| 29 |
+
2025-09-26 15:31:33,333 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0694 | Val rms_score: 0.1232
|
| 30 |
+
2025-09-26 15:31:38,985 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0691 | Val rms_score: 0.1205
|
| 31 |
+
2025-09-26 15:31:44,850 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0719 | Val rms_score: 0.1201
|
| 32 |
+
2025-09-26 15:31:50,404 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0629 | Val rms_score: 0.1196
|
| 33 |
+
2025-09-26 15:31:55,282 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0601 | Val rms_score: 0.1192
|
| 34 |
+
2025-09-26 15:32:00,685 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0506 | Val rms_score: 0.1220
|
| 35 |
+
2025-09-26 15:32:06,339 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0552 | Val rms_score: 0.1220
|
| 36 |
+
2025-09-26 15:32:12,166 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0516 | Val rms_score: 0.1184
|
| 37 |
+
2025-09-26 15:32:17,376 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0559 | Val rms_score: 0.1191
|
| 38 |
+
2025-09-26 15:32:22,833 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0426 | Val rms_score: 0.1200
|
| 39 |
+
2025-09-26 15:32:28,417 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0538 | Val rms_score: 0.1206
|
| 40 |
+
2025-09-26 15:32:34,086 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0569 | Val rms_score: 0.1216
|
| 41 |
+
2025-09-26 15:32:39,728 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0520 | Val rms_score: 0.1207
|
| 42 |
+
2025-09-26 15:32:44,887 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0507 | Val rms_score: 0.1189
|
| 43 |
+
2025-09-26 15:32:50,561 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0490 | Val rms_score: 0.1201
|
| 44 |
+
2025-09-26 15:32:56,103 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0443 | Val rms_score: 0.1206
|
| 45 |
+
2025-09-26 15:33:01,674 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0480 | Val rms_score: 0.1221
|
| 46 |
+
2025-09-26 15:33:07,426 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0410 | Val rms_score: 0.1220
|
| 47 |
+
2025-09-26 15:33:12,651 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0393 | Val rms_score: 0.1209
|
| 48 |
+
2025-09-26 15:33:18,231 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0398 | Val rms_score: 0.1212
|
| 49 |
+
2025-09-26 15:33:23,942 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0431 | Val rms_score: 0.1210
|
| 50 |
+
2025-09-26 15:33:29,514 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0378 | Val rms_score: 0.1212
|
| 51 |
+
2025-09-26 15:33:35,254 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0408 | Val rms_score: 0.1206
|
| 52 |
+
2025-09-26 15:33:40,477 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0393 | Val rms_score: 0.1207
|
| 53 |
+
2025-09-26 15:33:46,102 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0384 | Val rms_score: 0.1210
|
| 54 |
+
2025-09-26 15:33:52,073 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0323 | Val rms_score: 0.1208
|
| 55 |
+
2025-09-26 15:33:57,073 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0391 | Val rms_score: 0.1206
|
| 56 |
+
2025-09-26 15:34:02,412 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0508 | Val rms_score: 0.1204
|
| 57 |
+
2025-09-26 15:34:07,696 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0387 | Val rms_score: 0.1220
|
| 58 |
+
2025-09-26 15:34:13,331 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0379 | Val rms_score: 0.1207
|
| 59 |
+
2025-09-26 15:34:19,000 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0340 | Val rms_score: 0.1219
|
| 60 |
+
2025-09-26 15:34:24,733 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0345 | Val rms_score: 0.1217
|
| 61 |
+
2025-09-26 15:34:30,542 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0387 | Val rms_score: 0.1222
|
| 62 |
+
2025-09-26 15:34:35,796 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0351 | Val rms_score: 0.1212
|
| 63 |
+
2025-09-26 15:34:41,572 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0359 | Val rms_score: 0.1197
|
| 64 |
+
2025-09-26 15:34:47,407 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0352 | Val rms_score: 0.1213
|
| 65 |
+
2025-09-26 15:34:52,999 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0375 | Val rms_score: 0.1215
|
| 66 |
+
2025-09-26 15:34:58,808 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0363 | Val rms_score: 0.1213
|
| 67 |
+
2025-09-26 15:35:04,151 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0332 | Val rms_score: 0.1207
|
| 68 |
+
2025-09-26 15:35:09,874 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0345 | Val rms_score: 0.1204
|
| 69 |
+
2025-09-26 15:35:15,580 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0345 | Val rms_score: 0.1210
|
| 70 |
+
2025-09-26 15:35:21,140 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0342 | Val rms_score: 0.1200
|
| 71 |
+
2025-09-26 15:35:27,024 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0337 | Val rms_score: 0.1203
|
| 72 |
+
2025-09-26 15:35:32,388 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0310 | Val rms_score: 0.1211
|
| 73 |
+
2025-09-26 15:35:37,955 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0339 | Val rms_score: 0.1219
|
| 74 |
+
2025-09-26 15:35:43,735 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0264 | Val rms_score: 0.1208
|
| 75 |
+
2025-09-26 15:35:48,960 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0319 | Val rms_score: 0.1198
|
| 76 |
+
2025-09-26 15:35:55,645 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0383 | Val rms_score: 0.1205
|
| 77 |
+
2025-09-26 15:36:00,533 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0297 | Val rms_score: 0.1203
|
| 78 |
+
2025-09-26 15:36:05,779 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0252 | Val rms_score: 0.1203
|
| 79 |
+
2025-09-26 15:36:11,472 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0285 | Val rms_score: 0.1204
|
| 80 |
+
2025-09-26 15:36:17,110 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0288 | Val rms_score: 0.1197
|
| 81 |
+
2025-09-26 15:36:22,952 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0318 | Val rms_score: 0.1208
|
| 82 |
+
2025-09-26 15:36:28,222 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0300 | Val rms_score: 0.1207
|
| 83 |
+
2025-09-26 15:36:33,916 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0280 | Val rms_score: 0.1191
|
| 84 |
+
2025-09-26 15:36:39,623 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0304 | Val rms_score: 0.1188
|
| 85 |
+
2025-09-26 15:36:45,157 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0275 | Val rms_score: 0.1215
|
| 86 |
+
2025-09-26 15:36:50,849 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0280 | Val rms_score: 0.1200
|
| 87 |
+
2025-09-26 15:36:56,073 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0309 | Val rms_score: 0.1199
|
| 88 |
+
2025-09-26 15:37:01,655 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0278 | Val rms_score: 0.1214
|
| 89 |
+
2025-09-26 15:37:07,030 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0271 | Val rms_score: 0.1205
|
| 90 |
+
2025-09-26 15:37:12,342 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0280 | Val rms_score: 0.1193
|
| 91 |
+
2025-09-26 15:37:17,984 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0260 | Val rms_score: 0.1221
|
| 92 |
+
2025-09-26 15:37:22,839 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0268 | Val rms_score: 0.1206
|
| 93 |
+
2025-09-26 15:37:28,020 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0238 | Val rms_score: 0.1203
|
| 94 |
+
2025-09-26 15:37:33,226 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0245 | Val rms_score: 0.1205
|
| 95 |
+
2025-09-26 15:37:38,413 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0243 | Val rms_score: 0.1215
|
| 96 |
+
2025-09-26 15:37:43,967 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0240 | Val rms_score: 0.1206
|
| 97 |
+
2025-09-26 15:37:49,048 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0243 | Val rms_score: 0.1201
|
| 98 |
+
2025-09-26 15:37:55,504 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0270 | Val rms_score: 0.1205
|
| 99 |
+
2025-09-26 15:38:00,421 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0245 | Val rms_score: 0.1205
|
| 100 |
+
2025-09-26 15:38:05,339 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0252 | Val rms_score: 0.1216
|
| 101 |
+
2025-09-26 15:38:11,005 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0266 | Val rms_score: 0.1208
|
| 102 |
+
2025-09-26 15:38:16,224 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0281 | Val rms_score: 0.1212
|
| 103 |
+
2025-09-26 15:38:21,783 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0238 | Val rms_score: 0.1214
|
| 104 |
+
2025-09-26 15:38:27,398 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0267 | Val rms_score: 0.1209
|
| 105 |
+
2025-09-26 15:38:33,399 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0229 | Val rms_score: 0.1195
|
| 106 |
+
2025-09-26 15:38:39,327 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0227 | Val rms_score: 0.1188
|
| 107 |
+
2025-09-26 15:38:44,581 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0189 | Val rms_score: 0.1194
|
| 108 |
+
2025-09-26 15:38:50,259 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0245 | Val rms_score: 0.1198
|
| 109 |
+
2025-09-26 15:38:55,904 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0233 | Val rms_score: 0.1210
|
| 110 |
+
2025-09-26 15:38:56,466 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Test rms_score: 0.1278
|
| 111 |
+
2025-09-26 15:38:56,939 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset astrazeneca_ppb at 2025-09-26_15-38-56
|
| 112 |
+
2025-09-26 15:39:01,502 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.0056 | Val rms_score: 0.1277
|
| 113 |
+
2025-09-26 15:39:01,503 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 45
|
| 114 |
+
2025-09-26 15:39:02,186 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.1277
|
| 115 |
+
2025-09-26 15:39:09,454 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.6278 | Val rms_score: 0.1214
|
| 116 |
+
2025-09-26 15:39:09,636 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 90
|
| 117 |
+
2025-09-26 15:39:10,243 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.1214
|
| 118 |
+
2025-09-26 15:39:16,597 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.5393 | Val rms_score: 0.1228
|
| 119 |
+
2025-09-26 15:39:24,039 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4000 | Val rms_score: 0.1162
|
| 120 |
+
2025-09-26 15:39:24,247 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 180
2025-09-26 15:39:25,162 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.1162
2025-09-26 15:39:30,793 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2325 | Val rms_score: 0.1210
2025-09-26 15:39:36,194 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2250 | Val rms_score: 0.1200
2025-09-26 15:39:42,558 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1938 | Val rms_score: 0.1225
2025-09-26 15:39:48,165 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1611 | Val rms_score: 0.1221
2025-09-26 15:39:53,503 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1328 | Val rms_score: 0.1197
2025-09-26 15:39:58,428 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1167 | Val rms_score: 0.1181
2025-09-26 15:40:03,528 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1125 | Val rms_score: 0.1202
2025-09-26 15:40:09,231 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1031 | Val rms_score: 0.1205
2025-09-26 15:40:14,485 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0965 | Val rms_score: 0.1201
2025-09-26 15:40:19,321 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1073 | Val rms_score: 0.1191
2025-09-26 15:40:24,413 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0896 | Val rms_score: 0.1194
2025-09-26 15:40:29,600 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0941 | Val rms_score: 0.1181
2025-09-26 15:40:35,024 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0753 | Val rms_score: 0.1183
2025-09-26 15:40:40,010 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0816 | Val rms_score: 0.1178
2025-09-26 15:40:44,750 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0649 | Val rms_score: 0.1150
2025-09-26 15:40:44,908 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 855
2025-09-26 15:40:45,517 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 19 with val rms_score: 0.1150
2025-09-26 15:40:50,636 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0677 | Val rms_score: 0.1180
2025-09-26 15:40:55,645 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0646 | Val rms_score: 0.1205
2025-09-26 15:41:01,512 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0649 | Val rms_score: 0.1178
2025-09-26 15:41:07,422 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0661 | Val rms_score: 0.1190
2025-09-26 15:41:13,045 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0615 | Val rms_score: 0.1167
2025-09-26 15:41:17,742 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0497 | Val rms_score: 0.1163
2025-09-26 15:41:22,675 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0528 | Val rms_score: 0.1178
2025-09-26 15:41:28,091 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0523 | Val rms_score: 0.1179
2025-09-26 15:41:32,788 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0510 | Val rms_score: 0.1190
2025-09-26 15:41:37,511 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0504 | Val rms_score: 0.1174
2025-09-26 15:41:42,737 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0493 | Val rms_score: 0.1192
2025-09-26 15:41:47,526 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0497 | Val rms_score: 0.1170
2025-09-26 15:41:53,168 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0471 | Val rms_score: 0.1167
2025-09-26 15:41:57,944 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0462 | Val rms_score: 0.1183
2025-09-26 15:42:02,419 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0437 | Val rms_score: 0.1168
2025-09-26 15:42:07,072 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0396 | Val rms_score: 0.1168
2025-09-26 15:42:11,719 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0391 | Val rms_score: 0.1173
2025-09-26 15:42:17,360 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0399 | Val rms_score: 0.1171
2025-09-26 15:42:22,468 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0344 | Val rms_score: 0.1184
2025-09-26 15:42:27,214 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0417 | Val rms_score: 0.1176
2025-09-26 15:42:31,955 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0378 | Val rms_score: 0.1193
2025-09-26 15:42:36,692 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0411 | Val rms_score: 0.1186
2025-09-26 15:42:41,709 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0403 | Val rms_score: 0.1194
2025-09-26 15:42:46,582 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0431 | Val rms_score: 0.1168
2025-09-26 15:42:51,008 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0436 | Val rms_score: 0.1187
2025-09-26 15:42:57,043 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0392 | Val rms_score: 0.1197
2025-09-26 15:43:02,137 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0398 | Val rms_score: 0.1183
2025-09-26 15:43:07,709 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0354 | Val rms_score: 0.1178
2025-09-26 15:43:12,850 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0359 | Val rms_score: 0.1177
2025-09-26 15:43:17,420 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0359 | Val rms_score: 0.1182
2025-09-26 15:43:22,233 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0358 | Val rms_score: 0.1195
2025-09-26 15:43:27,044 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0351 | Val rms_score: 0.1185
2025-09-26 15:43:32,166 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0350 | Val rms_score: 0.1184
2025-09-26 15:43:36,783 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0340 | Val rms_score: 0.1213
2025-09-26 15:43:41,226 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0341 | Val rms_score: 0.1184
2025-09-26 15:43:45,813 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0332 | Val rms_score: 0.1191
2025-09-26 15:43:50,376 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0334 | Val rms_score: 0.1186
2025-09-26 15:43:55,462 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0319 | Val rms_score: 0.1192
2025-09-26 15:44:00,566 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0387 | Val rms_score: 0.1201
2025-09-26 15:44:04,874 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0309 | Val rms_score: 0.1183
2025-09-26 15:44:09,377 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0281 | Val rms_score: 0.1169
2025-09-26 15:44:13,010 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0326 | Val rms_score: 0.1192
2025-09-26 15:44:18,090 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0300 | Val rms_score: 0.1167
2025-09-26 15:44:22,731 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0280 | Val rms_score: 0.1180
2025-09-26 15:44:27,270 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0300 | Val rms_score: 0.1170
2025-09-26 15:44:32,199 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0288 | Val rms_score: 0.1187
2025-09-26 15:44:37,143 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0280 | Val rms_score: 0.1190
2025-09-26 15:44:42,367 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0290 | Val rms_score: 0.1187
2025-09-26 15:44:47,658 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0259 | Val rms_score: 0.1190
2025-09-26 15:44:52,628 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0420 | Val rms_score: 0.1182
2025-09-26 15:44:57,201 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0274 | Val rms_score: 0.1194
2025-09-26 15:45:01,907 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0278 | Val rms_score: 0.1182
2025-09-26 15:45:07,055 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0285 | Val rms_score: 0.1179
2025-09-26 15:45:11,792 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0285 | Val rms_score: 0.1183
2025-09-26 15:45:15,502 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0339 | Val rms_score: 0.1181
2025-09-26 15:45:20,153 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0293 | Val rms_score: 0.1170
2025-09-26 15:45:24,855 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0254 | Val rms_score: 0.1177
2025-09-26 15:45:30,314 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0273 | Val rms_score: 0.1178
2025-09-26 15:45:35,140 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0270 | Val rms_score: 0.1175
2025-09-26 15:45:39,636 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0274 | Val rms_score: 0.1165
2025-09-26 15:45:44,390 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0260 | Val rms_score: 0.1186
2025-09-26 15:45:48,113 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0227 | Val rms_score: 0.1170
2025-09-26 15:45:53,199 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0241 | Val rms_score: 0.1167
2025-09-26 15:45:58,103 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0234 | Val rms_score: 0.1174
2025-09-26 15:46:02,691 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0252 | Val rms_score: 0.1180
2025-09-26 15:46:07,857 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0225 | Val rms_score: 0.1170
2025-09-26 15:46:12,812 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0238 | Val rms_score: 0.1206
2025-09-26 15:46:17,664 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0280 | Val rms_score: 0.1171
2025-09-26 15:46:22,498 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0252 | Val rms_score: 0.1187
2025-09-26 15:46:27,880 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0215 | Val rms_score: 0.1170
2025-09-26 15:46:33,144 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0257 | Val rms_score: 0.1166
2025-09-26 15:46:38,167 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0234 | Val rms_score: 0.1187
2025-09-26 15:46:43,395 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0235 | Val rms_score: 0.1187
2025-09-26 15:46:47,169 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0229 | Val rms_score: 0.1171
2025-09-26 15:46:51,601 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0227 | Val rms_score: 0.1187
2025-09-26 15:46:56,170 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0220 | Val rms_score: 0.1189
2025-09-26 15:47:00,902 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0197 | Val rms_score: 0.1168
2025-09-26 15:47:06,158 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0224 | Val rms_score: 0.1164
2025-09-26 15:47:10,788 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0262 | Val rms_score: 0.1175
2025-09-26 15:47:15,195 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0216 | Val rms_score: 0.1169
2025-09-26 15:47:18,837 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0212 | Val rms_score: 0.1180
2025-09-26 15:47:19,538 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Test rms_score: 0.1381
2025-09-26 15:47:20,015 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset astrazeneca_ppb at 2025-09-26_15-47-20
2025-09-26 15:47:24,531 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.9667 | Val rms_score: 0.1292
2025-09-26 15:47:24,531 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 45
2025-09-26 15:47:26,240 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.1292
2025-09-26 15:47:30,857 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5833 | Val rms_score: 0.1174
2025-09-26 15:47:31,041 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 90
2025-09-26 15:47:31,592 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.1174
2025-09-26 15:47:35,973 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.5107 | Val rms_score: 0.1157
2025-09-26 15:47:36,165 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 135
2025-09-26 15:47:36,762 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.1157
2025-09-26 15:47:44,516 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3722 | Val rms_score: 0.1186
2025-09-26 15:47:49,328 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2938 | Val rms_score: 0.1164
2025-09-26 15:47:53,215 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2556 | Val rms_score: 0.1240
2025-09-26 15:47:59,000 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2729 | Val rms_score: 0.1184
2025-09-26 15:48:04,096 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2069 | Val rms_score: 0.1166
2025-09-26 15:48:08,882 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1883 | Val rms_score: 0.1124
2025-09-26 15:48:09,046 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 405
2025-09-26 15:48:09,700 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.1124
2025-09-26 15:48:14,701 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1444 | Val rms_score: 0.1129
2025-09-26 15:48:19,971 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1361 | Val rms_score: 0.1154
2025-09-26 15:48:24,195 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1086 | Val rms_score: 0.1161
2025-09-26 15:48:28,889 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1049 | Val rms_score: 0.1160
2025-09-26 15:48:33,588 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0927 | Val rms_score: 0.1147
2025-09-26 15:48:38,242 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0910 | Val rms_score: 0.1147
2025-09-26 15:48:43,362 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0863 | Val rms_score: 0.1167
2025-09-26 15:48:48,946 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0861 | Val rms_score: 0.1129
2025-09-26 15:48:52,753 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0957 | Val rms_score: 0.1153
2025-09-26 15:48:57,812 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0861 | Val rms_score: 0.1181
2025-09-26 15:49:02,993 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0910 | Val rms_score: 0.1180
2025-09-26 15:49:08,141 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0823 | Val rms_score: 0.1135
2025-09-26 15:49:13,696 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0795 | Val rms_score: 0.1153
2025-09-26 15:49:21,968 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0616 | Val rms_score: 0.1144
2025-09-26 15:49:26,590 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0639 | Val rms_score: 0.1163
2025-09-26 15:49:31,684 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0634 | Val rms_score: 0.1156
2025-09-26 15:49:36,479 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0601 | Val rms_score: 0.1134
2025-09-26 15:49:41,790 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0625 | Val rms_score: 0.1120
2025-09-26 15:49:41,950 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 1215
2025-09-26 15:49:42,569 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 27 with val rms_score: 0.1120
2025-09-26 15:49:47,958 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0535 | Val rms_score: 0.1144
2025-09-26 15:49:52,948 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0652 | Val rms_score: 0.1155
2025-09-26 15:49:56,899 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0590 | Val rms_score: 0.1129
2025-09-26 15:50:02,116 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0556 | Val rms_score: 0.1131
2025-09-26 15:50:08,227 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0547 | Val rms_score: 0.1110
2025-09-26 15:50:08,392 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 1440
2025-09-26 15:50:09,100 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 32 with val rms_score: 0.1110
2025-09-26 15:50:14,624 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0524 | Val rms_score: 0.1129
2025-09-26 15:50:20,574 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0547 | Val rms_score: 0.1128
2025-09-26 15:50:24,968 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0486 | Val rms_score: 0.1133
2025-09-26 15:50:30,257 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0449 | Val rms_score: 0.1131
2025-09-26 15:50:35,672 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0462 | Val rms_score: 0.1138
2025-09-26 15:50:40,679 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0418 | Val rms_score: 0.1131
2025-09-26 15:50:46,000 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0431 | Val rms_score: 0.1144
2025-09-26 15:50:51,142 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0443 | Val rms_score: 0.1130
2025-09-26 15:50:56,294 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0455 | Val rms_score: 0.1113
2025-09-26 15:51:00,337 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0465 | Val rms_score: 0.1121
2025-09-26 15:51:05,307 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0411 | Val rms_score: 0.1116
2025-09-26 15:51:10,477 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0380 | Val rms_score: 0.1120
2025-09-26 15:51:16,431 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0409 | Val rms_score: 0.1122
2025-09-26 15:51:21,147 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0413 | Val rms_score: 0.1134
2025-09-26 15:51:26,501 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0427 | Val rms_score: 0.1122
2025-09-26 15:51:30,289 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0384 | Val rms_score: 0.1113
2025-09-26 15:51:35,745 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0383 | Val rms_score: 0.1118
2025-09-26 15:51:41,553 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0405 | Val rms_score: 0.1126
2025-09-26 15:51:47,375 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0434 | Val rms_score: 0.1143
2025-09-26 15:51:52,767 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0402 | Val rms_score: 0.1128
2025-09-26 15:51:57,682 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0347 | Val rms_score: 0.1125
2025-09-26 15:52:01,399 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0326 | Val rms_score: 0.1145
2025-09-26 15:52:06,844 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0358 | Val rms_score: 0.1131
2025-09-26 15:52:12,186 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0332 | Val rms_score: 0.1110
2025-09-26 15:52:17,905 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0356 | Val rms_score: 0.1117
2025-09-26 15:52:22,776 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0311 | Val rms_score: 0.1122
2025-09-26 15:52:27,894 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0351 | Val rms_score: 0.1132
2025-09-26 15:52:31,961 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0344 | Val rms_score: 0.1111
2025-09-26 15:52:37,045 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0375 | Val rms_score: 0.1124
2025-09-26 15:52:42,810 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0378 | Val rms_score: 0.1132
2025-09-26 15:52:47,865 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0295 | Val rms_score: 0.1116
2025-09-26 15:52:53,010 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0314 | Val rms_score: 0.1128
2025-09-26 15:52:58,271 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0280 | Val rms_score: 0.1127
2025-09-26 15:53:02,162 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0318 | Val rms_score: 0.1123
2025-09-26 15:53:09,335 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0333 | Val rms_score: 0.1120
2025-09-26 15:53:13,755 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0302 | Val rms_score: 0.1119
2025-09-26 15:53:18,254 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0334 | Val rms_score: 0.1115
2025-09-26 15:53:23,252 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0312 | Val rms_score: 0.1120
2025-09-26 15:53:28,229 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0292 | Val rms_score: 0.1148
2025-09-26 15:53:32,189 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0277 | Val rms_score: 0.1153
2025-09-26 15:53:37,001 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0300 | Val rms_score: 0.1135
2025-09-26 15:53:41,959 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0297 | Val rms_score: 0.1118
2025-09-26 15:53:46,966 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0302 | Val rms_score: 0.1151
2025-09-26 15:53:52,069 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0330 | Val rms_score: 0.1139
2025-09-26 15:53:57,417 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0299 | Val rms_score: 0.1136
2025-09-26 15:54:02,586 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0326 | Val rms_score: 0.1135
2025-09-26 15:54:06,507 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0269 | Val rms_score: 0.1132
2025-09-26 15:54:11,848 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0260 | Val rms_score: 0.1145
2025-09-26 15:54:17,200 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0250 | Val rms_score: 0.1127
2025-09-26 15:54:22,796 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0252 | Val rms_score: 0.1117
2025-09-26 15:54:28,093 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0254 | Val rms_score: 0.1128
2025-09-26 15:54:32,580 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0250 | Val rms_score: 0.1125
2025-09-26 15:54:37,725 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0252 | Val rms_score: 0.1120
2025-09-26 15:54:43,019 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0255 | Val rms_score: 0.1120
2025-09-26 15:54:48,746 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0229 | Val rms_score: 0.1133
2025-09-26 15:54:53,820 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0236 | Val rms_score: 0.1129
2025-09-26 15:55:00,293 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0277 | Val rms_score: 0.1133
2025-09-26 15:55:04,019 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0238 | Val rms_score: 0.1114
2025-09-26 15:55:08,710 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0266 | Val rms_score: 0.1130
2025-09-26 15:55:14,510 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0238 | Val rms_score: 0.1141
2025-09-26 15:55:19,971 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0217 | Val rms_score: 0.1144
2025-09-26 15:55:25,295 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0237 | Val rms_score: 0.1126
2025-09-26 15:55:30,540 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0260 | Val rms_score: 0.1129
2025-09-26 15:55:34,318 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0246 | Val rms_score: 0.1119
2025-09-26 15:55:39,847 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0243 | Val rms_score: 0.1139
2025-09-26 15:55:44,865 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0214 | Val rms_score: 0.1128
2025-09-26 15:55:50,390 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0229 | Val rms_score: 0.1108
2025-09-26 15:55:50,564 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 4455
2025-09-26 15:55:51,276 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 99 with val rms_score: 0.1108
2025-09-26 15:55:56,341 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0280 | Val rms_score: 0.1124
2025-09-26 15:55:56,822 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Test rms_score: 0.1436
2025-09-26 15:55:57,309 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.1365, Std Dev: 0.0066
logs_modchembert_regression_ModChemBERT-MLM/modchembert_deepchem_splits_run_astrazeneca_solubility_epochs100_batch_size32_20250926_155557.log
ADDED
@@ -0,0 +1,357 @@
2025-09-26 15:55:57,310 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Running benchmark for dataset: astrazeneca_solubility
2025-09-26 15:55:57,311 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - dataset: astrazeneca_solubility, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
2025-09-26 15:55:57,315 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset astrazeneca_solubility at 2025-09-26_15-55-57
2025-09-26 15:56:01,647 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.1944 | Val rms_score: 1.0160
2025-09-26 15:56:01,647 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 45
2025-09-26 15:56:02,963 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 1.0160
2025-09-26 15:56:06,192 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.8500 | Val rms_score: 0.9805
2025-09-26 15:56:06,376 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 90
2025-09-26 15:56:06,957 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.9805
2025-09-26 15:56:16,387 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.7214 | Val rms_score: 0.9292
2025-09-26 15:56:16,590 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 135
2025-09-26 15:56:17,343 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.9292
2025-09-26 15:56:23,436 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.5778 | Val rms_score: 0.9151
2025-09-26 15:56:23,638 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 180
2025-09-26 15:56:24,501 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.9151
2025-09-26 15:56:29,313 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4850 | Val rms_score: 0.9253
2025-09-26 15:56:34,408 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3667 | Val rms_score: 0.8717
2025-09-26 15:56:35,061 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 270
2025-09-26 15:56:35,754 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.8717
2025-09-26 15:56:38,713 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3688 | Val rms_score: 0.8913
2025-09-26 15:56:43,243 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2500 | Val rms_score: 0.9065
2025-09-26 15:56:47,948 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2484 | Val rms_score: 0.8784
2025-09-26 15:56:54,431 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2194 | Val rms_score: 0.9198
2025-09-26 15:57:00,853 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1819 | Val rms_score: 0.8991
2025-09-26 15:57:06,497 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1828 | Val rms_score: 0.8894
2025-09-26 15:57:10,316 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1458 | Val rms_score: 0.9294
2025-09-26 15:57:15,712 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1521 | Val rms_score: 0.8901
2025-09-26 15:57:21,220 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1340 | Val rms_score: 0.9136
2025-09-26 15:57:26,587 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1250 | Val rms_score: 0.9046
2025-09-26 15:57:32,194 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1236 | Val rms_score: 0.9170
2025-09-26 15:57:37,314 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0910 | Val rms_score: 0.9014
2025-09-26 15:57:41,485 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0993 | Val rms_score: 0.9066
2025-09-26 15:57:46,360 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1465 | Val rms_score: 0.9188
2025-09-26 15:57:51,262 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.1653 | Val rms_score: 0.8938
2025-09-26 15:57:56,881 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0951 | Val rms_score: 0.9036
2025-09-26 15:58:02,589 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0879 | Val rms_score: 0.8995
2025-09-26 15:58:07,566 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0889 | Val rms_score: 0.9115
2025-09-26 15:58:10,677 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0887 | Val rms_score: 0.9004
2025-09-26 15:58:15,302 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0816 | Val rms_score: 0.9063
2025-09-26 15:58:20,815 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0854 | Val rms_score: 0.9016
2025-09-26 15:58:26,032 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0726 | Val rms_score: 0.9027
2025-09-26 15:58:31,190 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0988 | Val rms_score: 0.9252
2025-09-26 15:58:36,547 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0896 | Val rms_score: 0.9077
2025-09-26 15:58:40,750 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0882 | Val rms_score: 0.9208
2025-09-26 15:58:46,227 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0738 | Val rms_score: 0.8937
2025-09-26 15:58:51,556 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0649 | Val rms_score: 0.8929
2025-09-26 15:58:56,750 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0615 | Val rms_score: 0.8878
2025-09-26 15:59:02,206 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0615 | Val rms_score: 0.8913
2025-09-26 15:59:07,330 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0586 | Val rms_score: 0.9093
2025-09-26 15:59:11,390 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0604 | Val rms_score: 0.8941
2025-09-26 15:59:16,095 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0492 | Val rms_score: 0.8980
2025-09-26 15:59:20,799 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0576 | Val rms_score: 0.9079
2025-09-26 15:59:25,703 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0545 | Val rms_score: 0.8968
2025-09-26 15:59:30,604 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0580 | Val rms_score: 0.9063
2025-09-26 15:59:35,653 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0611 | Val rms_score: 0.8962
2025-09-26 15:59:40,644 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0616 | Val rms_score: 0.8837
2025-09-26 15:59:43,794 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0684 | Val rms_score: 0.8916
2025-09-26 15:59:50,032 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0537 | Val rms_score: 0.8899
2025-09-26 15:59:54,896 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0490 | Val rms_score: 0.8961
2025-09-26 16:00:00,129 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0534 | Val rms_score: 0.8889
2025-09-26 16:00:05,027 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0490 | Val rms_score: 0.9030
2025-09-26 16:00:10,017 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0437 | Val rms_score: 0.8952
2025-09-26 16:00:13,474 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0455 | Val rms_score: 0.8940
2025-09-26 16:00:18,220 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0483 | Val rms_score: 0.8967
2025-09-26 16:00:23,485 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0453 | Val rms_score: 0.8911
2025-09-26 16:00:28,303 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0542 | Val rms_score: 0.8978
2025-09-26 16:00:32,809 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0526 | Val rms_score: 0.8838
2025-09-26 16:00:37,571 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0576 | Val rms_score: 0.8887
2025-09-26 16:00:42,310 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0402 | Val rms_score: 0.8874
2025-09-26 16:00:46,172 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0455 | Val rms_score: 0.8818
2025-09-26 16:00:50,870 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0365 | Val rms_score: 0.8884
2025-09-26 16:00:55,357 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0424 | Val rms_score: 0.8907
2025-09-26 16:01:00,266 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0451 | Val rms_score: 0.8855
2025-09-26 16:01:04,970 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0411 | Val rms_score: 0.8769
2025-09-26 16:01:10,111 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0437 | Val rms_score: 0.8819
2025-09-26 16:01:14,782 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0422 | Val rms_score: 0.8945
2025-09-26 16:01:17,988 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0403 | Val rms_score: 0.8906
2025-09-26 16:01:22,690 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0356 | Val rms_score: 0.8920
2025-09-26 16:01:27,445 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0418 | Val rms_score: 0.8927
2025-09-26 16:01:33,681 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0414 | Val rms_score: 0.8810
2025-09-26 16:01:38,519 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0425 | Val rms_score: 0.8839
2025-09-26 16:01:43,401 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0354 | Val rms_score: 0.8752
2025-09-26 16:01:46,928 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0375 | Val rms_score: 0.8846
2025-09-26 16:01:51,961 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0394 | Val rms_score: 0.8831
2025-09-26 16:01:57,797 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0363 | Val rms_score: 0.8840
2025-09-26 16:02:02,815 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0358 | Val rms_score: 0.8828
2025-09-26 16:02:07,997 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0393 | Val rms_score: 0.8881
2025-09-26 16:02:13,083 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0389 | Val rms_score: 0.8873
2025-09-26 16:02:16,421 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0365 | Val rms_score: 0.8889
2025-09-26 16:02:21,562 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0337 | Val rms_score: 0.8865
2025-09-26 16:02:26,433 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0357 | Val rms_score: 0.8806
2025-09-26 16:02:31,188 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0345 | Val rms_score: 0.8890
2025-09-26 16:02:36,159 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0366 | Val rms_score: 0.8858
2025-09-26 16:02:40,923 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0363 | Val rms_score: 0.8894
2025-09-26 16:02:46,420 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0332 | Val rms_score: 0.8832
2025-09-26 16:02:49,764 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0362 | Val rms_score: 0.8875
2025-09-26 16:02:54,290 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0342 | Val rms_score: 0.8911
2025-09-26 16:02:58,997 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0338 | Val rms_score: 0.8837
2025-09-26 16:03:03,646 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0354 | Val rms_score: 0.8899
2025-09-26 16:03:09,060 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0271 | Val rms_score: 0.8906
2025-09-26 16:03:13,812 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0368 | Val rms_score: 0.8819
2025-09-26 16:03:18,174 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0469 | Val rms_score: 0.8811
2025-09-26 16:03:23,063 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0392 | Val rms_score: 0.8840
2025-09-26 16:03:27,801 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0337 | Val rms_score: 0.8902
2025-09-26 16:03:33,441 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0365 | Val rms_score: 0.8887
2025-09-26 16:03:38,773 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0318 | Val rms_score: 0.8846
2025-09-26 16:03:43,483 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0357 | Val rms_score: 0.8828
2025-09-26 16:03:48,700 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0325 | Val rms_score: 0.8987
2025-09-26 16:03:52,224 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0332 | Val rms_score: 0.8845
2025-09-26 16:03:57,860 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0309 | Val rms_score: 0.8926
2025-09-26 16:04:02,864 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0314 | Val rms_score: 0.8879
2025-09-26 16:04:07,594 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0307 | Val rms_score: 0.8876
2025-09-26 16:04:12,780 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0306 | Val rms_score: 0.8871
2025-09-26 16:04:13,352 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.9823
2025-09-26 16:04:13,875 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset astrazeneca_solubility at 2025-09-26_16-04-13
2025-09-26 16:04:18,159 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.1778 | Val rms_score: 1.0098
2025-09-26 16:04:18,159 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 45
2025-09-26 16:04:19,189 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 1.0098
2025-09-26 16:04:23,919 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.7833 | Val rms_score: 0.9563
2025-09-26 16:04:24,112 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 90
2025-09-26 16:04:24,701 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.9563
2025-09-26 16:04:29,419 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.6500 | Val rms_score: 0.9412
2025-09-26 16:04:29,608 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 135
2025-09-26 16:04:30,237 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.9412
2025-09-26 16:04:38,611 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.5167 | Val rms_score: 0.9040
2025-09-26 16:04:38,853 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 180
2025-09-26 16:04:40,161 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.9040
2025-09-26 16:04:45,677 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4100 | Val rms_score: 0.9414
2025-09-26 16:04:50,778 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3167 | Val rms_score: 0.9624
2025-09-26 16:04:54,583 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3354 | Val rms_score: 0.9378
2025-09-26 16:04:59,299 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2833 | Val rms_score: 0.9417
2025-09-26 16:05:04,246 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1812 | Val rms_score: 0.9693
2025-09-26 16:05:09,078 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1972 | Val rms_score: 0.9365
2025-09-26 16:05:13,769 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1861 | Val rms_score: 0.9325
2025-09-26 16:05:18,919 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1500 | Val rms_score: 0.9375
2025-09-26 16:05:22,381 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1479 | Val rms_score: 0.9251
2025-09-26 16:05:27,098 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1250 | Val rms_score: 0.9322
2025-09-26 16:05:32,058 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1174 | Val rms_score: 0.9174
2025-09-26 16:05:39,090 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1219 | Val rms_score: 0.9314
2025-09-26 16:05:44,587 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1049 | Val rms_score: 0.9327
2025-09-26 16:05:49,484 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1031 | Val rms_score: 0.9729
2025-09-26 16:05:53,272 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1056 | Val rms_score: 0.9292
2025-09-26 16:05:59,428 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0938 | Val rms_score: 0.9315
2025-09-26 16:06:05,032 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0910 | Val rms_score: 0.9166
2025-09-26 16:06:10,686 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0903 | Val rms_score: 0.9142
2025-09-26 16:06:16,799 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0915 | Val rms_score: 0.9803
2025-09-26 16:06:22,331 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.1069 | Val rms_score: 0.9183
2025-09-26 16:06:26,652 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0881 | Val rms_score: 0.9105
2025-09-26 16:06:31,877 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0771 | Val rms_score: 0.9251
2025-09-26 16:06:37,980 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0760 | Val rms_score: 0.9319
2025-09-26 16:06:43,446 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0698 | Val rms_score: 0.9171
2025-09-26 16:06:48,938 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0758 | Val rms_score: 0.9080
2025-09-26 16:06:54,200 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0663 | Val rms_score: 0.9054
2025-09-26 16:06:58,006 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0604 | Val rms_score: 0.9197
2025-09-26 16:07:04,175 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0656 | Val rms_score: 0.9227
2025-09-26 16:07:09,513 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0615 | Val rms_score: 0.9164
2025-09-26 16:07:15,369 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0651 | Val rms_score: 0.9143
2025-09-26 16:07:20,630 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0642 | Val rms_score: 0.9168
2025-09-26 16:07:24,372 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0676 | Val rms_score: 0.9066
2025-09-26 16:07:30,228 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0580 | Val rms_score: 0.9051
2025-09-26 16:07:36,053 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0504 | Val rms_score: 0.9126
2025-09-26 16:07:41,548 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0583 | Val rms_score: 0.9160
2025-09-26 16:07:46,716 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0566 | Val rms_score: 0.9151
2025-09-26 16:07:51,864 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0566 | Val rms_score: 0.9146
2025-09-26 16:07:55,814 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0792 | Val rms_score: 0.9093
2025-09-26 16:08:00,875 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0554 | Val rms_score: 0.9105
2025-09-26 16:08:05,913 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0514 | Val rms_score: 0.9201
2025-09-26 16:08:12,221 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0522 | Val rms_score: 0.9162
2025-09-26 16:08:17,262 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0538 | Val rms_score: 0.9100
2025-09-26 16:08:23,255 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0474 | Val rms_score: 0.9110
2025-09-26 16:08:26,512 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0486 | Val rms_score: 0.9170
2025-09-26 16:08:31,322 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0523 | Val rms_score: 0.9048
2025-09-26 16:08:35,862 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0510 | Val rms_score: 0.9078
2025-09-26 16:08:40,624 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0469 | Val rms_score: 0.9136
2025-09-26 16:08:46,282 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0467 | Val rms_score: 0.9117
2025-09-26 16:08:51,199 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0413 | Val rms_score: 0.9185
2025-09-26 16:08:55,463 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0414 | Val rms_score: 0.9215
2025-09-26 16:08:58,727 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0436 | Val rms_score: 0.9120
2025-09-26 16:09:03,684 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0459 | Val rms_score: 0.9180
2025-09-26 16:09:08,810 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0410 | Val rms_score: 0.9167
2025-09-26 16:09:13,552 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0334 | Val rms_score: 0.9071
2025-09-26 16:09:18,280 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0427 | Val rms_score: 0.9107
2025-09-26 16:09:22,912 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0399 | Val rms_score: 0.9119
2025-09-26 16:09:27,767 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0403 | Val rms_score: 0.9082
2025-09-26 16:09:31,549 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0389 | Val rms_score: 0.9073
2025-09-26 16:09:36,265 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0395 | Val rms_score: 0.9114
2025-09-26 16:09:40,908 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0389 | Val rms_score: 0.9105
2025-09-26 16:09:45,994 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0372 | Val rms_score: 0.9106
2025-09-26 16:09:51,266 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0372 | Val rms_score: 0.9120
2025-09-26 16:09:57,895 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0380 | Val rms_score: 0.9153
2025-09-26 16:10:01,317 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0373 | Val rms_score: 0.9174
2025-09-26 16:10:05,999 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0342 | Val rms_score: 0.9092
2025-09-26 16:10:10,512 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0344 | Val rms_score: 0.9155
2025-09-26 16:10:15,149 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0366 | Val rms_score: 0.9068
2025-09-26 16:10:20,631 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0375 | Val rms_score: 0.9081
2025-09-26 16:10:25,581 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0372 | Val rms_score: 0.9050
2025-09-26 16:10:29,169 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0339 | Val rms_score: 0.9088
2025-09-26 16:10:34,109 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0351 | Val rms_score: 0.9083
2025-09-26 16:10:39,632 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0393 | Val rms_score: 0.9039
2025-09-26 16:10:40,305 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 3420
2025-09-26 16:10:41,318 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 76 with val rms_score: 0.9039
2025-09-26 16:10:46,767 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0326 | Val rms_score: 0.9043
2025-09-26 16:10:52,171 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0437 | Val rms_score: 0.9055
2025-09-26 16:10:57,384 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0408 | Val rms_score: 0.9068
2025-09-26 16:11:00,638 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0347 | Val rms_score: 0.9004
2025-09-26 16:11:00,809 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 3600
2025-09-26 16:11:01,425 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 80 with val rms_score: 0.9004
2025-09-26 16:11:07,038 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0387 | Val rms_score: 0.9253
2025-09-26 16:11:13,053 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0750 | Val rms_score: 0.9055
2025-09-26 16:11:18,315 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0500 | Val rms_score: 0.9026
2025-09-26 16:11:23,429 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0392 | Val rms_score: 0.9060
2025-09-26 16:11:28,092 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0387 | Val rms_score: 0.9063
2025-09-26 16:11:31,642 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0384 | Val rms_score: 0.8996
2025-09-26 16:11:32,323 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 3870
2025-09-26 16:11:32,954 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 86 with val rms_score: 0.8996
2025-09-26 16:11:38,667 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0396 | Val rms_score: 0.9027
2025-09-26 16:11:43,786 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0368 | Val rms_score: 0.9070
2025-09-26 16:11:49,936 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0430 | Val rms_score: 0.8978
2025-09-26 16:11:50,137 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 4005
2025-09-26 16:11:50,804 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 89 with val rms_score: 0.8978
2025-09-26 16:11:56,288 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0358 | Val rms_score: 0.8986
2025-09-26 16:12:01,645 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0337 | Val rms_score: 0.9008
2025-09-26 16:12:06,312 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0332 | Val rms_score: 0.8994
2025-09-26 16:12:11,875 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0345 | Val rms_score: 0.9037
2025-09-26 16:12:17,396 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0322 | Val rms_score: 0.9022
2025-09-26 16:12:22,815 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0319 | Val rms_score: 0.9018
2025-09-26 16:12:28,190 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0293 | Val rms_score: 0.9007
2025-09-26 16:12:32,600 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0312 | Val rms_score: 0.9073
2025-09-26 16:12:37,858 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0270 | Val rms_score: 0.9065
2025-09-26 16:12:43,154 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0302 | Val rms_score: 0.8969
2025-09-26 16:12:43,313 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 4455
2025-09-26 16:12:43,920 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 99 with val rms_score: 0.8969
2025-09-26 16:12:49,107 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0302 | Val rms_score: 0.9071
2025-09-26 16:12:49,572 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.9365
2025-09-26 16:12:50,048 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset astrazeneca_solubility at 2025-09-26_16-12-50
2025-09-26 16:12:54,169 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.2889 | Val rms_score: 0.9695
2025-09-26 16:12:54,169 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 45
2025-09-26 16:12:54,888 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.9695
2025-09-26 16:13:01,215 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.8056 | Val rms_score: 0.9519
2025-09-26 16:13:01,417 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 90
2025-09-26 16:13:02,079 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.9519
2025-09-26 16:13:05,959 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.7179 | Val rms_score: 0.9161
2025-09-26 16:13:06,117 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 135
2025-09-26 16:13:06,709 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.9161
2025-09-26 16:13:14,434 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.5667 | Val rms_score: 0.9020
2025-09-26 16:13:14,636 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 180
2025-09-26 16:13:15,248 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.9020
2025-09-26 16:13:20,292 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4275 | Val rms_score: 0.9125
2025-09-26 16:13:26,978 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3500 | Val rms_score: 0.9189
2025-09-26 16:13:32,787 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3083 | Val rms_score: 0.9097
2025-09-26 16:13:37,631 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2736 | Val rms_score: 0.9210
2025-09-26 16:13:42,816 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1922 | Val rms_score: 0.9223
2025-09-26 16:13:47,572 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2111 | Val rms_score: 0.9403
2025-09-26 16:13:52,694 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.2111 | Val rms_score: 0.9269
2025-09-26 16:13:57,930 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1734 | Val rms_score: 0.9132
2025-09-26 16:14:02,852 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1354 | Val rms_score: 0.9119
2025-09-26 16:14:06,513 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1396 | Val rms_score: 0.9143
2025-09-26 16:14:11,711 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1264 | Val rms_score: 0.9072
2025-09-26 16:14:17,080 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1172 | Val rms_score: 0.9104
2025-09-26 16:14:22,813 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1375 | Val rms_score: 0.9192
2025-09-26 16:14:28,486 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1391 | Val rms_score: 0.9159
2025-09-26 16:14:33,494 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1389 | Val rms_score: 0.9020
2025-09-26 16:14:36,988 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1153 | Val rms_score: 0.8902
2025-09-26 16:14:37,154 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 900
2025-09-26 16:14:38,102 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 20 with val rms_score: 0.8902
2025-09-26 16:14:43,445 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.1042 | Val rms_score: 0.9048
2025-09-26 16:14:49,646 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0917 | Val rms_score: 0.9144
2025-09-26 16:14:55,790 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0835 | Val rms_score: 0.8995
2025-09-26 16:15:01,507 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.1083 | Val rms_score: 0.9151
2025-09-26 16:15:06,789 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.1094 | Val rms_score: 0.8801
2025-09-26 16:15:06,951 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1125
2025-09-26 16:15:06,104 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 25 with val rms_score: 0.8801
2025-09-26 16:15:11,102 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0875 | Val rms_score: 0.8891
2025-09-26 16:15:16,438 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0659 | Val rms_score: 0.8964
2025-09-26 16:15:21,475 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0726 | Val rms_score: 0.8998
2025-09-26 16:15:26,605 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.1289 | Val rms_score: 0.9024
2025-09-26 16:15:31,444 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.1139 | Val rms_score: 0.8807
2025-09-26 16:15:36,626 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0903 | Val rms_score: 0.9071
2025-09-26 16:15:41,193 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0781 | Val rms_score: 0.8787
2025-09-26 16:15:41,353 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1440
2025-09-26 16:15:41,935 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 32 with val rms_score: 0.8787
2025-09-26 16:15:46,819 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0813 | Val rms_score: 0.9032
2025-09-26 16:15:51,704 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0719 | Val rms_score: 0.8915
2025-09-26 16:15:56,640 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0615 | Val rms_score: 0.8944
2025-09-26 16:16:01,377 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0609 | Val rms_score: 0.8918
2025-09-26 16:16:06,652 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0576 | Val rms_score: 0.8875
2025-09-26 16:16:09,911 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0602 | Val rms_score: 0.8808
2025-09-26 16:16:14,661 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0563 | Val rms_score: 0.8958
2025-09-26 16:16:19,409 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0524 | Val rms_score: 0.8858
2025-09-26 16:16:24,111 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0563 | Val rms_score: 0.8938
2025-09-26 16:16:29,378 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0556 | Val rms_score: 0.8911
2025-09-26 16:16:34,160 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0551 | Val rms_score: 0.8959
2025-09-26 16:16:38,877 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0517 | Val rms_score: 0.8966
2025-09-26 16:16:43,617 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0525 | Val rms_score: 0.8964
2025-09-26 16:16:48,125 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0476 | Val rms_score: 0.8926
2025-09-26 16:16:53,350 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0510 | Val rms_score: 0.8927
2025-09-26 16:16:58,131 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0483 | Val rms_score: 0.8945
2025-09-26 16:17:02,585 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0402 | Val rms_score: 0.8845
2025-09-26 16:17:07,629 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0472 | Val rms_score: 0.8852
2025-09-26 16:17:11,551 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0458 | Val rms_score: 0.8990
2025-09-26 16:17:17,721 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0430 | Val rms_score: 0.8971
2025-09-26 16:17:23,363 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0500 | Val rms_score: 0.8857
2025-09-26 16:17:28,894 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0482 | Val rms_score: 0.8898
2025-09-26 16:17:34,123 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0479 | Val rms_score: 0.8865
2025-09-26 16:17:39,640 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0424 | Val rms_score: 0.8872
2025-09-26 16:17:44,117 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0455 | Val rms_score: 0.8863
2025-09-26 16:17:49,142 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0342 | Val rms_score: 0.8767
2025-09-26 16:17:49,306 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 2610
2025-09-26 16:17:49,909 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 58 with val rms_score: 0.8767
2025-09-26 16:17:55,375 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0413 | Val rms_score: 0.8845
2025-09-26 16:18:01,054 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0444 | Val rms_score: 0.8812
2025-09-26 16:18:06,730 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0384 | Val rms_score: 0.8804
2025-09-26 16:18:11,408 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0425 | Val rms_score: 0.8782
2025-09-26 16:18:17,010 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0426 | Val rms_score: 0.8887
2025-09-26 16:18:22,607 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0420 | Val rms_score: 0.8912
2025-09-26 16:18:28,202 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0364 | Val rms_score: 0.8821
2025-09-26 16:18:33,900 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0415 | Val rms_score: 0.8810
2025-09-26 16:18:41,070 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0411 | Val rms_score: 0.8855
2025-09-26 16:18:45,066 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0389 | Val rms_score: 0.8875
2025-09-26 16:18:50,735 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0393 | Val rms_score: 0.8890
2025-09-26 16:18:56,093 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0352 | Val rms_score: 0.8897
2025-09-26 16:19:01,426 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0394 | Val rms_score: 0.8892
2025-09-26 16:19:06,891 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0408 | Val rms_score: 0.8886
2025-09-26 16:19:12,090 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0392 | Val rms_score: 0.8860
2025-09-26 16:19:16,794 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0427 | Val rms_score: 0.8857
2025-09-26 16:19:24,876 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0372 | Val rms_score: 0.8883
2025-09-26 16:19:31,517 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0359 | Val rms_score: 0.8898
2025-09-26 16:19:38,113 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0382 | Val rms_score: 0.8844
2025-09-26 16:19:43,841 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0375 | Val rms_score: 0.8864
2025-09-26 16:19:48,132 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0375 | Val rms_score: 0.8808
2025-09-26 16:19:56,720 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0358 | Val rms_score: 0.8922
2025-09-26 16:20:03,261 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0391 | Val rms_score: 0.9030
2025-09-26 16:20:09,535 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0455 | Val rms_score: 0.8830
2025-09-26 16:20:13,494 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0364 | Val rms_score: 0.8803
2025-09-26 16:20:18,924 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0368 | Val rms_score: 0.8861
2025-09-26 16:20:24,442 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0325 | Val rms_score: 0.8830
2025-09-26 16:20:29,989 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0344 | Val rms_score: 0.8785
2025-09-26 16:20:36,187 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0359 | Val rms_score: 0.8780
2025-09-26 16:20:41,684 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0358 | Val rms_score: 0.8764
2025-09-26 16:20:41,856 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 3960
2025-09-26 16:20:42,610 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 88 with val rms_score: 0.8764
2025-09-26 16:20:48,957 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0348 | Val rms_score: 0.8883
2025-09-26 16:20:54,024 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0342 | Val rms_score: 0.8863
2025-09-26 16:20:58,838 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0361 | Val rms_score: 0.8816
2025-09-26 16:21:04,442 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0330 | Val rms_score: 0.8755
2025-09-26 16:21:04,606 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 4140
2025-09-26 16:21:05,230 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 92 with val rms_score: 0.8755
2025-09-26 16:21:10,593 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0365 | Val rms_score: 0.8801
2025-09-26 16:21:15,349 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0372 | Val rms_score: 0.8766
2025-09-26 16:21:21,487 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0448 | Val rms_score: 0.8841
2025-09-26 16:21:27,522 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0312 | Val rms_score: 0.8780
2025-09-26 16:21:34,199 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0330 | Val rms_score: 0.8802
2025-09-26 16:21:40,066 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0262 | Val rms_score: 0.8859
2025-09-26 16:21:46,079 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0332 | Val rms_score: 0.8870
2025-09-26 16:21:50,715 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0340 | Val rms_score: 0.8829
2025-09-26 16:21:51,211 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.9654
2025-09-26 16:21:51,884 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.9614, Std Dev: 0.0189