Update README.md
README.md CHANGED

@@ -1,15 +1,73 @@
---
base_model: readerbench/RoBERT-base
tags:
-
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: ro-sentiment-03
  results:
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You

@@ -26,6 +84,11 @@ It achieves the following results on the evaluation set:
- F1: 0.8652
- F1 Weighted: 0.8287

## Model description

More information needed

@@ -50,14 +113,14 @@ The following hyperparameters were used during training:
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-----------:|
| 0.4198 | 1.0 | 1629 | 0.3983 | 0.8377 | 0.8791 | 0.8721 | 0.8756 | 0.8380 |
| 0.3861 | 2.0 | 3258 | 0.4312 | 0.8429 | 0.8963 | 0.8665 | 0.8812 | 0.8442 |
| 0.3189 | 3.0 | 4887 | 0.3923 | 0.8307 | 0.8366 | 0.8959 | 0.8652 | 0.8287 |

---
base_model: readerbench/RoBERT-base
language:
- ro
tags:
- sentiment
- classification
- nlp
- bert
datasets:
- decathlon_reviews
- cinemagia_reviews
metrics:
- accuracy
- precision
- recall
- f1
- f1 weighted
model-index:
- name: ro-sentiment-03
  results:
  - task:
      type: text-classification # Required. Example: automatic-speech-recognition
      name: Text Classification # Optional. Example: Speech Recognition
    dataset:
      type: ro_sent # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
      name: Romanian Sentiment Dataset # Required. A pretty name for the dataset. Example: Common Voice (French)
      config: default # Optional. The name of the dataset configuration used in `load_dataset()`. Example: fr in `load_dataset("common_voice", "fr")`. See the `datasets` docs for more info: https://huggingface.co/docs/datasets/package_reference/loading_methods#datasets.load_dataset.name
      split: all # Optional. Example: test
    metrics:
    - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics
      value: 0.85 # Required. Example: 20.90
      name: Accuracy # Optional. Example: Test WER
    - type: precision # Required. Example: wer. Use metric id from https://hf.co/metrics
      value: 0.85 # Required. Example: 20.90
      name: Precision # Optional. Example: Test WER
    - type: recall # Required. Example: wer. Use metric id from https://hf.co/metrics
      value: 0.85 # Required. Example: 20.90
      name: Recall # Optional. Example: Test WER
    - type: f1_weighted # Required. Example: wer. Use metric id from https://hf.co/metrics
      value: 0.85 # Required. Example: 20.90
      name: Weighted F1 # Optional. Example: Test WER
    - type: f1_macro # Required. Example: wer. Use metric id from https://hf.co/metrics
      value: 0.84 # Required. Example: 20.90
      name: Macro F1 # Optional. Example: Test WER
  - task:
      type: text-classification # Required. Example: automatic-speech-recognition
      name: Text Classification # Optional. Example: Speech Recognition
    dataset:
      type: laroseda # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
      name: A Large Romanian Sentiment Data Set # Required. A pretty name for the dataset. Example: Common Voice (French)
      config: default # Optional. The name of the dataset configuration used in `load_dataset()`. Example: fr in `load_dataset("common_voice", "fr")`. See the `datasets` docs for more info: https://huggingface.co/docs/datasets/package_reference/loading_methods#datasets.load_dataset.name
      split: all # Optional. Example: test
    metrics:
    - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics
      value: 0.85 # Required. Example: 20.90
      name: Accuracy # Optional. Example: Test WER
    - type: precision # Required. Example: wer. Use metric id from https://hf.co/metrics
      value: 0.86 # Required. Example: 20.90
      name: Precision # Optional. Example: Test WER
    - type: recall # Required. Example: wer. Use metric id from https://hf.co/metrics
      value: 0.85 # Required. Example: 20.90
      name: Recall # Optional. Example: Test WER
    - type: f1_weighted # Required. Example: wer. Use metric id from https://hf.co/metrics
      value: 0.84 # Required. Example: 20.90
      name: Weighted F1 # Optional. Example: Test WER
    - type: f1_macro # Required. Example: wer. Use metric id from https://hf.co/metrics
      value: 0.84 # Required. Example: 20.90
      name: Macro F1 # Optional. Example: Test WER
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
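The front matter above pins the task to `text-classification` and the language to Romanian. As a quick orientation, here is a minimal inference sketch; the full Hub repo id is not shown in this diff, so the model id below is a placeholder, and the emitted label names depend on the model's `id2label` configuration:

```python
# Minimal usage sketch (not from the card): the repo id is a placeholder.
from transformers import pipeline

classifier = pipeline("text-classification", model="<namespace>/ro-sentiment-03")

print(classifier("Produsul este excelent, îl recomand!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.97}]  -- label names depend on id2label
```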
- F1: 0.8652
- F1 Weighted: 0.8287

### Evaluation on other datasets

**SENT_RO**

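The SENT_RO entry above is still a stub. A sketch of how scores like the ones in the `model-index` block could be recomputed on such a dataset with `datasets`/`evaluate`; the repo id is a placeholder, and the text/label column names vary per dataset:

```python
# Evaluation sketch (assumptions: placeholder repo id, ro_sent column names,
# and integer labels recoverable from the pipeline's "LABEL_<i>" strings).
import evaluate
from datasets import load_dataset
from transformers import pipeline

clf = pipeline("text-classification", model="<namespace>/ro-sentiment-03")

ds = load_dataset("ro_sent", split="test")   # dataset id taken from the model-index entry
texts, refs = ds["sentence"], ds["label"]    # column names assumed; check the dataset card

preds = [int(out["label"].split("_")[-1]) for out in clf(texts, truncation=True)]

metrics = {
    "accuracy": evaluate.load("accuracy").compute(predictions=preds, references=refs)["accuracy"],
    "precision": evaluate.load("precision").compute(predictions=preds, references=refs)["precision"],
    "recall": evaluate.load("recall").compute(predictions=preds, references=refs)["recall"],
    "f1": evaluate.load("f1").compute(predictions=preds, references=refs)["f1"],
    "f1_weighted": evaluate.load("f1").compute(predictions=preds, references=refs, average="weighted")["f1"],
}
print(metrics)
```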
## Model description

More information needed

- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 10 (early stopping triggered at epoch 3; best checkpoint from epoch 2)

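These settings translate roughly into the `TrainingArguments` sketch below; the learning rate, batch sizes, and the metric used to pick the best checkpoint are not shown in this excerpt, so those values are placeholders rather than the card's actual configuration:

```python
# Sketch only: maps the listed hyperparameters onto TrainingArguments.
# learning_rate, batch sizes, and metric_for_best_model are placeholders.
from transformers import TrainingArguments, Trainer, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="ro-sentiment-03",
    num_train_epochs=10,                  # num_epochs: 10
    lr_scheduler_type="linear",           # lr_scheduler_type: linear
    warmup_ratio=0.2,                     # lr_scheduler_warmup_ratio: 0.2
    adam_beta1=0.9,                       # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                    # ... and epsilon=1e-08
    evaluation_strategy="epoch",          # `eval_strategy` on newer transformers releases
    save_strategy="epoch",
    load_best_model_at_end=True,          # keeps the best checkpoint (epoch 2 in the table below)
    metric_for_best_model="f1_weighted",  # assumption; the selection metric is not stated here
)

# Stopping after one epoch without improvement would be consistent with the
# early-stop note above (best at epoch 2, training halted after epoch 3):
# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds,
#                   compute_metrics=compute_metrics,
#                   callbacks=[EarlyStoppingCallback(early_stopping_patience=1)])
```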
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-----------:|
| 0.4198 | 1.0 | 1629 | 0.3983 | 0.8377 | 0.8791 | 0.8721 | 0.8756 | 0.8380 |
| 0.3861 | **2.0** | 3258 | 0.4312 | 0.8429 | 0.8963 | 0.8665 | 0.8812 | **0.8442** |
| 0.3189 | 3.0 | 4887 | 0.3923 | 0.8307 | 0.8366 | 0.8959 | 0.8652 | 0.8287 |