Model save
README.md CHANGED
@@ -4,6 +4,8 @@ license: apache-2.0
 base_model: openai/whisper-base
 tags:
 - generated_from_trainer
+metrics:
+- wer
 model-index:
 - name: ap-jxLlj8Cg75EMClO0ZGP7gX
   results: []
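The metadata hunk above adds `wer` to the card's tracked metrics. For reference, here is a minimal sketch of how a WER figure like the one reported further down is typically computed with the Hugging Face `evaluate` library; the transcripts are placeholders, since the evaluation script itself is not part of this README change.

```python
import evaluate

# Word error rate = (substitutions + insertions + deletions) / words in the references.
wer_metric = evaluate.load("wer")

# Placeholder transcripts; the real evaluation data is not shown in this diff.
predictions = ["the quick brown fox jumped over the dog"]
references = ["the quick brown fox jumped over the lazy dog"]

print(wer_metric.compute(predictions=predictions, references=references))
# -> 0.111..., i.e. one missing word out of nine reference words
```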
@@ -16,7 +18,11 @@ should probably proofread and complete it, then remove this comment. -->
 [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/dhaval-shah-doordash/whisper-finetuning/runs/2jblqgyj)
 # ap-jxLlj8Cg75EMClO0ZGP7gX
 
-This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on
+This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.4326
+- Model Preparation Time: 0.0053
+- Wer: 0.1928
 
 ## Model description
 
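The hunk above describes the checkpoint as a fine-tune of [openai/whisper-base](https://huggingface.co/openai/whisper-base) with an evaluation WER of 0.1928. A hedged usage sketch for transcribing audio with such a checkpoint through the `transformers` ASR pipeline follows; the repository id is a placeholder, since the owning namespace is not visible in this diff.

```python
from transformers import pipeline

# Placeholder repo id: the namespace that hosts ap-jxLlj8Cg75EMClO0ZGP7gX is not shown here.
MODEL_ID = "your-namespace/ap-jxLlj8Cg75EMClO0ZGP7gX"

# Whisper fine-tunes load through the standard automatic-speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model=MODEL_ID)

# "sample.wav" is a stand-in path; any audio file ffmpeg can decode works.
result = asr("sample.wav")
print(result["text"])
```

Because the pipeline bundles the Whisper feature extractor and tokenizer, no extra preprocessing code is needed beyond supplying an audio file.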
@@ -47,6 +53,24 @@ The following hyperparameters were used during training:
 - num_epochs: 12
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Wer |
+|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:------:|
+| 0.8692 | 1.0 | 13 | 1.5918 | 0.0053 | 0.2712 |
+| 0.2971 | 2.0 | 26 | 0.5042 | 0.0053 | 0.2033 |
+| 0.1803 | 3.0 | 39 | 0.2987 | 0.0053 | 0.1344 |
+| 0.0646 | 4.0 | 52 | 0.2941 | 0.0053 | 0.1340 |
+| 0.0386 | 5.0 | 65 | 0.3093 | 0.0053 | 0.1471 |
+| 0.0102 | 6.0 | 78 | 0.3861 | 0.0053 | 0.1438 |
+| 0.0183 | 7.0 | 91 | 0.3980 | 0.0053 | 0.1513 |
+| 0.037 | 8.0 | 104 | 0.4312 | 0.0053 | 0.1538 |
+| 0.0164 | 9.0 | 117 | 0.4298 | 0.0053 | 0.1699 |
+| 0.0132 | 10.0 | 130 | 0.3946 | 0.0053 | 0.1349 |
+| 0.018 | 11.0 | 143 | 0.4802 | 0.0053 | 0.3464 |
+| 0.0317 | 11.08 | 144 | 0.4326 | 0.0053 | 0.1928 |
+
+
 ### Framework versions
 
 - Transformers 4.48.3
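Only two hyperparameters are visible in this hunk: `num_epochs: 12` and Native AMP mixed precision. Below is a minimal sketch of how those two settings map onto a `transformers` `Seq2SeqTrainingArguments` configuration; everything else in the snippet (output directory, batch size, evaluation cadence) is an illustrative assumption rather than a value recorded in the card.

```python
from transformers import Seq2SeqTrainingArguments

# Only num_epochs and Native AMP are taken from the card; the rest is assumed.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-finetune",  # assumed
    num_train_epochs=12,                   # "- num_epochs: 12"
    fp16=True,                             # reported as "Native AMP" in generated cards
    per_device_train_batch_size=8,         # assumed, not shown in this hunk
    eval_strategy="epoch",                 # assumed; the results table shows one eval per epoch
    predict_with_generate=True,            # assumed; lets eval decode text so WER can be scored
)
```

Enabling `fp16=True` with PyTorch's native autocast is typically what the auto-generated card reports as `mixed_precision_training: Native AMP`.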