Bisher committed · Commit 6934212 · verified · 1 Parent(s): 0491a1f

whisper-small-base_experiment

Files changed (2)
  1. README.md +87 -195
  2. experiment_settings.json +9 -0
README.md CHANGED
@@ -1,200 +1,92 @@
  ---
- library_name: transformers
  tags:
  - unsloth
  ---
  
- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
-
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
  ---
+ library_name: peft
+ license: apache-2.0
+ base_model: unsloth/whisper-small
  tags:
  - unsloth
+ - generated_from_trainer
+ model-index:
+ - name: whisper-small-base_experiment
+   results: []
  ---
  
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # whisper-small-base_experiment
+
+ This model is a fine-tuned version of [unsloth/whisper-small](https://huggingface.co/unsloth/whisper-small) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3049
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 16
+ - eval_batch_size: 8
+ - seed: 3407
+ - optimizer: Use OptimizerNames.ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 5
+ - num_epochs: 1
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 2.3599 | 0.0315 | 10 | 1.8954 |
+ | 1.4553 | 0.0631 | 20 | 1.1826 |
+ | 1.0813 | 0.0946 | 30 | 0.9275 |
+ | 0.8147 | 0.1262 | 40 | 0.6745 |
+ | 0.5817 | 0.1577 | 50 | 0.5102 |
+ | 0.4833 | 0.1893 | 60 | 0.4359 |
+ | 0.447 | 0.2208 | 70 | 0.4060 |
+ | 0.4082 | 0.2524 | 80 | 0.3890 |
+ | 0.3891 | 0.2839 | 90 | 0.3805 |
+ | 0.3788 | 0.3155 | 100 | 0.3694 |
+ | 0.3576 | 0.3470 | 110 | 0.3622 |
+ | 0.3554 | 0.3785 | 120 | 0.3548 |
+ | 0.3408 | 0.4101 | 130 | 0.3507 |
+ | 0.3712 | 0.4416 | 140 | 0.3452 |
+ | 0.3564 | 0.4732 | 150 | 0.3391 |
+ | 0.3458 | 0.5047 | 160 | 0.3325 |
+ | 0.3171 | 0.5363 | 170 | 0.3289 |
+ | 0.3511 | 0.5678 | 180 | 0.3274 |
+ | 0.3438 | 0.5994 | 190 | 0.3250 |
+ | 0.3829 | 0.6309 | 200 | 0.3210 |
+ | 0.3329 | 0.6625 | 210 | 0.3191 |
+ | 0.3348 | 0.6940 | 220 | 0.3154 |
+ | 0.3066 | 0.7256 | 230 | 0.3130 |
+ | 0.3563 | 0.7571 | 240 | 0.3123 |
+ | 0.3187 | 0.7886 | 250 | 0.3099 |
+ | 0.2766 | 0.8202 | 260 | 0.3076 |
+ | 0.3368 | 0.8517 | 270 | 0.3061 |
+ | 0.2867 | 0.8833 | 280 | 0.3059 |
+ | 0.2989 | 0.9148 | 290 | 0.3057 |
+ | 0.3101 | 0.9464 | 300 | 0.3052 |
+ | 0.3148 | 0.9779 | 310 | 0.3049 |
+
+
+ ### Framework versions
+
+ - PEFT 0.15.2
+ - Transformers 4.51.3
+ - Pytorch 2.6.0+cu124
+ - Datasets 3.6.0
+ - Tokenizers 0.21.1
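The new card leaves its usage sections empty, so a minimal inference sketch follows. It assumes the LoRA adapter produced by this commit is published under the `whisper-small-base_experiment` repo id given in `trainer_kwargs` (the full Hub path likely includes the owner's namespace, which is not stated here) and that the base checkpoint is `unsloth/whisper-small` as listed in the front matter; nothing below comes from the actual training script.

```python
import numpy as np
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

BASE_ID = "unsloth/whisper-small"
# Assumed repo id from trainer_kwargs; prefix with the owner namespace if needed.
ADAPTER_ID = "whisper-small-base_experiment"

# Load the base Whisper checkpoint and attach the LoRA adapter on top of it.
processor = WhisperProcessor.from_pretrained(BASE_ID)
base_model = WhisperForConditionalGeneration.from_pretrained(BASE_ID)
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)
model.eval()

# Placeholder input: one second of silence at Whisper's expected 16 kHz rate.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    generated_ids = model.generate(input_features=inputs.input_features)

print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```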
 
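The hyperparameter list in the card is the Trainer's auto-generated summary; the training script itself is not part of this commit. As a rough, hedged reconstruction only, those settings would map onto `Seq2SeqTrainingArguments` roughly as follows (the argument names, the `output_dir`, and the `adamw_8bit` optim string are assumptions based on standard `transformers` usage, not taken from the original script):

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the hyperparameters reported in the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-base_experiment",  # assumed; not stated in the card
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=3407,
    optim="adamw_8bit",          # OptimizerNames.ADAMW_8BIT (requires bitsandbytes)
    lr_scheduler_type="linear",
    warmup_steps=5,
    num_train_epochs=1,
    fp16=True,                   # "Native AMP" mixed precision (needs a CUDA device)
)
```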
experiment_settings.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "model_kwargs": {},
+   "process_dataset_kwargs": {},
+   "trainer_kwargs": {
+     "run_name": "base_experiment_run_test",
+     "hub_model_id": "whisper-small-base_experiment"
+   },
+   "dataset": "KhateebAI/Khateeb_audio_44KH_split"
+ }
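The new `experiment_settings.json` is presumably read by an external training script that is not included in this commit. A hedged sketch of how such a file could be loaded and unpacked is shown below; the `run_experiment` entry point is purely hypothetical and only illustrates how the keys might map onto keyword arguments.

```python
import json

# Read the committed settings file.
with open("experiment_settings.json") as f:
    settings = json.load(f)

dataset_name = settings["dataset"]             # "KhateebAI/Khateeb_audio_44KH_split"
model_kwargs = settings["model_kwargs"]        # extra model-construction kwargs (empty here)
process_kwargs = settings["process_dataset_kwargs"]
trainer_kwargs = settings["trainer_kwargs"]    # e.g. run_name, hub_model_id

# Hypothetical entry point; the real training script is not part of this commit.
# run_experiment(dataset=dataset_name, model_kwargs=model_kwargs,
#                process_dataset_kwargs=process_kwargs, **trainer_kwargs)
```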