Update README.md

README.md (changed)

@@ -144,11 +144,10 @@ The performance of Automatic Speech Recognition models is measured using Word Error Rate (WER).
 The following table summarizes the performance of the model with the Transducer decoder. Performance of the ASR models is reported in terms of Word Error Rate (WER%) with greedy decoding.
 
-
-
-
-
-| MCV TEST | 16.46 | 15.89 | 7.78 | 7.18 |
+|                               | **WER (%)** | **WER w/o CAP** | **WER w/o PUNCT** | **WER w/o PUNCT CAP** |
+|-------------------------------|:-----------:|:---------------:|:-----------------:|:---------------------:|
+| **FLEURS DEV (used as test)** |    17.52    |      16.20      |       12.20       |         10.73         |
+| **MCV TEST**                  |    16.46    |      15.89      |        7.78       |          7.18         |
 
 ## Limitations
 The model is non-streaming and outputs the transcript as a string without capitalization or punctuation. Since this model was trained on publicly available speech datasets, its performance may degrade for speech that includes technical terms, or vernacular that the model has not been trained on.
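The WER figures in the table above follow the standard word-level edit-distance definition, and the columns without capitalization/punctuation correspond to normalizing both transcripts before scoring. A minimal sketch in plain Python (the `wer` and `normalize` helpers here are illustrative, not part of the model's toolkit):

```python
import re

def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference
    word count, via a word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match / substitution
    return d[-1][-1] / len(ref)

def normalize(text: str) -> str:
    """Lowercase and strip punctuation, as for a 'WER w/o PUNCT CAP' score."""
    return re.sub(r"[^\w\s]", "", text.lower())

# Two deleted words against a six-word reference -> WER = 2/6, i.e. 33.33%
print(f"{100 * wer('the cat sat on the mat', 'the cat sat mat'):.2f}")
```

In practice a maintained scoring library such as `jiwer` is usually preferred over a hand-rolled implementation, since it also handles alignment details and batch evaluation.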