- **Modalities:** Text
- **Formats:** Parquet
- **Languages:** English
- **Size:** 1M–10M samples
# Meta Speech Recognition English Dataset (v1)

This dataset contains only the metadata (JSON/Parquet) for English speech recognition samples.

**Audio files are NOT included.**
## Data Download Links

- **CommonVoice:** [https://commonvoice.mozilla.org/en/datasets](https://commonvoice.mozilla.org/en/datasets)
- **People's Speech:** [https://huggingface.co/datasets/MLCommons/peoples_speech](https://huggingface.co/datasets/MLCommons/peoples_speech)
- **LibriSpeech:**
  - [train-clean-100](https://www.openslr.org/resources/12/train-clean-100.tar.gz)
  - [train-clean-360](https://www.openslr.org/resources/12/train-clean-360.tar.gz)
  - [train-other-500](https://www.openslr.org/resources/12/train-other-500.tar.gz)
## Setup Instructions

### 1. Download and Organize Audio Files

After downloading, organize your audio files as follows:

- `/cv` for CommonVoice audio
- `/peoplespeech_audio` for People's Speech audio
- `/librespeech-en` for LibriSpeech audio
### 2. Convert Parquet Files to NeMo Manifests

Create a script `parquet_to_manifest.py`:

```python
from datasets import load_dataset
import json
import os

def convert_to_manifest(dataset, split, output_file):
    with open(output_file, 'w') as f:
        for item in dataset[split]:
            # Ensure paths match your mounted directories
            if item['source'] == 'commonvoice':
                item['audio_filepath'] = os.path.join('/cv', item['audio_filepath'])
            elif item['source'] == 'peoplespeech':
                item['audio_filepath'] = os.path.join('/peoplespeech_audio', item['audio_filepath'])
            elif item['source'] == 'librespeech-en':
                item['audio_filepath'] = os.path.join('/librespeech-en', item['audio_filepath'])

            manifest_entry = {
                'audio_filepath': item['audio_filepath'],
                'text': item['text'],
                'duration': item['duration']
            }
            f.write(json.dumps(manifest_entry) + '\n')

# Load the metadata from Hugging Face
dataset = load_dataset("WhissleAI/Meta_speech_recognition_EN_v1")

# Convert each split to a NeMo manifest
for split in dataset.keys():
    output_file = f"{split}_manifest.json"
    convert_to_manifest(dataset, split, output_file)
    print(f"Created manifest for {split}: {output_file}")
```
Run the conversion:

```bash
python parquet_to_manifest.py
```

This will create manifest files (`train_manifest.json`, `valid_manifest.json`, etc.) in NeMo format.
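Before moving on, it can be worth sanity-checking the generated manifests. A minimal sketch (the `validate_manifest` helper and the demo entries below are illustrative, not part of this repository):

```python
import json

def validate_manifest(path):
    """Check that every line is valid JSON with the keys NeMo expects,
    and return the total audio duration in seconds."""
    required = {'audio_filepath', 'text', 'duration'}
    total_duration = 0.0
    with open(path) as f:
        for i, line in enumerate(f, start=1):
            entry = json.loads(line)
            missing = required - entry.keys()
            if missing:
                raise ValueError(f"line {i} is missing keys: {missing}")
            total_duration += entry['duration']
    return total_duration

# Demo on a tiny hand-written manifest (hypothetical paths)
with open('demo_manifest.json', 'w') as f:
    f.write(json.dumps({'audio_filepath': '/cv/a.mp3', 'text': 'hello', 'duration': 1.5}) + '\n')
    f.write(json.dumps({'audio_filepath': '/cv/b.mp3', 'text': 'world', 'duration': 2.0}) + '\n')

print(validate_manifest('demo_manifest.json'))  # total seconds of audio
```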
### 3. Pull and Run NeMo Docker

```bash
# Pull the NeMo Docker image
docker pull nvcr.io/nvidia/nemo:24.05

# Run the container with GPU support and mounted volumes
docker run --gpus all -it --rm \
  -v /external1:/external1 \
  -v /external2:/external2 \
  -v /external3:/external3 \
  -v /cv:/cv \
  -v /peoplespeech_audio:/peoplespeech_audio \
  -v /librespeech-en:/librespeech-en \
  --shm-size=8g \
  -p 8888:8888 -p 6006:6006 \
  --ulimit memlock=-1 \
  --ulimit stack=67108864 \
  --device=/dev/snd \
  nvcr.io/nvidia/nemo:24.05
```
### 4. Fine-tuning Instructions

#### A. Create a config file (e.g., `config.yaml`):

```yaml
model:
  name: "ConformerCTC"
  pretrained_model: "nvidia/stt_en_conformer_ctc_large"  # or your preferred model

  train_ds:
    manifest_filepath: "train_manifest.json"  # manifest created in step 2
    batch_size: 32

  validation_ds:
    manifest_filepath: "valid_manifest.json"  # manifest created in step 2
    batch_size: 32

  optim:
    name: adamw
    lr: 0.001

trainer:
  devices: 1
  accelerator: "gpu"
  max_epochs: 100
```
#### B. Start Fine-tuning:

```bash
# Inside the NeMo container (single GPU; for multi-GPU, increase trainer.devices)
python examples/asr/speech_to_text_finetune.py \
    --config-path=. \
    --config-name=config.yaml
```
## Dataset Statistics

### Splits and Sample Counts

- **train**: 2,338,349 samples
- **valid**: 77,068 samples
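In round numbers, the validation split is roughly 3% of the data (plain arithmetic on the counts above):

```python
# Split sizes from the table above
train, valid = 2_338_349, 77_068
total = train + valid
valid_pct = round(100 * valid / total, 2)
print(total, valid_pct)  # 2415417 3.19
```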
## Example Samples

### train

```json
{
  "audio_filepath": "/peoplespeech_audio/train-00769-of-00804_5.flac",
  "text": "this is the same chart i just showed you a moment ago again seventy two percent is in good condition but what we found was that there were actually a decent amount of roads there so i had to surmount a sidewalk i'm sorry sidewalks that had were and very very good condition AGE_30_45 GER_MALE EMOTION_NEU INTENT_INFORM",
  "duration": 14.8,
  "source": "peoplespeech"
}
```

```json
{
  "audio_filepath": "/peoplespeech_audio/train-00561-of-00804_1596.flac",
  "text": "the board didn't rely on that theory and i guess what i was wondering is is it the board's conclusion that for some claims mr ENTITY_PERSON_NAME kit lens END alleged adverse acts were due to his military status whereas other AGE_30_45 GER_MALE EMOTION_NEU",
  "duration": 14.65,
  "source": "peoplespeech"
}
```

### valid

```json
{
  "audio_filepath": "/librespeech-en/train-other-500/177/122839/177-122839-0044.flac",
  "text": "But after all, that's better than one of us being short and fat and the other tall and lean, like Morgan Sloane and his wife, Missus, Lynde says, it always makes her think of the long and short of it when she sees them together, well, said Anne to herself that night, as she brushed her hair before her gilt framed mirror. AGE_30_45 GER_FEMALE EMOTION_HAP INTENT_REFLECT",
  "duration": 16.78,
  "source": "librespeech-en"
}
```

```json
{
  "audio_filepath": "/librespeech-en/train-clean-360/5637/41170/5637-41170-0009.flac",
  "text": "The justice of the peace is over thirty ENTITY_DURATION miles END from me, For some matter of ENTITY_PRICE two roubles END, I should have to send a lawyer who costs me ENTITY_PRICE fifteen, END, and he related how a peasant had stolen some flour from the miller. AGE_45_60 GER_MALE EMOTION_NEU INTENT_EXPLAIN",
  "duration": 12.51,
  "source": "librespeech-en"
}
```
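As the samples above show, transcripts carry inline attribute tags (`AGE_*`, `GER_*`, `EMOTION_*`, `INTENT_*`) and entity spans of the form `ENTITY_<TYPE> ... END`. A sketch of stripping them back to plain text (the regular expressions are inferred from the examples above, not from an official tag specification):

```python
import re

# Trailing attribute tags observed in the samples (inferred list, may be incomplete)
ATTR_TAG = re.compile(r'\s*\b(?:AGE|GER|EMOTION|INTENT)_[A-Z0-9_]+')
# Entity spans look like: ENTITY_PERSON_NAME some words END
ENTITY_SPAN = re.compile(r'ENTITY_[A-Z_]+\s+(.*?)\s+END')

def strip_tags(text):
    """Unwrap entity spans and drop attribute tags, keeping the spoken words."""
    text = ENTITY_SPAN.sub(r'\1', text)
    text = ATTR_TAG.sub('', text)
    return re.sub(r'\s+', ' ', text).strip()

sample = "the board relied on mr ENTITY_PERSON_NAME kit lens END testimony AGE_30_45 GER_MALE EMOTION_NEU"
print(strip_tags(sample))  # the board relied on mr kit lens testimony
```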
## Usage Notes

1. The metadata in this repository contains paths to audio files; these must match your local setup.
2. When fine-tuning, ensure your manifest files use the correct paths for your mounted directories.
3. For optimal performance:
   - Use a GPU with at least 16 GB of VRAM
   - Adjust the batch size to fit your GPU memory
   - Consider gradient accumulation for larger effective batch sizes
   - Monitor training with TensorBoard (accessible via port 6006)
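For example, gradient accumulation can be enabled in the trainer section of `config.yaml`; the `accumulate_grad_batches` option comes from PyTorch Lightning, which NeMo builds on (treat the value as a starting point):

```yaml
trainer:
  devices: 1
  accelerator: "gpu"
  max_epochs: 100
  accumulate_grad_batches: 4  # effective batch size = batch_size x 4
```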
## Common Issues and Solutions

1. **Path Mismatches**: Ensure the audio file paths in your manifests match the directories mounted into Docker.
2. **Memory Issues**: Reduce the batch size or use gradient accumulation.
3. **Docker Permissions**: Ensure proper permissions on mounted volumes and audio devices.
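For issue 1, scanning a manifest for files that do not exist on disk catches path mismatches before training starts (a minimal sketch; `find_missing` and the demo manifest are illustrative):

```python
import json
import os

def find_missing(manifest_path):
    """Return the audio_filepath values in a manifest that don't exist on disk."""
    missing = []
    with open(manifest_path) as f:
        for line in f:
            path = json.loads(line)['audio_filepath']
            if not os.path.isfile(path):
                missing.append(path)
    return missing

# Demo: the manifest file itself exists, the second path does not
with open('check_manifest.json', 'w') as f:
    f.write(json.dumps({'audio_filepath': 'check_manifest.json', 'text': 'x', 'duration': 1.0}) + '\n')
    f.write(json.dumps({'audio_filepath': '/cv/does_not_exist.mp3', 'text': 'y', 'duration': 1.0}) + '\n')

print(find_missing('check_manifest.json'))  # ['/cv/does_not_exist.mp3']
```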