Modalities: Text
Formats: parquet
Languages: English
Size: 1M - 10M
Update README.md

README.md CHANGED
@@ -26,112 +26,7 @@ This dataset contains only the metadata (JSON/Parquet) for English speech recognition
- [train-clean-360](https://www.openslr.org/resources/12/train-clean-360.tar.gz)
- [train-other-500](https://www.openslr.org/resources/12/train-other-500.tar.gz)

## Setup Instructions

### 1. Download and Organize Audio Files

After downloading, organize your audio files as follows (a quick sanity check is sketched after this list):

- `/cv` for CommonVoice audio
- `/peoplespeech_audio` for People's Speech audio
- `/librespeech-en` for LibriSpeech audio
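
Before converting the metadata, it can help to confirm that the three audio roots above actually exist. The following is a minimal sanity-check sketch; the directory names come from the list above, and the file count is purely informational:

```python
import os

# Audio roots expected by the conversion script in step 2;
# adjust these if the corpora are mounted somewhere else.
AUDIO_ROOTS = ["/cv", "/peoplespeech_audio", "/librespeech-en"]

for root in AUDIO_ROOTS:
    if not os.path.isdir(root):
        raise FileNotFoundError(f"Missing audio directory: {root}")
    n_files = sum(len(files) for _, _, files in os.walk(root))
    print(f"{root}: {n_files} files")
```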

### 2. Convert Parquet Files to NeMo Manifests

Create a script `parquet_to_manifest.py`:
```python
from datasets import load_dataset
import json
import os

def convert_to_manifest(dataset, split, output_file):
    with open(output_file, 'w') as f:
        for item in dataset[split]:
            # Ensure paths match your mounted directories
            if item['source'] == 'commonvoice':
                item['audio_filepath'] = os.path.join('/cv', item['audio_filepath'])
            elif item['source'] == 'peoplespeech':
                item['audio_filepath'] = os.path.join('/peoplespeech_audio', item['audio_filepath'])
            elif item['source'] in ('librespeech', 'librespeech-en'):
                # The example record in this README uses "librespeech-en" as the source value
                item['audio_filepath'] = os.path.join('/librespeech-en', item['audio_filepath'])

            # One JSON object per line, as expected by NeMo manifests
            manifest_entry = {
                'audio_filepath': item['audio_filepath'],
                'text': item['text'],
                'duration': item['duration']
            }
            f.write(json.dumps(manifest_entry) + '\n')

# Load the dataset from Hugging Face
dataset = load_dataset("WhissleAI/Meta_speech_recognition_EN_v1")

# Convert each split to manifest
for split in dataset.keys():
    output_file = f"{split}_manifest.json"
    convert_to_manifest(dataset, split, output_file)
    print(f"Created manifest for {split}: {output_file}")
```

Run the conversion:

```bash
python parquet_to_manifest.py
```

This will create manifest files (`train_manifest.json`, `valid_manifest.json`, etc.) in NeMo format.
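
Each line of a NeMo manifest is a standalone JSON object with the `audio_filepath`, `text`, and `duration` fields written by the script above. The entry below only illustrates the format; the path, transcript, and duration are made up rather than taken from the dataset:

```json
{"audio_filepath": "/cv/clips/sample_0001.mp3", "text": "an example transcript", "duration": 3.42}
```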

### 3. Pull and Run NeMo Docker

```bash
# Pull the NeMo Docker image
docker pull nvcr.io/nvidia/nemo:24.05

# Run the container with GPU support and mounted volumes
docker run --gpus all -it --rm \
  -v /external1:/external1 \
  -v /external2:/external2 \
  -v /external3:/external3 \
  -v /cv:/cv \
  -v /peoplespeech_audio:/peoplespeech_audio \
  -v /librespeech-en:/librespeech-en \
  --shm-size=8g \
  -p 8888:8888 -p 6006:6006 \
  --ulimit memlock=-1 \
  --ulimit stack=67108864 \
  --device=/dev/snd \
  nvcr.io/nvidia/nemo:24.05
```

### 4. Fine-tuning Instructions

#### A. Create a config file (e.g., `config.yaml`):
```yaml
model:
  name: "ConformerCTC"
  pretrained_model: "nvidia/stt_en_conformer_ctc_large"  # or your preferred model

  train_ds:
    manifest_filepath: "train_manifest.json"  # Path to the manifest created in step 2
    batch_size: 32

  validation_ds:
    manifest_filepath: "valid_manifest.json"  # Path to the manifest created in step 2
    batch_size: 32

  optim:
    name: adamw
    lr: 0.001

trainer:
  devices: 1
  accelerator: "gpu"
  max_epochs: 100
```

#### B. Start Fine-tuning:

```bash
# Inside the NeMo container
python -m torch.distributed.launch --nproc_per_node=1 \
  examples/asr/speech_to_text_finetune.py \
  --config-path=. \
  --config-name=config.yaml
```
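
Note that `torch.distributed.launch` is deprecated in recent PyTorch releases. Assuming the same script and config locations as above, a `torchrun` invocation along these lines should be an equivalent single-GPU launch:

```bash
# Inside the NeMo container (alternative launcher)
torchrun --nproc_per_node=1 \
  examples/asr/speech_to_text_finetune.py \
  --config-path=. \
  --config-name=config.yaml
```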

## Dataset Statistics

@@ -174,21 +69,4 @@ python -m torch.distributed.launch --nproc_per_node=1 \
    "duration": 12.51,
    "source": "librespeech-en"
}
```

## Usage Notes

1. The metadata in this repository contains paths to audio files that must match your local setup.
2. When fine-tuning, ensure your manifest files use the correct paths for your mounted directories.
3. For optimal performance:
   - Use a GPU with at least 16GB VRAM
   - Adjust batch size based on your GPU memory
   - Consider gradient accumulation for larger effective batch sizes (a config sketch follows this list)
   - Monitor training with TensorBoard (accessible via port 6006)
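
Gradient accumulation can be enabled through the `trainer` section of `config.yaml`. The snippet below is a sketch; `accumulate_grad_batches: 4` is an illustrative value that, combined with the batch size of 32 above, gives an effective batch size of 128:

```yaml
trainer:
  devices: 1
  accelerator: "gpu"
  max_epochs: 100
  accumulate_grad_batches: 4  # illustrative value; effective batch size = 32 x 4
```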

## Common Issues and Solutions

1. **Path Mismatches**: Ensure the audio file paths in your manifests match the directories mounted into the Docker container (a quick check is sketched below).
2. **Memory Issues**: Reduce the batch size or use gradient accumulation.
3. **Docker Permissions**: Ensure proper permissions for mounted volumes and audio devices.
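
A minimal sketch for catching path mismatches before training; the manifest filename comes from step 2, and the check itself is illustrative:

```python
import json
import os

# Count manifest entries whose audio files cannot be found on disk.
manifest_path = "train_manifest.json"

missing = 0
with open(manifest_path) as f:
    for line in f:
        entry = json.loads(line)
        if not os.path.isfile(entry["audio_filepath"]):
            missing += 1
print(f"{missing} entries in {manifest_path} point to missing audio files")
```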