---
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: transcription
    dtype: string
  splits:
  - name: train
    num_bytes: 1110063079
    num_examples: 5000
  - name: validation
    num_bytes: 82102316
    num_examples: 500
  download_size: 1138402984
  dataset_size: 1192165395
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
tags:
- masrispeech
- egyptian-arabic
- arabic
- speech
- audio
- asr
- automatic-speech-recognition
- speech-to-text
- stt
- dialectal-arabic
- egypt
- native-speakers
- spoken-arabic
- egyptian-dialect
- arabic-dialect
- audio-dataset
- language-resources
- low-resource-language
- phonetics
- speech-corpus
- voice
- transcription
- linguistic-data
- machine-learning
- natural-language-processing
- nlp
- huggingface
- open-dataset
- labeled-data
task_categories:
- automatic-speech-recognition
- audio-classification
- audio-to-audio
language:
- arz
- ar
pretty_name: MasriSpeech-ASR-Finetuning
---
# MasriSpeech-ASR-Finetuning: Egyptian Arabic Speech Fine-Tuning Dataset

## Overview

MasriSpeech-ASR-Finetuning is a specialized subset of the MasriSpeech dataset, designed for fine-tuning Automatic Speech Recognition (ASR) models on Egyptian Arabic. It contains 5,500 professionally annotated audio samples totaling approximately 100 hours of natural Egyptian Arabic speech.
### Key Features

- High-quality 16 kHz speech recordings
- Natural, conversational Egyptian Arabic
- Speaker-balanced train/validation splits
- Comprehensive linguistic coverage
- Apache 2.0 license
## Dataset Summary
| Feature | Value |
|---|---|
| Total Samples | 5,500 |
| Train Samples | 5,000 |
| Validation Samples | 500 |
| Sampling Rate | 16 kHz |
| Total Duration | ~100 hours |
| Languages | Egyptian Arabic (arz), Arabic (ar) |
| Format | Parquet |
| Dataset Size | 1.19 GB |
| Download Size | 1.13 GB |
| Annotations | Transcripts |
## Dataset Structure

The dataset follows the Hugging Face `datasets` format with two splits:
```
DatasetDict({
    train: Dataset({
        features: ['audio', 'transcription'],
        num_rows: 5000
    })
    validation: Dataset({
        features: ['audio', 'transcription'],
        num_rows: 500
    })
})
```
### Data Fields

- `audio`: Audio feature object containing:
  - `array`: raw speech waveform (1D float array)
  - `path`: relative audio file path
  - `sampling_rate`: 16,000 Hz
- `transcription`: string containing the Egyptian Arabic transcription
## Data Statistics

### Split Distribution
| Split | Examples | Size (GB) | Avg. Words | Empty | Non-Arabic |
|---|---|---|---|---|---|
| Train | 5,000 | 1.11 | 13.34 | 0 | 0 |
| Validation | 500 | 0.08 | 9.60 | 0 | 0 |
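The "Avg. Words" column is a simple whitespace-token mean over the transcriptions. As a minimal sketch (assuming the transcriptions have already been collected into a list of strings), it can be reproduced like this:

```python
def average_words(transcriptions):
    """Mean whitespace-token count across a list of transcription strings."""
    counts = [len(t.split()) for t in transcriptions]
    return sum(counts) / len(counts) if counts else 0.0

# Toy example with three short transcriptions
print(average_words(["hello there", "one two three", "hi"]))  # 2.0
```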
### Linguistic Analysis
| Feature | Train Set | Validation Set |
|---|---|---|
| Top Words | في (2,025), و (1,698) | في (52), أنا (41) |
| Top Bigrams | (إن, أنا) (130) | (شاء, الله) (6) |
| Vocab Size | 3,845 | 789 |
| Unique Speakers | 114 | 10 |
Word Count Distributions (Left: Train, Right: Validation)
## How to Use

### Loading with Hugging Face
```python
from datasets import load_dataset
import IPython.display as ipd

# Load dataset (streaming recommended for large datasets)
ds = load_dataset('NightPrince/MasriSpeech-ASR-Finetuning',
                  split='train',
                  streaming=True)

# Get first sample
sample = next(iter(ds))
print(f"Transcript: {sample['transcription']}")

# Play audio
ipd.Audio(sample['audio']['array'],
          rate=sample['audio']['sampling_rate'])
```
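All audio in this dataset is 16 kHz. If you mix it with recordings at another rate, resample first; `datasets` can do this by casting the column with `Audio(sampling_rate=16000)`. For illustration only, here is a naive linear-interpolation resampler (in practice a polyphase resampler such as `scipy.signal.resample_poly` gives better quality):

```python
import numpy as np

def resample_linear(waveform, orig_sr, target_sr):
    """Naive linear-interpolation resampling (illustrative only)."""
    if orig_sr == target_sr:
        return waveform
    duration = len(waveform) / orig_sr
    n_target = int(round(duration * target_sr))
    old_times = np.arange(len(waveform)) / orig_sr
    new_times = np.arange(n_target) / target_sr
    return np.interp(new_times, old_times, waveform)

# Upsample one second of 8 kHz audio to 16 kHz
y = resample_linear(np.zeros(8000), 8000, 16000)
print(len(y))  # 16000
```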
### Preprocessing the Dataset
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch

# English checkpoint shown for illustration; for Arabic, a multilingual
# checkpoint such as "facebook/wav2vec2-large-xlsr-53" is a better starting
# point (its tokenizer must still be adapted to an Arabic vocabulary)
model_name = "facebook/wav2vec2-base-960h"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

def prepare_dataset(batch):
    audio = batch["audio"]
    # Extract audio array and sampling rate
    audio_array = audio["array"]
    sampling_rate = audio["sampling_rate"]
    # Process audio using the feature extractor only
    inputs = processor.feature_extractor(
        audio_array,
        sampling_rate=sampling_rate,
        return_tensors="pt"
    )
    batch["input_values"] = inputs.input_values[0]
    # Process the transcription using the tokenizer only
    labels = processor.tokenizer(
        batch["transcription"],
        return_tensors="pt"
    )
    batch["labels"] = labels["input_ids"][0]
    return batch

# Apply preprocessing to the entire dataset
print("Processing entire dataset...")
dataset = ds.map(prepare_dataset, remove_columns=["audio", "transcription"])
```
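`Trainer` must batch the variable-length `input_values` and `labels` produced above, which requires a padding data collator; `transformers` examples usually build one around `processor.pad`. The core idea, sketched here without a processor (the `ctc_collate` helper is hypothetical, not part of the library), is to pad audio with zeros and labels with `-100` so the CTC loss ignores padded label positions:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

def ctc_collate(features, pad_label_id=-100):
    """Pad variable-length inputs with 0.0 and labels with -100."""
    input_values = pad_sequence(
        [torch.as_tensor(f["input_values"], dtype=torch.float32) for f in features],
        batch_first=True, padding_value=0.0)
    labels = pad_sequence(
        [torch.as_tensor(f["labels"], dtype=torch.long) for f in features],
        batch_first=True, padding_value=pad_label_id)
    return {"input_values": input_values, "labels": labels}

batch = ctc_collate([
    {"input_values": [0.1, 0.2, 0.3], "labels": [5, 6]},
    {"input_values": [0.4], "labels": [7]},
])
print(batch["input_values"].shape)  # torch.Size([2, 3])
print(batch["labels"].tolist())     # [[5, 6], [7, -100]]
```

A collator like this can be passed to `Trainer` via its `data_collator` argument.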
### Fine-Tuning an ASR Model
```python
from transformers import AutoModelForCTC, TrainingArguments, Trainer

# Load pre-trained model
model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# Define training arguments
training_args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
    save_steps=10,
    save_total_limit=2,
    logging_dir="./logs",
    logging_steps=10,
)

# Initialize Trainer (for a real run, pass a padding data collator, and
# preprocess the validation split to use as eval_dataset rather than
# reusing the training data)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    eval_dataset=dataset,
)

# Train the model
trainer.train()
```
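ASR quality is conventionally reported as word error rate (WER). The `evaluate` library's `wer` metric or the `jiwer` package are the standard tools; the metric itself is just a word-level edit distance divided by the reference length, which can be sketched directly:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(ref), 1)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat sat"))  # one substitution out of three words
```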
### Evaluating the Model

```python
# Evaluate the model
eval_results = trainer.evaluate()
print("Evaluation Results:", eval_results)
```
### Exporting the Model

```python
# Save the fine-tuned model
model.save_pretrained("./fine_tuned_model")
processor.save_pretrained("./fine_tuned_model")
```
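At inference time, CTC output is typically decoded greedily: take the argmax token per frame, collapse consecutive repeats, and drop blank tokens. In `transformers` this is what `processor.batch_decode(torch.argmax(logits, dim=-1))` does; the collapse step itself, as a standalone sketch:

```python
def greedy_ctc_collapse(frame_ids, blank_id=0):
    """Collapse repeated frame predictions, then remove CTC blanks."""
    out, prev = [], None
    for t in frame_ids:
        if t != prev and t != blank_id:
            out.append(t)
        prev = t
    return out

# Frames: blank, 8, 8, blank, 9, 9  ->  tokens 8, 9
print(greedy_ctc_collapse([0, 8, 8, 0, 9, 9]))  # [8, 9]
```

The resulting token ids are then mapped back to characters with the tokenizer's vocabulary.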
## Citation
If you use MasriSpeech-ASR-Finetuning in your research or work, please cite it as follows:
```bibtex
@dataset{masrispeech_asr_finetuning,
  author    = {Yahya Muhammad Alnwsany},
  title     = {MasriSpeech-ASR-Finetuning: Egyptian Arabic Speech Fine-Tuning Dataset},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/collections/NightPrince/masrispeech-dataset-68594e59e46fd12c723f1544}
}
```
## Licensing

This dataset is released under the Apache 2.0 License. You are free to use, modify, and distribute the dataset, provided you comply with the terms of the license. For more details, see the LICENSE file.
## Acknowledgments
We would like to thank the following for their contributions and support:
- Annotators: For their meticulous work in creating high-quality transcriptions.
- Hugging Face: For providing tools and hosting the dataset.
- Open-Source Community: For their continuous support and feedback.
## Use Cases
MasriSpeech-ASR-Finetuning can be used in various applications, including:
- Fine-tuning Automatic Speech Recognition (ASR) models for Egyptian Arabic.
- Dialectal Arabic linguistic research.
- Speech synthesis and voice cloning.
- Training and benchmarking machine learning models for low-resource languages.
## Contributing
We welcome contributions to improve MasriSpeech-ASR-Finetuning. If you have suggestions, find issues, or want to add new features, please:
1. Fork the repository.
2. Create a new branch for your changes.
3. Submit a pull request with a detailed description of your changes.
For questions or feedback, feel free to contact the maintainer.
## Changelog

### [1.0.0] - 2025-08-02
- Initial release of MasriSpeech-ASR-Finetuning.
- Includes 5,500 audio samples with transcriptions.
- Train/validation splits provided.
- Dataset hosted on Hugging Face.