Tasks: Text-to-Speech
Modalities: Audio
Formats: soundfolder
Languages: Hausa
Size: 1K - 10K
License: not yet specified (see License section below)
Hausa TTS Dataset (HausaTTSEmbed)
This dataset contains 1,283 Hausa-language audio recordings with transcriptions for training Text-to-Speech (TTS) models.
Dataset Details
- Language: Hausa (ha)
- Total Samples: 1,283
- Speakers: 3 unique speakers
- Audio Format: WAV files
- Sample Rate: original recording rates (resampled to 24 kHz during training)
- Text Length: 4-141 characters (average: 24 characters)
Dataset Structure
Each example contains:
- audio: Audio file in WAV format
- text: Hausa transcription with proper diacritics (e.g., "Ansamu ɓaraka acikin shirin")
- speaker_id: UUID of the speaker (3 unique values)
Data Fields
```python
{
    'audio': {
        'path': str,           # Path to audio file
        'array': ndarray,      # Audio waveform
        'sampling_rate': int   # Sampling rate in Hz
    },
    'text': str,        # Hausa transcription
    'speaker_id': str   # Speaker identifier
}
```
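To confirm this schema after loading, you can inspect `dataset.features` (a minimal sketch; assumes `dataset` has been loaded as shown in the Usage section below):

```python
# Print the declared feature types: Audio, string text, string speaker_id.
# Assumes `dataset` was loaded as in the Usage section below.
print(dataset.features)

sample = dataset[0]
print(sample["audio"]["sampling_rate"], sample["audio"]["array"].shape)
print(sample["text"], "|", sample["speaker_id"])
```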
Usage
Recommended: Download All Files First
To ensure all audio files are available and avoid rate limits, authenticate first:
```python
from huggingface_hub import snapshot_download, HfApi
from datasets import load_dataset, Audio
import os
import time

# IMPORTANT: Login FIRST and WAIT for confirmation
# Method 1: Using token directly (RECOMMENDED for Colab)
from huggingface_hub import login

HF_TOKEN = "hf_YourTokenHere"  # Get from https://huggingface.co/settings/tokens
login(token=HF_TOKEN)

# Verify login worked
api = HfApi()
user_info = api.whoami(token=HF_TOKEN)
print(f"✓ Logged in as: {user_info['name']}")

# Small delay to ensure auth propagates
time.sleep(2)

# Download entire dataset (parquet + all audio files)
print("\nDownloading dataset (~2GB)...")
local_dir = snapshot_download(
    "Aybee5/HausaTTSEmbed",
    repo_type="dataset",
    local_dir="hausa_tts_data",
    token=HF_TOKEN,        # Pass token explicitly
    max_workers=1,         # Reduce concurrent requests to avoid rate limits
    resume_download=True   # Resume if interrupted (recent huggingface_hub always resumes)
)

# Load from downloaded files
dataset = load_dataset(
    "parquet",
    data_files=f"{local_dir}/data/*.parquet",
    split="train"
)

# Fix audio paths to absolute paths
# (**x must come first: in a dict literal the later "audio" key wins)
dataset = dataset.map(lambda x: {**x, "audio": os.path.join(local_dir, x["audio"])})

# Cast to Audio type at the 24 kHz training target
dataset = dataset.cast_column("audio", Audio(sampling_rate=24000))
print(f"✓ Loaded {len(dataset)} samples")

# Access sample
sample = dataset[0]
print(f"Text: {sample['text']}")
print(f"Audio shape: {sample['audio']['array'].shape}")
```
Alternative: Interactive Login (prompts for token)
```python
from huggingface_hub import login, snapshot_download
import time

# This will prompt you to paste your token
login()
time.sleep(2)  # Wait for auth to propagate

# Then download
local_dir = snapshot_download(
    "Aybee5/HausaTTSEmbed",
    repo_type="dataset",
    local_dir="hausa_tts_data",
    max_workers=1  # Reduce concurrent requests
)
```
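In a notebook environment, `notebook_login()` from `huggingface_hub` is another option; it renders a token input widget instead of a terminal prompt:

```python
from huggingface_hub import notebook_login

# Renders an interactive token widget in Jupyter/Colab
notebook_login()
```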
For Unsloth TTS Training (Complete Code)
Use this complete code in your Unsloth/Colab notebook:
```python
from huggingface_hub import snapshot_download, login, HfApi
from datasets import load_dataset, Audio
import os
import time

# ==================== STEP 1: AUTHENTICATE ====================
# Replace with your actual token from https://huggingface.co/settings/tokens
HF_TOKEN = "hf_YourTokenHere"

print("Authenticating with HuggingFace...")
login(token=HF_TOKEN)

# Verify authentication
api = HfApi()
user_info = api.whoami(token=HF_TOKEN)
print(f"✓ Logged in as: {user_info['name']}\n")

# Wait for auth to propagate
time.sleep(2)

# ==================== STEP 2: DOWNLOAD DATASET ====================
print("Downloading Hausa TTS dataset (~2GB)...")
print("Using reduced concurrency to avoid rate limits...\n")

local_dir = snapshot_download(
    "Aybee5/HausaTTSEmbed",
    repo_type="dataset",
    local_dir="/content/hausa_tts",  # Use /content/ for Colab
    token=HF_TOKEN,   # Pass token explicitly
    max_workers=1,    # Single-threaded to avoid rate limits
    resume_download=True
)
print(f"✓ Downloaded to: {local_dir}\n")

# ==================== STEP 3: LOAD DATASET ====================
raw_ds = load_dataset(
    "parquet",
    data_files=f"{local_dir}/data/*.parquet",
    split="train"
)

# ==================== STEP 4: FIX AUDIO PATHS ====================
# Keep **x first so the rebuilt "audio" path is not overwritten
raw_ds = raw_ds.map(lambda x: {**x, "audio": os.path.join(local_dir, x["audio"])})

# ==================== STEP 5: HANDLE SPEAKERS ====================
speaker_key = "source"
if "source" not in raw_ds.column_names and "speaker_id" not in raw_ds.column_names:
    print("Unsloth: No speaker found, adding default source")
    new_column = ["0"] * len(raw_ds)
    raw_ds = raw_ds.add_column("source", new_column)
elif "source" not in raw_ds.column_names and "speaker_id" in raw_ds.column_names:
    speaker_key = "speaker_id"

# ==================== STEP 6: RESAMPLE AUDIO ====================
target_sampling_rate = 24000
raw_ds = raw_ds.cast_column("audio", Audio(sampling_rate=target_sampling_rate))
print(f"✓ Dataset ready: {len(raw_ds)} samples")
print(f"✓ Speaker column: {speaker_key}\n")

# ==================== STEP 7: OPTIONAL SPLIT ====================
split_ds = raw_ds.train_test_split(test_size=0.1, seed=42)
train_ds = split_ds["train"]
val_ds = split_ds["test"]
print(f"✓ Train: {len(train_ds)} samples")
print(f"✓ Validation: {len(val_ds)} samples")

# Continue with your Unsloth training!
```
Key Changes to Avoid Rate Limits:
- ✅ Pass `token=HF_TOKEN` explicitly to `snapshot_download()`
- ✅ Set `max_workers=1` to reduce concurrent requests
- ✅ Add `time.sleep(2)` after login to ensure auth propagates
- ✅ Verify authentication with `api.whoami()` before downloading
- ✅ Use `resume_download=True` to handle interruptions
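If downloads still fail with HTTP 429 errors, a retry wrapper with backoff is a reasonable fallback (a minimal sketch: `download_with_retries` is a hypothetical helper, not part of `huggingface_hub`, and the `max_retries`/`delay` values are illustrative):

```python
import time
from huggingface_hub import snapshot_download
from huggingface_hub.utils import HfHubHTTPError

def download_with_retries(repo_id, max_retries=5, delay=30, **kwargs):
    """Retry snapshot_download with linear backoff on HTTP errors such as 429."""
    for attempt in range(1, max_retries + 1):
        try:
            return snapshot_download(repo_id, repo_type="dataset", **kwargs)
        except HfHubHTTPError as err:
            if attempt == max_retries:
                raise
            wait = delay * attempt
            print(f"Attempt {attempt} failed ({err}); retrying in {wait}s...")
            time.sleep(wait)

local_dir = download_with_retries(
    "Aybee5/HausaTTSEmbed",
    local_dir="hausa_tts_data",
    max_workers=1,
)
```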
With Transformers
```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("your-tts-model")

def preprocess_function(examples):
    audio_arrays = [x["array"] for x in examples["audio"]]
    inputs = processor(
        text=examples["text"],
        audio=audio_arrays,
        sampling_rate=24000,
        return_tensors="pt",
        padding=True
    )
    return inputs

# Apply preprocessing
processed_ds = dataset.map(
    preprocess_function,
    batched=True,
    remove_columns=dataset.column_names
)
```
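From here, the processed dataset can be wrapped in a standard PyTorch `DataLoader` (a minimal sketch; the batch size is illustrative, and if sequence lengths differ across `map` batches you will need a custom `collate_fn` that re-pads):

```python
import torch
from torch.utils.data import DataLoader

# Expose the processed columns as PyTorch tensors
processed_ds.set_format("torch")

# Illustrative batch size; tune for your GPU memory
loader = DataLoader(processed_ds, batch_size=8, shuffle=True)

batch = next(iter(loader))
print({k: v.shape for k, v in batch.items() if torch.is_tensor(v)})
```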
Dataset Statistics
- Total Samples: 1,283
- Unique Speakers: 3
- Text Statistics:
  - Average length: 24.0 characters
  - Min length: 4 characters
  - Max length: 141 characters
- Language: Hausa with proper Unicode diacritics
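These figures can be re-derived from the loaded dataset (a minimal sketch; assumes `dataset` from the Usage section):

```python
from collections import Counter

lengths = [len(t) for t in dataset["text"]]
print(f"Samples: {len(dataset)}")
print(f"Text length: min={min(lengths)}, max={max(lengths)}, "
      f"avg={sum(lengths) / len(lengths):.1f}")
print(f"Speaker distribution: {Counter(dataset['speaker_id'])}")
```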
Data Source
This dataset was created from Mimic Studio recordings for Hausa-language TTS development.
Intended Use
This dataset is intended for:
- Training Hausa Text-to-Speech models
- Fine-tuning multilingual TTS models on Hausa
- Research in low-resource language TTS
- Multi-speaker TTS model development
Limitations
- Only 3 speakers, which may limit speaker diversity in trained models
- Relatively small dataset size (1,283 samples)
- Audio quality depends on the original recording conditions (see the inspection sketch below)
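A quick way to screen clips for obvious quality problems is to look at durations and peak levels (a minimal sketch; `audio_summary` is a hypothetical helper and the thresholds are illustrative):

```python
import numpy as np

def audio_summary(example):
    """Duration and peak amplitude of one decoded example."""
    audio = example["audio"]
    arr = np.asarray(audio["array"])
    return {
        "duration_s": len(arr) / audio["sampling_rate"],
        "peak": float(np.abs(arr).max()) if len(arr) else 0.0,
    }

stats = dataset.map(audio_summary)
durations = stats["duration_s"]
print(f"Duration: min={min(durations):.2f}s, max={max(durations):.2f}s")

# Flag near-silent or clipped clips (illustrative thresholds)
flagged = [i for i, p in enumerate(stats["peak"]) if p < 0.01 or p >= 1.0]
print(f"Flagged {len(flagged)} clips for manual review")
```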
Citation
If you use this dataset, please cite:
```bibtex
@dataset{hausa_tts_embed,
  author    = {Aybee5},
  title     = {Hausa TTS Dataset (HausaTTSEmbed)},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/Aybee5/HausaTTSEmbed}
}
```
License
A license has not yet been specified for this dataset.
Contact
For questions or issues regarding this dataset, please open an issue in the dataset repository.