---
license: mit
task_categories:
- automatic-speech-recognition
language:
- en
tags:
- audio
- speech
- transcription
- asr
- voice-recording
size_categories:
- n<1K
dataset_info:
  features:
  - name: audio
    dtype: audio
    sampling_rate: 16000
  - name: transcript
    dtype: string
  splits:
  - name: train
    num_examples: 297
---
# Audio Transcription Dataset
This dataset contains 297 audio recordings with their corresponding transcriptions for automatic speech recognition (ASR) tasks.
## Dataset Description
This dataset includes:
- Audio files: High-quality voice recordings (.wav format)
- Transcriptions: Accurate text transcriptions of the spoken content
- Proper Audio feature type: Ready for model training (not just file paths!)
## Dataset Statistics
- Total samples: 297
- Audio format: WAV files at 16kHz sampling rate
- Average transcript length: 50.1 characters
- Language: English
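The headline numbers above can be reproduced directly from the loaded dataset. A minimal sketch, assuming the dataset ID shown in the Usage section below:

```python
from datasets import load_dataset

# Load the train split and recompute the statistics listed above
dataset = load_dataset("Aashish17405/audio-dataset", split="train")

print(f"Total samples: {dataset.num_rows}")
print(f"Sampling rate: {dataset[0]['audio']['sampling_rate']} Hz")

avg_len = sum(len(t) for t in dataset['transcript']) / dataset.num_rows
print(f"Average transcript length: {avg_len:.1f} characters")
```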
## Sample Data
| Audio File | Transcript |
|---|---|
| R1.wav | Hello! My name is Rocky. |
| R10.wav | I am speaking English for a voice recording. |
| R11.wav | This is a test sentence for training the model. |
## Usage
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Aashish17405/audio-dataset")

# Access audio data (proper Audio type, not string!)
audio_sample = dataset['train'][0]['audio']
print(f"Sampling rate: {audio_sample['sampling_rate']}")
print(f"Audio array shape: {audio_sample['array'].shape}")
print(f"Transcript: {dataset['train'][0]['transcript']}")

# Ready for model training with transformers
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Process audio
inputs = processor(audio_sample["array"], sampling_rate=audio_sample["sampling_rate"], return_tensors="pt")
```
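As a quick sanity check, the processed features can be passed straight to the model for zero-shot transcription. A minimal sketch that continues the snippet above:

```python
# Continues from the snippet above: zero-shot transcription of the first sample
predicted_ids = model.generate(inputs.input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]

print(f"Predicted: {transcription}")
print(f"Reference: {dataset['train'][0]['transcript']}")
```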
## Features

- ✅ Proper Audio Type: Audio column shows as an "Audio" feature, not "string" (see the check below)
- ✅ High Quality: Clear voice recordings
- ✅ Diverse Content: Various sentences and topics
- ✅ Training Ready: Formatted for immediate use with speech models
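The column typing can be confirmed directly from the dataset schema; a quick check, reusing the dataset ID from the Usage section:

```python
from datasets import load_dataset

# Inspect the schema: the audio column should be an Audio feature, not a string path
ds = load_dataset("Aashish17405/audio-dataset", split="train")
print(ds.features["audio"])       # e.g. Audio(sampling_rate=16000, ...)
print(ds.features["transcript"])  # e.g. Value(dtype='string')
```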
## Use Cases

- Fine-tuning speech recognition models (Whisper, Wav2Vec2, etc.; see the preprocessing sketch below)
- Voice training and accent recognition
- Speech-to-text model development
- Audio processing research
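For the fine-tuning use case, here is a minimal preprocessing sketch in the usual `transformers` style, mapping audio to encoder input features and transcripts to label ids. It is illustrative rather than a complete training script:

```python
from datasets import load_dataset
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
dataset = load_dataset("Aashish17405/audio-dataset", split="train")

def prepare(example):
    audio = example["audio"]
    # Encoder input: log-Mel spectrogram features
    example["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    # Decoder target: tokenized transcript
    example["labels"] = processor.tokenizer(example["transcript"]).input_ids
    return example

dataset = dataset.map(prepare, remove_columns=["audio", "transcript"])
```

From here, the processed columns can be fed to a seq2seq trainer together with a data collator that pads `input_features` and `labels`, as in the standard Whisper fine-tuning recipe.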
## License
MIT License - Free to use for research and commercial purposes.