---
task_categories:
- translation
- audio-to-audio
- automatic-speech-recognition
- text-to-speech
language:
- ar
- ja
- pl
- en
---
# Multilingual Speech & Translation Dataset — EN↔JA/AR/PL (10 phrases, dual-take)

## Description
10 English source phrases with expert human translations into Japanese, Egyptian Arabic, and Polish. Each target phrase is recorded by 5 native speakers per language (two takes each). Audio is WAV, 48 kHz, mono, 16‑bit PCM. Translations are produced and QA’d by professional linguists; recordings follow consistent orthography and style (AR: Egyptian; JA/PL: standard). Each row carries an explicit consent flag.
## Contents
- 10 source phrases (English)
- 3 translation CSVs (JA, AR-EG, PL), 50 rows each (10 phrases × 5 speakers)
- 2 audio takes per row (per speaker)
- Speaker demographics (gender, age) and consent info
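The counts above multiply out as follows; a minimal Python sketch, using only the figures stated in this card:

```python
# Expected dataset sizes, derived from the counts stated above:
# 10 phrases, 3 target languages, 5 speakers per language, 2 takes per row.
PHRASES = 10
LANGUAGES = ("ja", "ar", "pl")
SPEAKERS_PER_LANG = 5
TAKES = 2

rows_per_csv = PHRASES * SPEAKERS_PER_LANG          # 50 rows in each translation CSV
total_rows = rows_per_csv * len(LANGUAGES)          # 150 translation rows overall
total_audio_files = total_rows * TAKES              # 300 WAV files

print(rows_per_csv, total_rows, total_audio_files)  # 50 150 300
```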
## Use Cases
Multilingual TTS/ASR fine‑tuning, cross‑lingual speech translation, and multimodal alignment.
## License & Consent
Each translation row includes a `consent` flag (`TRUE` for every row in this release). Optional consent forms, where available, are stored under `/consent/{speaker_id}/`.
## Audio Specs
WAV, 48 kHz, mono, 16-bit PCM, loudness-normalized to −23 LUFS ±2 LU.
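Files can be checked against this spec with Python's standard `wave` module; a minimal sketch (the function name is ours, not part of the dataset):

```python
import wave

def check_audio_spec(path: str) -> bool:
    """Return True if a WAV file matches the stated spec:
    48 kHz sample rate, mono, 16-bit PCM."""
    with wave.open(path, "rb") as w:
        return (
            w.getframerate() == 48_000   # 48 kHz
            and w.getnchannels() == 1    # mono
            and w.getsampwidth() == 2    # 2 bytes per sample = 16-bit
        )
```

Note that `wave` reads only the header fields; loudness (−23 LUFS ±2 LU) would need a separate measurement tool.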
## Quality & Provenance
- Professional translation with bilingual QA
- Native‑speaker recordings under studio conditions
- Two takes per item
- Terminology and orthography consistency
- Verified consent
## Data layout

```
/alconost-multilingual-speech-en-ja-ar-pl-v1/
├── README.md
├── LICENSE.txt
├── en_source.csv
├── ja_translations.csv
├── ar_translations.csv
├── pl_translations.csv
└── audio/
    ├── ja/
    ├── ar/
    └── pl/
```
Audio files are named per speaker folder:

```
audio/{lang}/{speaker_id}/seg_####_take{1|2}.wav
```
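This naming scheme can be parsed with a regular expression; a sketch (the sample path and `spk_01` speaker ID are invented for illustration):

```python
import re

# Matches audio/{lang}/{speaker_id}/seg_####_take{1|2}.wav
PATTERN = re.compile(
    r"audio/(?P<lang>ja|ar|pl)/(?P<speaker_id>[^/]+)/"
    r"seg_(?P<segment>\d{4})_take(?P<take>[12])\.wav"
)

def parse_audio_path(path: str) -> dict:
    """Split a dataset audio path into its lang/speaker/segment/take fields."""
    m = PATTERN.fullmatch(path)
    if m is None:
        raise ValueError(f"unexpected audio path: {path}")
    return m.groupdict()

# parse_audio_path("audio/ja/spk_01/seg_0007_take2.wav")
# → {'lang': 'ja', 'speaker_id': 'spk_01', 'segment': '0007', 'take': '2'}
```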
## CSV schemas
- `en_source.csv`: id, domain, source_lang, source_text
- `{ja,ar,pl}_translations.csv`: id, source_id, domain, target_text, take1, take2, speaker_id, gender, age, consent
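Joining a translations CSV back to the English source via `source_id` might look like this; only the column layout comes from the schemas above, and the two inline sample rows are invented for illustration:

```python
import csv
import io

# Invented sample data following the documented column layout.
en_source = """id,domain,source_lang,source_text
1,general,en,Hello
"""
ja_translations = """id,source_id,domain,target_text,take1,take2,speaker_id,gender,age,consent
1,1,general,こんにちは,audio/ja/spk_01/seg_0001_take1.wav,audio/ja/spk_01/seg_0001_take2.wav,spk_01,F,29,TRUE
"""

# Index source phrases by id, then pair each translation row with its source.
sources = {row["id"]: row for row in csv.DictReader(io.StringIO(en_source))}
pairs = []
for row in csv.DictReader(io.StringIO(ja_translations)):
    assert row["consent"] == "TRUE"  # every row in this release carries consent
    src = sources[row["source_id"]]
    pairs.append((src["source_text"], row["target_text"]))

print(pairs)  # [('Hello', 'こんにちは')]
```

In practice the CSV strings would be replaced by `open("en_source.csv")` and `open("ja_translations.csv")`.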
## Citation
Alconost, 2025. “Multilingual Speech & Translation Dataset (EN↔JA/AR/PL).”
## Alconost Linguistic Data Labeling Services
Alconost provides high-quality linguistic data labeling for AI teams worldwide — including parallel corpora, parallel texts, error annotation, multilingual glossaries, LQA/LQT, and finely curated audio and video datasets. Leveraging 20+ years of linguistic expertise, native-speaking specialists, and AI-enhanced workflows, we build clean, scalable datasets for ASR/TTS, LLMs, chatbots, and multilingual NLP. From annotation to QA, we help you train models that understand nuance, context, and dialect variation.
Contact: [email protected] to order or purchase datasets.