
Silencio Network: Multilingual Accent Speech Dataset (Sample)

Overview

This is a crowdsourced, multilingual speech dataset covering accented English and non-English speech, designed for model training, benchmarking, and acoustic analysis. It emphasizes accent variation, short-form scripted prompts, and spontaneous free speech. All recordings were produced by contributors using their own devices, with Whisper-generated transcripts provided for every sample.

The dataset is structured for direct use in ASR, TTS, accent-classification, diarization-adjacent analysis, speech segmentation, and embedding evaluation.

Languages and Accents

This dataset covers five language–region pairs (to learn about other combinations, please reach out to us):

  • English (China): English spoken with a Mandarin-influenced accent
  • English (Nigeria): Nigerian-accented English
  • English (United States): American English
  • German (Germany): Native German speakers
  • Spanish (Mexico): Native Mexican Spanish speakers

All recordings are stored as 48 kHz WAV files.
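Many common ASR models (e.g. Whisper, wav2vec 2.0) expect 16 kHz input, and since 48 kHz is an exact integer multiple of 16 kHz, downsampling is straightforward. The sketch below is a naive decimation with no anti-aliasing filter; a production pipeline should use a proper resampler such as librosa.resample or torchaudio.transforms.Resample instead.

```python
def downsample_48k_to_16k(samples):
    """Naively decimate 48 kHz audio to 16 kHz by keeping every 3rd sample.

    48_000 / 16_000 == 3, so the ratio is an exact integer. Note: real
    pipelines should apply a low-pass (anti-aliasing) filter first, e.g.
    via librosa.resample or torchaudio.transforms.Resample.
    """
    return samples[::3]

one_second_48k = [0.0] * 48_000   # one second of silence at 48 kHz
one_second_16k = downsample_48k_to_16k(one_second_48k)
print(len(one_second_16k))        # 16000
```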

Speech Types

Each sample belongs to one of three categories:

  • free_speech: unscripted speech on a provided topic
  • keywords: short isolated prompts containing specific phrases or terms
  • monologues: longer scripted passages

These values appear in the field type_of_script.
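Selecting a single speech type then amounts to filtering rows on the type_of_script field. A minimal sketch over toy in-memory rows (the field names follow the schema below; the example values are invented):

```python
# Toy rows mimicking the dataset's type_of_script field
# (the three values come from the card; ids/durations are made up).
rows = [
    {"id": 1, "type_of_script": "free_speech", "duration": 41.2},
    {"id": 2, "type_of_script": "keywords",    "duration": 3.1},
    {"id": 3, "type_of_script": "monologues",  "duration": 65.0},
    {"id": 4, "type_of_script": "keywords",    "duration": 2.4},
]

keywords_only = [r for r in rows if r["type_of_script"] == "keywords"]
print([r["id"] for r in keywords_only])  # [2, 4]
```

The same filter applies unchanged after loading a partition into pandas, e.g. `df[df["type_of_script"] == "keywords"]`.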

Recording Conditions

All data is crowdsourced. Contributors record themselves using their available hardware and environment; conditions therefore vary naturally across microphones, devices, and noise profiles. No studio-grade normalisation or homogenisation is applied.

Transcription

Transcriptions are machine-generated using OpenAI Whisper, preserving its segmentation structure where applicable.

Dataset Statistics

Durations are given in hours. Counts reflect samples within each (language, region, type_of_script) partition.

English (China)

type_of_script  duration_hrs  recordings  speakers
free_speech     0.99          72          19
keywords        0.48          57          10
monologues      0.98          56          11

English (Nigeria)

type_of_script  duration_hrs  recordings  speakers
free_speech     0.98          75          65
keywords        0.99          141         101
monologues      0.99          49          32

English (United States)

type_of_script  duration_hrs  recordings  speakers
free_speech     0.99          80          35
keywords        0.99          119         40
monologues      0.99          78          27

German (Germany)

type_of_script  duration_hrs  recordings  speakers
free_speech     0.98          99          34
keywords        0.99          152         37
monologues      0.98          77          27

Spanish (Mexico)

type_of_script  duration_hrs  recordings  speakers
free_speech     0.98          90          6
keywords        0.05          6           2
monologues      0.70          45          9
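The tables above can be aggregated into per-language and overall totals; the sketch below simply sums the duration_hrs values as listed:

```python
# duration_hrs values copied from the per-language tables above
# (order: free_speech, keywords, monologues).
durations = {
    "english_china":         [0.99, 0.48, 0.98],
    "english_nigeria":       [0.98, 0.99, 0.99],
    "english_united_states": [0.99, 0.99, 0.99],
    "german_germany":        [0.98, 0.99, 0.98],
    "spanish_mexico":        [0.98, 0.05, 0.70],
}

per_language = {lang: round(sum(hrs), 2) for lang, hrs in durations.items()}
total_hours = round(sum(sum(hrs) for hrs in durations.values()), 2)
print(per_language["spanish_mexico"], total_hours)  # 1.73 13.06
```

So this sample totals roughly 13 hours of audio, with Spanish (Mexico) the smallest partition at about 1.7 hours.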

File Structure

data/
    english_china/
        train-0000.parquet
    english_nigeria/
        train-0000.parquet
    english_united_states/
        train-0000.parquet
    german_germany/
        train-0000.parquet
    spanish_mexico/
        train-0000.parquet

Each Parquet file contains a mix of free_speech, keywords, and monologues samples.
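The partitions follow a consistent `data/{language}_{region}/train-0000.parquet` naming scheme, so paths can be built programmatically. A small sketch (the helper name is ours, not part of the dataset):

```python
def partition_path(language, region):
    """Build the Parquet path for a (language, region) partition.

    Follows the data/{language}_{region}/train-0000.parquet layout shown
    above: names are lower-cased and spaces become underscores.
    """
    folder = f"{language}_{region}".lower().replace(" ", "_")
    return f"data/{folder}/train-0000.parquet"

print(partition_path("english", "united states"))
# data/english_united_states/train-0000.parquet
```

The resulting files can then be read with, e.g., `pandas.read_parquet` or passed to `datasets.load_dataset("parquet", data_files=...)`.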

Feature Schema

All configurations share the same feature structure:

  • id: integer (unique identifier)
  • speaker_id: string (hashed or anonymized speaker ID)
  • gender: string (speaker gender)
  • ethnicity: string (speaker ethnicity)
  • occupation: float (occupation or profession, stored as float per original schema)
  • country_code: string (ISO 3166-1 alpha-2 code)
  • birth_place: string (country or region of birth)
  • mother_tongue: string (native language)
  • dialect: string (regional dialect)
  • year_of_birth: int (birth year, YYYY)
  • years_at_birth_place: int (years lived at birth place)
  • languages_data: string (serialized language–proficiency data)
  • os: string (recording operating system)
  • device: string (recording device type)
  • browser: string (browser used if web-based)
  • duration: float (seconds) (audio length)
  • emotions: string (brace-formatted emotion labels)
  • language: string (primary language of the recording)
  • location: string (recording location category)
  • noise_sources: string (brace-formatted background noise labels)
  • script_id: int (script template identifier)
  • type_of_script: string {free_speech, keywords, monologues} (script category)
  • script: string (text intended to be spoken)
  • transcript: string (Whisper-generated transcription)
  • transcription_segments: string (serialized segmentation with timing and word data)
  • audio: WAV audio object (associated audio file)
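Several fields (emotions, noise_sources, languages_data, transcription_segments) are stored as serialized strings. The card does not document the exact encoding ("brace-formatted" suggests either JSON or Python literal syntax), so a tolerant parser is a reasonable starting point; this sketch tries JSON first and falls back to Python literals:

```python
import ast
import json

def parse_serialized(value):
    """Best-effort parse of a serialized field such as emotions,
    noise_sources, or transcription_segments.

    The exact encoding is undocumented, so try JSON first, then Python
    literal syntax (single quotes etc.). Return the raw string unchanged
    if neither parse succeeds.
    """
    for parser in (json.loads, ast.literal_eval):
        try:
            return parser(value)
        except (ValueError, SyntaxError):
            continue
    return value

print(parse_serialized('{"emotions": ["neutral"]}'))  # JSON-style braces
print(parse_serialized("{'emotions': ['neutral']}"))  # Python-literal braces
```

Inspect a few rows of each serialized column before relying on this in a pipeline, since the actual encoding may differ per field.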

Licensing

Released under CC BY-NC 4.0.
Commercial use is not permitted. Attribution to Silencio Network is required for any publication or derivative dataset.

Intended Use

Suitable for:

  • accent-conditioned ASR training
  • multilingual speech recognition
  • TTS voicebank generation
  • speaker embedding and similarity evaluation
  • robustness benchmarking
  • keyword-spotting models
  • segmentation and VAD evaluation

Limitations

  • Transcripts are automatically generated. Errors may be present.
  • Crowdsourced device diversity introduces variable noise levels.

Citation

@dataset{silencio_network_speech_2025,
    title        = {Silencio Network Multilingual Accent Speech Corpus},
    author       = {Silencio Network},
    year         = {2025},
    license      = {CC BY-NC 4.0}
}