---
license: cc-by-nc-sa-4.0
task_categories:
  - question-answering
language:
  - ar
  - en
tags:
  - question-answering
  - cultural-aligned
pretty_name: 'SpokenNativQA: Multilingual Everyday Spoken Queries for LLMs'
size_categories:
  - 10K<n<100K
dataset_info:
  - config_name: Arabic-ASR-Azure
    splits:
      - name: test
        num_examples: 988
  - config_name: Arabic-ASR-Whisper
    splits:
      - name: test
        num_examples: 985
  - config_name: Arabic-ASR-Fanar-Aura
    splits:
      - name: test
        num_examples: 988
  - config_name: Arabic-ASR-Google
    splits:
      - name: test
        num_examples: 985
  - config_name: English-ASR-Azure
    splits:
      - name: test
        num_examples: 2322
  - config_name: English-ASR-Whisper
    splits:
      - name: test
        num_examples: 2322
  - config_name: English-ASR-Fanar-Aura
    splits:
      - name: test
        num_examples: 2322
  - config_name: English-ASR-Google
    splits:
      - name: test
        num_examples: 2322
configs:
  - config_name: Arabic-ASR-Azure
    data_files:
      - split: test
        path: arabic_qa/spokenqa_arabic_qa_test_azure_asr.json
  - config_name: Arabic-ASR-Whisper
    data_files:
      - split: test
        path: arabic_qa/spokenqa_arabic_qa_test_whisper_asr.json
  - config_name: Arabic-ASR-Fanar-Aura
    data_files:
      - split: test
        path: arabic_qa/spokenqa_arabic_qa_test_fanar_asr.json
  - config_name: Arabic-ASR-Google
    data_files:
      - split: test
        path: arabic_qa/spokenqa_arabic_qa_test_google_asr.json
  - config_name: English-ASR-Azure
    data_files:
      - split: test
        path: english_qa/spokenqa_english_qa_test_azure_asr.json
  - config_name: English-ASR-Whisper
    data_files:
      - split: test
        path: english_qa/spokenqa_english_qa_test_whisper_asr.json
  - config_name: English-ASR-Fanar-Aura
    data_files:
      - split: test
        path: english_qa/spokenqa_english_qa_test_fanar_asr.json
  - config_name: English-ASR-Google
    data_files:
      - split: test
        path: english_qa/spokenqa_english_qa_test_google_asr.json
---

SpokenNativQA: Multilingual Everyday Spoken Queries for LLMs

The SpokenNativQA dataset consists of question-answer (QA) pairs, where queries are sourced from real users and answers are manually reviewed and edited. The dataset covers a diverse range of 18 topics that reflect culturally and regionally specific knowledge, as well as everyday queries. These topics include animals, business, clothing, education, events, food and drinks, general knowledge, geography, immigration, language, literature, names and persons, plants, religion, sports and games, tradition, travel, and weather.

SpokenNativQA provides multilingual test sets of everyday spoken questions to evaluate large language models (LLMs) and speech processing systems. The dataset contains Arabic and English queries, each transcribed by multiple automatic speech recognition (ASR) systems.

Note: This repository includes only the wav files used for evaluation. The full dataset reported in the paper may be made available upon request to the authors.

Directory Overview

The dataset is organized into two main directories:

  • arabic_qa/

    • spokenqa_arabic_qa_test_azure_asr.jsonl
    • spokenqa_arabic_qa_test_fanar_asr.jsonl
    • spokenqa_arabic_qa_test_google_asr.jsonl
    • spokenqa_arabic_qa_test_whisper_asr.jsonl
    • spokenqa_arabic_qa_test.jsonl
    • speech/ -- wav files
  • english_qa/

    • spokenqa_english_qa_test_azure_asr.jsonl
    • spokenqa_english_qa_test_fanar_asr.jsonl
    • spokenqa_english_qa_test_google_asr.jsonl
    • spokenqa_english_qa_test_whisper_asr.jsonl
    • spokenqa_english_qa_test.jsonl
    • speech/ -- wav files

Dataset Structure and Format

Each .jsonl file contains one JSON object per line. The typical structure includes:

  • lang: The language of the spoken query (e.g., "arabic", "english").
  • data_id: A unique identifier for the data instance.
  • file_name: The name of the audio file.
  • file_path: The relative path of the audio file.
  • question: The intended question in text form (reference).
  • answer: The expected answer or reference answer for the question.
  • location: The geographical location where the query was recorded.
  • asr_text: The text output from the ASR system.

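Because each line is an independent JSON object, the files can be parsed line by line with the standard library. A minimal sketch (the `load_jsonl` helper name is illustrative, not part of the dataset tooling):

```python
import json

def load_jsonl(path):
    """Read a .jsonl file: one JSON object per non-empty line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Usage, assuming a local copy of the repository:
# records = load_jsonl("arabic_qa/spokenqa_arabic_qa_test_whisper_asr.jsonl")
# print(records[0]["question"], "->", records[0]["asr_text"])
```

The resulting list of dicts can also be passed directly to `pandas.DataFrame` for tabular analysis.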
Example of a JSON Entry

{
  "lang": "arabic",
  "data_id": "3cdfcfd1acb722617ec8bbe6808114bc",
  "file_name": "3cdfcfd1acb722617ec8bbe6808114bc_1724222501325.wav",
  "file_path": "speech/3cdfcfd1acb722617ec8bbe6808114bc_1724222501325.wav",
  "question": "من هو الشاعر الذي سجن؟",
  "answer": "وهي القصائد التي كتبها أبو فراس الحمداني فترة أسره عند الروم في سجن خرشنة، وعرفت باسم الروميات نسبة لمكان أسره، وقد تميزت هذا القصائد بجزالتها وقوتها ورصانتها، وصدق عاطفتها.",
  "location": "qatar",
  "asr_text": "من هو الشاعر الذي زعل؟"
}
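In this example the ASR system misrecognized the final word of the reference question. Pairing the question and asr_text fields lets one quantify such errors, e.g. with a word-level edit distance. A minimal sketch (this metric code is illustrative, not part of the dataset's evaluation scripts):

```python
def word_edit_distance(ref, hyp):
    """Levenshtein distance over whitespace-separated tokens."""
    r, h = ref.split(), hyp.split()
    d = list(range(len(h) + 1))  # DP row: distances for an empty reference prefix
    for i, rw in enumerate(r, 1):
        prev, d[0] = d[0], i
        for j, hw in enumerate(h, 1):
            # deletion, insertion, substitution/match
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (rw != hw))
    return d[len(h)]

def wer(ref, hyp):
    """Word error rate of a hypothesis against a reference."""
    return word_edit_distance(ref, hyp) / max(len(ref.split()), 1)

# The record above: one of five words was misrecognized.
ref = "من هو الشاعر الذي سجن؟"
hyp = "من هو الشاعر الذي زعل؟"
print(wer(ref, hyp))  # 1 substitution over 5 words -> 0.2
```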

Experimental Scripts:

All experimental scripts are available as part of the LLMeBench framework: https://github.com/qcri/LLMeBench.

License

This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).

Citation

If you use this dataset in your research, please cite our paper.

@inproceedings{alam2025spokennativqa,
  title     = {SpokenNativQA: Multilingual Everyday Spoken Queries for LLMs},
  author    = {Firoj Alam and Md Arid Hasan and Shammur Absar Chowdhury},
  booktitle = {Proceedings of the 26th Interspeech Conference (Interspeech 2025)},
  year      = {2025},
  address   = {Rotterdam, The Netherlands},
  month     = aug,
  organization = {ISCA},
}