---
dataset_info:
  features:
  - name: question_id
    dtype: string
  - name: question
    dtype: string
  - name: options
    struct:
    - name: A
      dtype: string
    - name: B
      dtype: string
    - name: C
      dtype: string
    - name: D
      dtype: string
    - name: E
      dtype: string
    - name: F
      dtype: string
    - name: G
      dtype: string
  - name: answer
    dtype: string
  - name: src
    dtype: string
  splits:
  - name: test
    num_bytes: 109817
    num_examples: 100
  download_size: 49322
  dataset_size: 109817
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: apache-2.0
---
# M-ARC
A Hugging Face upload of M-ARC, a clinical QA benchmark designed to exploit LLMs' "inductive biases toward inflexible pattern matching from their training data rather than engaging in flexible reasoning." If you use this dataset, please cite the original authors using the citation below.
## Dataset Details

### Dataset Description
The dataset contains one split:
- test: 100 multiple-choice QA examples with up to seven answer options (choices A-G), as illustrated in the sketch below
### Dataset Sources
- Repository: https://github.com/dbernardo05/medARC-QA
- Paper: https://arxiv.org/pdf/2502.04381
## Direct Use
```python
import json

from datasets import load_dataset

if __name__ == "__main__":
    # load the test split
    dataset_test = load_dataset("mkieffer/M-ARC", split="test")

    print("\ntest split:\n", dataset_test)
    print("\ntest sample:\n", json.dumps(dataset_test[0], indent=2))
```
## Citation
```bibtex
@misc{kim2025limitationslargelanguagemodels,
  title={Limitations of Large Language Models in Clinical Problem-Solving Arising from Inflexible Reasoning},
  author={Jonathan Kim and Anna Podlasek and Kie Shidara and Feng Liu and Ahmed Alaa and Danilo Bernardo},
  year={2025},
  eprint={2502.04381},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2502.04381},
}
```