---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- code
- python
- long-context
- coding
size_categories:
- 1K<n<10K
configs:
- config_name: 0k
  data_files:
  - split: test
    path: data/0k/test/data-*
  - split: train
    path: data/0k/train/data-*
  - split: validation
    path: data/0k/validation/data-*
  - split: prompt
    path: data/0k/prompt/data-*
- config_name: 1k
  data_files:
  - split: test
    path: data/1k/test/data-*
  - split: train
    path: data/1k/train/data-*
  - split: validation
    path: data/1k/validation/data-*
  - split: prompt
    path: data/1k/prompt/data-*
- config_name: 2k
  data_files:
  - split: test
    path: data/2k/test/data-*
  - split: train
    path: data/2k/train/data-*
  - split: validation
    path: data/2k/validation/data-*
  - split: prompt
    path: data/2k/prompt/data-*
- config_name: 4k
  data_files:
  - split: test
    path: data/4k/test/data-*
  - split: train
    path: data/4k/train/data-*
  - split: validation
    path: data/4k/validation/data-*
  - split: prompt
    path: data/4k/prompt/data-*
- config_name: 8k
  data_files:
  - split: test
    path: data/8k/test/data-*
  - split: train
    path: data/8k/train/data-*
  - split: validation
    path: data/8k/validation/data-*
  - split: prompt
    path: data/8k/prompt/data-*
- config_name: 16k
  data_files:
  - split: test
    path: data/16k/test/data-*
  - split: train
    path: data/16k/train/data-*
  - split: validation
    path: data/16k/validation/data-*
  - split: prompt
    path: data/16k/prompt/data-*
- config_name: 32k
  data_files:
  - split: test
    path: data/32k/test/data-*
  - split: train
    path: data/32k/train/data-*
  - split: validation
    path: data/32k/validation/data-*
  - split: prompt
    path: data/32k/prompt/data-*
- config_name: 64k
  data_files:
  - split: test
    path: data/64k/test/data-*
  - split: train
    path: data/64k/train/data-*
  - split: validation
    path: data/64k/validation/data-*
  - split: prompt
    path: data/64k/prompt/data-*
- config_name: 128k
  data_files:
  - split: test
    path: data/128k/test/data-*
  - split: train
    path: data/128k/train/data-*
  - split: validation
    path: data/128k/validation/data-*
  - split: prompt
    path: data/128k/prompt/data-*
- config_name: 196k
  data_files:
  - split: test
    path: data/196k/test/data-*
  - split: train
    path: data/196k/train/data-*
  - split: validation
    path: data/196k/validation/data-*
  - split: prompt
    path: data/196k/prompt/data-*
- config_name: 256k
  data_files:
  - split: test
    path: data/256k/test/data-*
  - split: train
    path: data/256k/train/data-*
  - split: validation
    path: data/256k/validation/data-*
  - split: prompt
    path: data/256k/prompt/data-*
- config_name: 512k
  data_files:
  - split: test
    path: data/512k/test/data-*
  - split: train
    path: data/512k/train/data-*
  - split: validation
    path: data/512k/validation/data-*
  - split: prompt
    path: data/512k/prompt/data-*
- config_name: 1m
  data_files:
  - split: test
    path: data/1m/test/data-*
  - split: train
    path: data/1m/train/data-*
  - split: validation
    path: data/1m/validation/data-*
  - split: prompt
    path: data/1m/prompt/data-*
dataset_info:
  features:
  - name: task_id
    dtype: int64
  - name: text
    dtype: string
  - name: code
    dtype: string
  - name: test_list
    sequence: string
  - name: test_setup_code
    dtype: string
  - name: challenge_test_list
    sequence: string
  - name: context
    dtype: string
  - name: context_id
    dtype: string
  - name: context_length_tokens
    dtype: int64
  - name: code_length_chars
    dtype: int64
  - name: dataset_version
    dtype: string
  splits:
  - name: test
    num_examples: 500
  - name: train
    num_examples: 374
  - name: validation
    num_examples: 90
  - name: prompt
    num_examples: 10
---
# MBPP Long-Context Dataset

## Overview
MBPP Long-Context is a benchmark dataset that combines coding problems from the MBPP (Mostly Basic Python Problems) dataset with long-context distractor text from BABILong. It evaluates code generation under long-context conditions, testing whether models can maintain their coding ability when the prompt is padded with large amounts of irrelevant text.
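The dataset can be loaded with the `datasets` library by selecting a context-length configuration and a split. A minimal sketch; the repository id below is a placeholder, so substitute this dataset's actual Hub path:

```python
from datasets import load_dataset

# Placeholder repository id: replace with this dataset's actual Hub path.
REPO_ID = "your-org/mbpp-long-context"

# Each configuration name corresponds to a target distractor length (0k, 1k, ..., 1m).
ds = load_dataset(REPO_ID, "128k", split="test")

sample = ds[0]
print(sample["task_id"], sample["context_length_tokens"])
print(sample["text"])
```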
## Dataset Structure

### Data Fields

Each sample contains the fields below; a short usage sketch follows the field descriptions.

#### Original MBPP Fields
- `task_id` (int): Unique task identifier
- `text` (str): Problem description
- `code` (str): Reference solution
- `test_list` (List[str]): Test cases (assertions)
- `test_setup_code` (str): Optional setup code
- `challenge_test_list` (List[str]): Additional test cases
#### Long-Context Fields
- `context` (str): Prepended distractor text from BABILong, ranging from 0 to 1M tokens depending on the configuration
- `context_id` (str): BABILong source identifier (e.g., "babilong_128k_qa1_sample_42")
- `context_length_tokens` (int): Token count, measured with the Llama tokenizer
#### Metadata
- `code_length_chars` (int): Reference solution length, used for difficulty tracking
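A minimal sketch of how these fields fit together at evaluation time, assuming a simple zero-shot setup: the distractor `context` is prepended to the problem statement, and a model's completion is checked against `test_list`. The prompt template and `run_tests` helper are illustrative, not part of the dataset.

```python
def build_prompt(sample: dict) -> str:
    """Prepend the BABILong distractor context to the MBPP problem statement."""
    tests = "\n".join(sample["test_list"])
    return (
        f"{sample['context']}\n\n"
        f"{sample['text']}\n"
        f"Your code should pass these tests:\n{tests}\n"
    )


def run_tests(generated_code: str, sample: dict) -> bool:
    """Execute a candidate solution against the sample's assertions."""
    namespace: dict = {}
    try:
        if sample["test_setup_code"]:
            exec(sample["test_setup_code"], namespace)
        exec(generated_code, namespace)
        for assertion in sample["test_list"]:
            exec(assertion, namespace)
        return True
    except Exception:
        return False
```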
### Data Splits
All configurations follow the original MBPP split structure:
- test: 500 samples (primary evaluation set)
- train: 374 samples
- validation: 90 samples
- prompt: 10 samples (few-shot examples)
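A hedged example of using the `prompt` split for few-shot demonstrations; the repository id is again a placeholder, and the few-shot format shown is just one reasonable choice:

```python
from datasets import load_dataset

REPO_ID = "your-org/mbpp-long-context"  # placeholder Hub path

# The 10-sample prompt split provides few-shot demonstrations;
# the 0k configuration carries no distractor context.
few_shot = load_dataset(REPO_ID, "0k", split="prompt")

# Format each demonstration as the problem statement followed by its reference solution.
shots = "\n\n".join(f"# Problem: {ex['text']}\n{ex['code']}" for ex in few_shot)
print(shots)
```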
## Creating the Dataset
To avoid confounding variables, contexts are assigned to tasks by stratified random assignment (a simplified sketch follows this list):

- Sort MBPP tasks by code length (a rough difficulty proxy)
- Collect distractor text from the BABILong qa1-qa10 splits
- Duplicate contexts as needed to match the task count (974 samples)
- Shuffle the contexts and assign them to the sorted tasks
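A sketch of this procedure under the stated assumptions; it is simplified, and the actual generation script, context pool, and random seed are not part of this card.

```python
import random


def assign_contexts(mbpp_tasks, babilong_contexts, seed=0):
    """Pair each MBPP task with a BABILong distractor context.

    Sorting tasks by solution length and shuffling the context pool keeps
    the context assignment decoupled from problem difficulty.
    """
    rng = random.Random(seed)

    # 1. Sort MBPP tasks by reference-solution length (difficulty proxy).
    tasks = sorted(mbpp_tasks, key=lambda t: len(t["code"]))

    # 2./3. Duplicate the BABILong contexts until they cover all tasks (974 samples).
    repeats = -(-len(tasks) // len(babilong_contexts))  # ceiling division
    pool = (babilong_contexts * repeats)[: len(tasks)]

    # 4. Shuffle the context pool and pair it with the sorted tasks.
    rng.shuffle(pool)
    return [{**task, "context": ctx} for task, ctx in zip(tasks, pool)]
```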
## Source Datasets

### MBPP (Mostly Basic Python Problems)
- Source: google-research-datasets/mbpp
- Size: 974 problems
- Paper: Program Synthesis with Large Language Models
### BABILong
- Source: RMT-team/babilong
- Content: `input` field from the qa1-qa10 splits
- Paper: BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Haystack