---

license: cc-by-4.0
size_categories: [10G<n<100G]
source_datasets: [original]
pretty_name: JunoBench
viewer: false
---


# 📘 JunoBench: A Benchmark Dataset of Crashes in Machine Learning Python Jupyter Notebooks

This repository contains the compiled benchmark dataset for the paper "[JunoBench: A Benchmark Dataset of Crashes in Machine Learning Python Jupyter Notebooks](https://arxiv.org/abs/2510.18013)". The code used to construct the benchmark is on GitHub: [JunoBench_construct](https://github.com/PELAB-LiU/JunoBench_construct). This dataset is suitable for studying crashes in ML notebooks (e.g., bug reproduction, detection, diagnosis, localization, and repair).

## 📂 Contents

This benchmark dataset contains 111 reproducible ML notebooks with crashes, each paired with a verifiable fix. It covers a broad range of popular ML libraries, including TensorFlow/Keras, PyTorch, Scikit-learn, Pandas, and NumPy, as well as notebook-specific out-of-order execution issues. To support reproducibility and ease of use, JunoBench offers a unified execution environment in which all crashes and fixes can be reliably reproduced.

The ML library distribution is as follows:

<p align="center">
<img src="src/crash_dist.png" alt="Crash distribution" width="60%">
</p>

The structure of the benchmark repository is the following:
- [benchmark_desc.xlsx](./benchmark_desc.xlsx): attributes of the benchmark dataset
- [ground_truth_crash_prediction.xlsx](./ground_truth_crash_prediction.xlsx): crash detection and diagnosis labels of the benchmark dataset
- [benchmark/](https://huggingface.co/datasets/PELAB-LiU/JunoBench/tree/main/benchmark): Kaggle notebooks with crashes, categorized by the ML library in use when the crash occurs
    - `library_id/`:
        - `library_id.ipynb`: the original Kaggle notebook
        - `library_id_reproduced.ipynb`: the crash-reproducing version of the original notebook
        - `library_id_fixed.ipynb`: the crash-fixed version of the original notebook
        - `data/` or `data_small/`: the dataset used by the notebook
        - `README.md`: the source and license of the dataset used by the notebook
- [requirements.txt](./requirements.txt): Python environment dependencies
- [auto_notebook_executer.py](./auto_notebook_executer.py): a tool for automatically executing notebooks; see the Usage section below for instructions
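The per-entry layout above can be sanity-checked with a small script. This is a sketch using only the standard library; `check_entry` is a hypothetical helper for illustration, not part of the repository:

```python
from pathlib import Path

def check_entry(entry_dir: str) -> list[str]:
    """Return the expected files missing from a benchmark entry folder."""
    d = Path(entry_dir)
    lib_id = d.name
    expected = [
        f"{lib_id}.ipynb",              # original Kaggle notebook
        f"{lib_id}_reproduced.ipynb",   # crash-reproducing version
        f"{lib_id}_fixed.ipynb",        # crash-fixed version
        "README.md",                    # dataset source and license
    ]
    missing = [name for name in expected if not (d / name).exists()]
    # the notebook's dataset lives in either data/ or data_small/
    if not (d / "data").is_dir() and not (d / "data_small").is_dir():
        missing.append("data/ or data_small/")
    return missing
```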


## 🛠 Usage

### 📄 Environment: using our shared Docker image

We provide a Docker image to ensure full reproducibility (including the Python version, system libraries, and all dependencies):

```sh
docker pull yarinamomo/kaggle_python_env:latest
```

To run the image:

```sh
docker run -v [volume mount path]:/junobench_env -w=/junobench_env -p 8888:8888 -it yarinamomo/kaggle_python_env:latest jupyter notebook --no-browser --ip="0.0.0.0" --notebook-dir=/junobench_env --allow-root
```

### 🐳 Execute notebooks with `auto_notebook_executer.py`

You can execute the notebooks with this tool from the command line:

```sh
python auto_notebook_executer.py path/to/target_notebook.ipynb
```

The tool automatically executes the cells in the target notebook according to their current execution counts (i.e., the exact order in which the notebook was last executed). If the target notebook is a reproduced version, the reproduced crash appears in the console. If it is a fixed version, no error should appear; instead, a generated notebook containing all executed cells and their output is written to the same folder.
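The core idea of replaying a notebook by its recorded execution counts can be sketched as follows. This is a simplified, standard-library-only illustration of the ordering step; the actual `auto_notebook_executer.py` may work differently:

```python
import json

def execution_order(notebook_path: str) -> list[str]:
    """Return code-cell sources in the order they were last executed."""
    with open(notebook_path, encoding="utf-8") as f:
        nb = json.load(f)
    # keep only code cells that were actually executed
    code_cells = [
        c for c in nb["cells"]
        if c["cell_type"] == "code" and c.get("execution_count") is not None
    ]
    # ascending execution_count reproduces the last interactive run;
    # counts may be non-contiguous, only their relative order matters
    code_cells.sort(key=lambda c: c["execution_count"])
    return ["".join(c["source"]) for c in code_cells]
```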

### 🐳 Execute notebooks manually

You can also execute the notebooks in a Jupyter notebook environment. The execution order should follow the execution counts in the notebooks (in ascending order; cells without an execution count can be skipped). There may be gaps in the numbering, but only the order matters.

For notebook-specific errors (NBspecific_1-20), the crash may be caused by a cell being executed more than once. In those cases, cells marked with "[reexecute]" or "[re-execute]" should be executed twice consecutively.
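The re-execution marker can be folded into a derived cell order by duplicating flagged cells. This is a hypothetical sketch of that one step; the benchmark's own tooling may handle markers differently:

```python
def expand_reexecuted(sources: list[str]) -> list[str]:
    """Duplicate cells tagged for re-execution so they run twice in a row."""
    markers = ("[reexecute]", "[re-execute]")
    expanded = []
    for src in sources:
        expanded.append(src)
        if any(m in src for m in markers):
            expanded.append(src)  # run the flagged cell a second time
    return expanded
```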



## 📝 License & Attribution

This project is for academic research purposes only.

Please consult [Kaggle's Terms of Use](https://www.kaggle.com/terms) for notebook reuse policies. For the datasets used in each notebook, please refer to the README.md file under the corresponding notebook folder for details.

If you believe any content should be removed, please contact us directly.

## 📬 Contact

For questions or concerns, please contact: [email protected]