---
license: mit
task_categories:
- text-generation
tags:
- reinforcement-learning
- llm
- agents
- tool-use
- reasoning
- knowledge
- deep-search
library_name: datasets
---

# Agentic Reinforced Policy Optimization (ARPO) Dataset

This repository contains the datasets associated with the paper **[Agentic Reinforced Policy Optimization](https://huggingface.co/papers/2507.19849)**.

## Abstract

Large-scale reinforcement learning with verifiable rewards (RLVR) has demonstrated its effectiveness in harnessing the potential of large language models (LLMs) for single-turn reasoning tasks. In realistic reasoning scenarios, LLMs can often utilize external tools to assist in task-solving processes. However, current RL algorithms inadequately balance the models' intrinsic long-horizon reasoning capabilities and their proficiency in multi-turn tool interactions. To bridge this gap, we propose Agentic Reinforced Policy Optimization (ARPO), a novel agentic RL algorithm tailored for training multi-turn LLM-based agents. Through preliminary experiments, we observe that LLMs tend to exhibit highly uncertain behavior, characterized by an increase in the entropy distribution of generated tokens, immediately following interactions with external tools. Motivated by this observation, ARPO incorporates an entropy-based adaptive rollout mechanism, dynamically balancing global trajectory sampling and step-level sampling, thereby promoting exploration at steps with high uncertainty after tool usage. By integrating an advantage attribution estimation, ARPO enables LLMs to internalize advantage differences in stepwise tool-use interactions. Our experiments across 13 challenging benchmarks in computational reasoning, knowledge reasoning, and deep search domains demonstrate ARPO's superiority over trajectory-level RL algorithms. Remarkably, ARPO achieves improved performance using only half of the tool-use budget required by existing methods, offering a scalable solution for aligning LLM-based agents with real-time dynamic environments. Our code and datasets are released at [https://github.com/dongguanting/ARPO](https://github.com/dongguanting/ARPO).

## Paper and Code

* **Paper on Hugging Face:** [Agentic Reinforced Policy Optimization](https://huggingface.co/papers/2507.19849)
* **arXiv:** [https://arxiv.org/abs/2507.19849](https://arxiv.org/abs/2507.19849)
* **GitHub Repository:** [https://github.com/dongguanting/ARPO](https://github.com/dongguanting/ARPO)

## Dataset Description

This dataset is integral to the Agentic Reinforced Policy Optimization (ARPO) framework, supporting the training and evaluation of multi-turn LLM-based agents. It is designed to strengthen LLMs' long-horizon reasoning and multi-turn tool interaction across several challenging domains.

## Dataset Structure

The ARPO dataset is released as a collection of parquet files, categorized for different stages of training and evaluation:

1. **Reasoning and Knowledge Dataset**:
   * `train_10k.parquet`: 10,000 training samples for mathematical and knowledge reasoning tasks.
   * `test.parquet`: 300 test samples drawn from 8 benchmarks: AIME24, AIME25, MATH500, GSM8K, HotpotQA, 2Wiki, MuSiQue, and Bamboogle.
2. **Deep Search Dataset**:
   * `hard_search.parquet`: 1,000 training samples, comprising 800 from `simpledeepsearch` and 200 from `webdancer`.
   * `gaia_test.parquet` / `hle_test.parquet`: Test samples from the GAIA and Humanity's Last Exam (HLE) benchmarks.

These datasets facilitate cold-start Supervised Fine-Tuning (SFT) and subsequent Agentic Reinforced Policy Optimization (ARPO) training; a minimal loading sketch follows.
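As a quick orientation, the snippet below sketches one way to load the reasoning/knowledge splits with the `datasets` library. The repo ID matches the clone commands under Sample Usage below; that the parquet files sit at the repo root is an assumption, so adjust `data_files` to the actual repository layout if needed.

```python
from datasets import load_dataset

# Minimal loading sketch: pull the reasoning/knowledge splits straight
# from the Hub. Assumes the parquet files sit at the repo root; adjust
# data_files if the repository layout differs.
reasoning = load_dataset(
    "dongguanting/ARPO-RL-Reasoning-10K",
    data_files={"train": "train_10k.parquet", "test": "test.parquet"},
)

print(reasoning)               # split names and sizes
print(reasoning["train"][0])   # inspect one training sample
```

The deep-search splits (`hard_search.parquet`, `gaia_test.parquet`, `hle_test.parquet`) can be loaded analogously, presumably from `dongguanting/ARPO-RL-DeepSearch-1K`.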
## Sample Usage

You can download the various ARPO datasets directly from Hugging Face using `git lfs`:

```bash
git lfs install
git clone https://huggingface.co/datasets/dongguanting/ARPO-SFT-54K
git clone https://huggingface.co/datasets/dongguanting/ARPO-RL-Reasoning-10K
git clone https://huggingface.co/datasets/dongguanting/ARPO-RL-DeepSearch-1K
```

For detailed instructions on how to use these datasets for SFT, ARPO training, and evaluation, please refer to the [official GitHub repository](https://github.com/dongguanting/ARPO).

## Citation

If you find this work helpful, please cite our paper:

```bibtex
@misc{dong2025arpo,
  title={Agentic Reinforced Policy Optimization},
  author={Guanting Dong and Hangyu Mao and Kai Ma and Licheng Bao and Yifei Chen and Zhongyuan Wang and Zhongxia Chen and Jiazhen Du and Huiyang Wang and Fuzheng Zhang and Guorui Zhou and Yutao Zhu and Ji-Rong Wen and Zhicheng Dou},
  year={2025},
  eprint={2507.19849},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2507.19849},
}
```