nielsr (HF Staff) committed · Commit 66f8eb4 · verified · 1 Parent(s): d8b54eb

Improve model card: add abstract, usage, features, and fix license metadata


This PR significantly enhances the model card by:

* **Correcting a typo in the metadata** (`licence` to `license`) to ensure proper classification and display on the Hub.
* **Adding the paper's abstract** to provide a comprehensive overview of the model's capabilities and research.
* **Including a detailed "Features" section** derived from the abstract to highlight key aspects of CODA.
* **Providing a "Usage" section** with installation and inference shell commands directly from the GitHub repository, making it easier for users to try out the model.
* **Adding the full BibTeX citation** for easier referencing.
* **Expanding the license information** in the content for clarity.

These changes will make the model card more informative and user-friendly.

Files changed (1): README.md (+115 -8)
README.md CHANGED
@@ -1,16 +1,123 @@
  ---
  datasets:
  - OS-Copilot/ScienceBoard-Traj
  library_name: transformers
  tags:
  - generated_from_trainer
  - R1-V
- licence: license
- license: apache-2.0
- base_model:
- - Qwen/Qwen2.5-VL-32B-Instruct
- pipeline_tag: image-text-to-text
  ---
- CODA: Coordinating the Cerebrum and Cerebellum for a Dual-Brain Computer Use Agent with Decoupled Reinforcement Learning
- https://huggingface.co/papers/2508.20096
- Check out our [repo](https://github.com/OpenIXCLab/CODA) and [paper](https://arxiv.org/abs/2508.20096) for more implementation details!
  ---
+ base_model:
+ - Qwen/Qwen2.5-VL-32B-Instruct
  datasets:
  - OS-Copilot/ScienceBoard-Traj
  library_name: transformers
+ license: apache-2.0
+ pipeline_tag: image-text-to-text
  tags:
  - generated_from_trainer
  - R1-V
  ---
+
+ # CODA: Coordinating the Cerebrum and Cerebellum for a Dual-Brain Computer Use Agent with Decoupled Reinforcement Learning
+
+ This repository contains the `CODA-PLANNER-TARS-32B` model, presented in the paper [CODA: Coordinating the Cerebrum and Cerebellum for a Dual-Brain Computer Use Agent with Decoupled Reinforcement Learning](https://huggingface.co/papers/2508.20096).
+
+ Check out our [GitHub repository](https://github.com/OpenIXCLab/CODA) for more implementation details! You can also find the paper on [arXiv](https://arxiv.org/abs/2508.20096).
+
+ ## Abstract
+ Autonomous agents for Graphical User Interfaces (GUIs) face significant challenges in specialized domains such as scientific computing, where both long-horizon planning and precise execution are required. Existing approaches suffer from a trade-off: generalist agents excel at planning but perform poorly in execution, while specialized agents demonstrate the opposite weakness. Recent compositional frameworks attempt to bridge this gap by combining a planner and an actor, but they are typically static and non-trainable, which prevents adaptation from experience. This is a critical limitation given the scarcity of high-quality data in scientific domains. To address these limitations, we introduce CODA, a novel and trainable compositional framework that integrates a generalist planner (Cerebrum) with a specialist executor (Cerebellum), trained via a dedicated two-stage pipeline. In the first stage, Specialization, we apply a decoupled GRPO approach to train an expert planner for each scientific application individually, bootstrapping from a small set of task trajectories. In the second stage, Generalization, we aggregate all successful trajectories from the specialized experts to build a consolidated dataset, which is then used for supervised fine-tuning of the final planner. This equips CODA with both robust execution and cross-domain generalization. Evaluated on four challenging applications from the ScienceBoard benchmark, CODA significantly outperforms baselines and establishes a new state of the art among open-source models.
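+
+ As a rough illustration of the second (Generalization) stage described above, the sketch below aggregates successful trajectories from the per-application experts into a single SFT dataset. It is only a conceptual sketch: the file layout and field names (`success`, `messages`) are assumptions for illustration, not the repository's actual data format.
+
+ ```python
+ # Conceptual sketch of the Generalization stage: keep only successful expert
+ # trajectories and merge them into one supervised fine-tuning dataset.
+ # Paths and field names are illustrative assumptions.
+ import json
+ from pathlib import Path
+
+ def aggregate_trajectories(expert_dirs, out_path="sft_dataset.jsonl"):
+     with open(out_path, "w") as out:
+         for expert_dir in expert_dirs:
+             for traj_file in sorted(Path(expert_dir).glob("*.json")):
+                 traj = json.loads(traj_file.read_text())
+                 if traj.get("success"):  # discard failed rollouts
+                     out.write(json.dumps({"messages": traj["messages"]}) + "\n")
+
+ aggregate_trajectories(["experts/celestia", "experts/kalgebra"])
+ ```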
+
+ ## Features
+
+ CODA introduces a novel and trainable compositional framework for GUI agents, designed with the following key features:
+
+ * **Dual-Brain Architecture**: Integrates a generalist planner (Cerebrum) with a specialist executor (Cerebellum).
+ * **Decoupled Reinforcement Learning**: Employs a dedicated two-stage pipeline (Specialization and Generalization) for training.
+ * **Robust Execution**: Achieves precise execution in specialized scientific computing domains.
+ * **Cross-Domain Generalization**: Demonstrates strong generalization capabilities across various scientific applications.
+ * **State-of-the-Art Performance**: Significantly outperforms baselines on the ScienceBoard benchmark.
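+
+ The dual-brain design above can be summarized as a simple loop: the planner (Cerebrum) reads the current observation and proposes the next sub-goal, and the executor (Cerebellum) grounds that sub-goal into a concrete GUI action. The sketch below is a minimal, hypothetical illustration of this loop; the object interfaces (`planner`, `executor`, `env`) are assumptions, not the repository's actual API (see `qwenvl_test.py` for the real evaluation loop).
+
+ ```python
+ # Minimal conceptual sketch of the Cerebrum/Cerebellum coordination loop.
+ # All interfaces here are illustrative assumptions.
+ def run_episode(planner, executor, env, max_steps=30):
+     obs = env.reset()  # initial screenshot plus the task description
+     for _ in range(max_steps):
+         subgoal = planner.plan(obs)  # high-level instruction, e.g. "open the File menu"
+         if subgoal == "DONE":
+             break
+         action = executor.ground(obs.screenshot, subgoal)  # low-level click/type action
+         obs = env.step(action)  # execute the action in the GUI environment
+     return env.evaluate()  # task success signal
+ ```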
+
+ ## Usage
+ For detailed installation instructions and inference examples, please refer to the official GitHub repository.
+
+ ### Installation
+ ```shell
+ conda create -n coda python=3.11
+ conda activate coda
+ pip install vllm==0.8.5.post1
+ ```
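+
+ As an optional sanity check (our suggestion, not a required step), you can confirm that the pinned vLLM version is active inside the new `coda` environment:
+
+ ```python
+ # Optional check that the expected vLLM version is installed.
+ import vllm
+ assert vllm.__version__ == "0.8.5.post1", f"unexpected vLLM version: {vllm.__version__}"
+ print("vLLM", vllm.__version__, "is ready")
+ ```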
+
+ ### Inference
+ Prepare the [ScienceBoard](https://github.com/OS-Copilot/ScienceBoard) environment:
+ replace the `sci` folder in ScienceBoard with our `ScienceBoard_CODA/sci` and put `qwenvl_test.py` under the ScienceBoard base folder.
+
+ ```shell
+ # Use the conda env (vllm==0.8.5.post1) to deploy the models and reproduce our results.
+ # Deploy the planner CODA-PLANNER-TARS-32B
+ vllm serve OpenIXCLab/CODA-PLANNER-TARS-32B \
+   --served-model-name "qwen32b" \
+   --host 0.0.0.0 \
+   --port "${PORT_1}" \
+   --tensor-parallel-size 4 &
+
+ # Deploy the executor UI-TARS-1.5-7B
+ CUDA_VISIBLE_DEVICES=4,5 vllm serve ByteDance-Seed/UI-TARS-1.5-7B \
+   --served-model-name "tars1.5-grounding" \
+   --host 0.0.0.0 \
+   --port "${PORT_2}" \
+   --tensor-parallel-size 2 &
+
+ # In the ScienceBoard env, run the agent evaluation.
+ export SOFTWARE='Celestia'
+ export SUBFOLDER="planner_ans"
+ export DEBUG_LOG=0
+ export SERVER_URL="http://YOUR.PLANNER.ADDR:PORT_1/v1/chat/completions" # qwen32b for the baseline, coda-1.0-32b for our planner
+ export EXECUTOR_URL="http://YOUR.EXECUTOR.ADDR:PORT_2" # UI-TARS-1.5 address
+ export MODEL_NAME="qwen32b"
+ export NO_CONTEXT_IMAGE=0
+ export SPLITE=8
+ export QWEN_PLANNER=1
+ export PLANNER_ANS=1
+
+ for i in {0..7}; do # run 8 VMs in parallel
+   export VM_PATH="vmware_vm_data/Ubuntu${i}/Ubuntu${i}.vmx"
+   # Set the VM index based on i
+   export INDEX=$i
+   if [ $i -eq 0 ]; then
+     # Process i=0: show output in the terminal
+     timeout 90m python qwenvl_test.py &
+   else
+     # Process i>0: redirect output to a log file
+     timeout 90m python qwenvl_test.py > "logs/vm${i}_output.log" 2>&1 &
+   fi
+
+   sleep 10s
+ done
+ wait
+ sleep 10s
+ echo "All tasks completed."
+ ```
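+
+ Because vLLM exposes an OpenAI-compatible API, you can also query the deployed planner directly. The sketch below is only an illustration of calling the served endpoint with a base64-encoded screenshot using the `openai` client; the prompt and file names are placeholders and do not reproduce the planner protocol implemented in `qwenvl_test.py`.
+
+ ```python
+ # Illustrative query against the planner served by vLLM above.
+ # The prompt and file names are placeholders; the real agent protocol
+ # lives in qwenvl_test.py and ScienceBoard_CODA/sci.
+ import base64
+ from openai import OpenAI
+
+ client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # use your PORT_1
+
+ with open("screenshot.png", "rb") as f:
+     image_b64 = base64.b64encode(f.read()).decode()
+
+ response = client.chat.completions.create(
+     model="qwen32b",  # the --served-model-name chosen above
+     messages=[{
+         "role": "user",
+         "content": [
+             {"type": "image_url",
+              "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
+             {"type": "text",
+              "text": "Task: open the rendering options dialog in Celestia. "
+                      "Propose the next sub-goal for the executor."},
+         ],
+     }],
+     max_tokens=512,
+ )
+ print(response.choices[0].message.content)
+ ```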
+
+ ## Citation
+ If you find our work helpful, please consider citing:
+ ```bibtex
+ @misc{sun2025codacoordinatingcerebrumcerebellum,
+       title={CODA: Coordinating the Cerebrum and Cerebellum for a Dual-Brain Computer Use Agent with Decoupled Reinforcement Learning},
+       author={Zeyi Sun and Yuhang Cao and Jianze Liang and Qiushi Sun and Ziyu Liu and Zhixiong Zhang and Yuhang Zang and Xiaoyi Dong and Kai Chen and Dahua Lin and Jiaqi Wang},
+       year={2025},
+       eprint={2508.20096},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV},
+       url={https://arxiv.org/abs/2508.20096},
+ }
+
+ @misc{sun2025seagentselfevolvingcomputeruse,
+       title={SEAgent: Self-Evolving Computer Use Agent with Autonomous Learning from Experience},
+       author={Zeyi Sun and Ziyu Liu and Yuhang Zang and Yuhang Cao and Xiaoyi Dong and Tong Wu and Dahua Lin and Jiaqi Wang},
+       year={2025},
+       eprint={2508.04700},
+       archivePrefix={arXiv},
+       primaryClass={cs.AI},
+       url={https://arxiv.org/abs/2508.04700},
+ }
+ ```
+
+ ## License
+ ![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg) ![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)
+
+ **Usage and License Notices**: The code is licensed under the Apache 2.0 License. The data is licensed for research use only under the Attribution-NonCommercial 4.0 International (CC-BY-NC-4.0) License. Use of the data should also abide by OpenAI's terms of use: https://openai.com/policies/terms-of-use
+
+ ## Acknowledgement
+ We sincerely thank the [UI-TARS](https://github.com/bytedance/UI-TARS), [ScienceBoard](https://qiushisun.github.io/ScienceBoard-Home/), and [R1-V](https://github.com/Deep-Agent/R1-V) projects for providing their open-source resources.