---
license: apache-2.0
task_categories:
- visual-question-answering
- image-to-text
language:
- en
tags:
- mobile-ui
- gui-grounding
- android
- ui-automation
- multimodal
size_categories:
- 10K<n<100K
---

# Android Control Dataset (LLaMA-Factory Format)

This dataset is a reformatted version of Google Research's Android Control dataset for use with [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). Each sample pairs a mobile UI screenshot with a natural-language instruction and the corresponding ground-truth UI action.

## Dataset Format

Each sample is a single-turn conversation: the user turn contains the screenshot and the instruction, and the assistant turn contains the target action encoded as a JSON string.

```json
{
  "messages": [
    {
      "role": "user",
      "content": "<image>Click on the Recording 2"
    },
    {
      "role": "assistant",
      "content": "{\"action_type\": \"click\", \"x\": 561, \"y\": 535}"
    }
  ],
  "images": ["and_ctrl/out_episode_18557_step_001.png"]
}
```

## Setup Instructions

To use these datasets in LLaMA-Factory:

1. **Create the image directory**:

   ```bash
   mkdir -p data/and_ctrl
   ```

2. **Download images**: Run the provided `download_android_control.ipynb` notebook to download and process the original images. The notebook will:
   - Download TFRecord files from Google Storage (`gs://gresearch/android_control/`)
   - Extract images and save them directly to the `and_ctrl/` directory
   - Organize images using the naming convention `out_episode_{episode_id}_step_{step_number}.png`
   - Generate an `and_ctrl.json` file with the processed data

3. **Dataset files**:
   - Images: stored in the `data/and_ctrl/` folder
   - Training dataset: `and_ctrl_train.json` in `data/datasets/`
   - Test dataset: `and_ctrl_test.json` in `data/datasets/`

## Dataset Statistics

**Total samples**: Train: 82,944 | Test: 904

| Action Type | Train | Test |
|-------------|-------|------|
| click | 51,793 (62.44%) | 125 (13.83%) |
| scroll | 11,005 (13.27%) | 125 (13.83%) |
| input_text | 5,966 (7.19%) | 125 (13.83%) |
| wait | 5,657 (6.82%) | 125 (13.83%) |
| open_app | 5,572 (6.72%) | 125 (13.83%) |
| navigate_back | 2,909 (3.51%) | 125 (13.83%) |
| long_press | 42 (0.05%) | 125 (13.83%) |
| navigate_home | 0 (0.00%) | 29 (3.21%) |

**Note**: The training set follows the natural action distribution, with clicks dominant (62.44%), while the test set is intentionally balanced, with most action types equally represented (~13.83% each). The `navigate_home` action appears only in the test set.

## Training Usage

These datasets are formatted for training multimodal language models to:

- Understand mobile UI screenshots
- Ground natural-language instructions to specific UI elements
- Generate precise action coordinates for UI automation
- Learn mobile app interaction patterns

## Source and Attribution

Original dataset: [Google Research Android Control](https://github.com/google-research/google-research/tree/master/android_control)

The Android Control dataset was created by Google Research to advance mobile UI understanding and automation research.

### License

This dataset is derived from Google Research's Android Control dataset, which is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0). The reformatted version for LLaMA-Factory retains the same Apache 2.0 license terms.

Copyright for the original dataset belongs to Google LLC. Any modifications or reformatting for LLaMA-Factory compatibility are also provided under the Apache License 2.0.

## Notes

- Images are referenced with relative paths starting with `and_ctrl/` (see the loading sketch below)
- Each action includes the action type and any necessary parameters (coordinates, text, direction, etc.)
- The test set can be used to evaluate model performance on unseen mobile UI interactions (see the scoring sketch below)
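As a quick sanity check of the layout above, the snippet below loads the training split, verifies that every referenced screenshot exists under `data/`, and tallies the action-type counts. It is a minimal sketch, not part of the dataset tooling: the top-level `messages` key and the `role`/`content` fields follow the example sample shown earlier, and the paths assume the `data/` layout from the setup instructions.

```python
import json
from collections import Counter
from pathlib import Path

DATA_DIR = Path("data")  # assumption: the LLaMA-Factory data/ layout described above


def load_samples(path):
    """Load a dataset file (a JSON list of samples)."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)


def action_distribution(samples):
    """Count action types taken from the assistant turn of each sample."""
    counts = Counter()
    for sample in samples:
        # The assistant message holds the target action as a JSON string,
        # e.g. '{"action_type": "click", "x": 561, "y": 535}'.
        assistant = next(m for m in sample["messages"] if m["role"] == "assistant")
        action = json.loads(assistant["content"])
        counts[action["action_type"]] += 1
    return counts


def missing_images(samples):
    """Return referenced screenshots that are not present under data/and_ctrl/."""
    return [img for s in samples for img in s["images"] if not (DATA_DIR / img).exists()]


if __name__ == "__main__":
    train = load_samples(DATA_DIR / "datasets" / "and_ctrl_train.json")
    print(f"{len(train)} training samples")
    for action_type, n in action_distribution(train).most_common():
        print(f"{action_type:>15}: {n} ({n / len(train):.2%})")
    print(f"missing images: {len(missing_images(train))}")
```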
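The last note mentions using the test split for evaluation. The sketch below parses a model-generated action string and compares it with the reference assistant turn; the matching rule (exact action-type match plus a 50-pixel tolerance for `click`/`long_press` coordinates) is an illustrative assumption, not an official metric of the Android Control benchmark.

```python
import json
import math


def parse_action(text):
    """Parse a model-generated action string into a dict; return None if it is not valid JSON."""
    try:
        return json.loads(text.strip())
    except json.JSONDecodeError:
        return None


def actions_match(pred, ref, radius=50):
    """Illustrative matching rule (assumption): action types must be identical, and
    click/long_press coordinates must lie within `radius` pixels of the reference."""
    if not pred or pred.get("action_type") != ref.get("action_type"):
        return False
    if ref["action_type"] in ("click", "long_press"):
        if "x" not in pred or "y" not in pred:
            return False
        return math.dist((pred["x"], pred["y"]), (ref["x"], ref["y"])) <= radius
    # For other action types (scroll, input_text, open_app, ...), compare the remaining fields.
    return all(pred.get(key) == value for key, value in ref.items() if key != "action_type")


# Example with the sample shown at the top of this card:
ref = parse_action('{"action_type": "click", "x": 561, "y": 535}')
pred = parse_action('{"action_type": "click", "x": 570, "y": 540}')
print(actions_match(pred, ref))  # True: within 50 pixels of the reference click
```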