Safetensors · English · qwen2_5_omni_thinker
sinwang committed 223823a (verified) · 1 parent: 053b5e1

Update README.md

Files changed (1): README.md (+43 −3)
README.md CHANGED
---
license: cc-by-nc-4.0
datasets:
- fnlp/OmniAction-LIBERO
language:
- en
base_model:
- fnlp/RoboOmni
---

<div align="center">
<h1>
RoboOmni: Proactive Robot Manipulation in Omni-modal Context
</h1>
</div>

![logo](https://cdn-uploads.huggingface.co/production/uploads/64c3c631e77ea9f28111172a/Lb55aSaitdpNl1iSC8xrm.png)

---

Recent advances in Multimodal Large Language Models (MLLMs) have driven rapid progress in Vision–Language–Action (VLA) models for robotic manipulation. Although effective in many scenarios, current approaches largely rely on explicit instructions, whereas in real-world interactions humans rarely issue instructions directly. Effective collaboration therefore requires robots to infer user intentions proactively.

In this work, we introduce *cross-modal contextual instructions*, a new setting in which intent is derived from spoken dialogue, environmental sounds, and visual cues rather than explicit commands. To address this setting, we present **RoboOmni**, a *Perceiver-Thinker-Talker-Executor* framework based on end-to-end omni-modal LLMs that unifies intention recognition, interaction confirmation, and action execution. RoboOmni fuses auditory and visual signals spatiotemporally for robust intention recognition, while supporting direct speech interaction.

To address the absence of training data for proactive intention recognition in robotic manipulation, we build **OmniAction**, a dataset comprising 140k episodes, 5k+ speakers, 2.4k event sounds, 640 backgrounds, and six contextual instruction types. Experiments in simulation and real-world settings show that RoboOmni surpasses text- and ASR-based baselines in success rate, inference speed, intention recognition, and proactive assistance.

---

## ⭐️ Architecture

At the heart of RoboOmni lies the Perceiver-Thinker-Talker-Executor architecture, which unifies vision, speech, and environmental sound in a single end-to-end framework for intention recognition, spoken confirmation, and robot action execution.

![WechatIMG2567](https://cdn-uploads.huggingface.co/production/uploads/64c3c631e77ea9f28111172a/z5hqTgAPU0BiFtdrKwq8A.jpeg)
38
+
39
+
40
+
41
+
42
+
43
+
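
## 🚀 Quickstart (unofficial sketch)

The snippet below is a minimal loading sketch, not an official example. It assumes the checkpoint is compatible with the Qwen2.5-Omni "thinker" classes in 🤗 Transformers (as the repo's `qwen2_5_omni_thinker` tag suggests) and uses `fnlp/RoboOmni` as a placeholder repo id; adjust both to match the actual release.

```python
# Unofficial sketch: assumes Qwen2.5-Omni thinker compatibility (per the
# qwen2_5_omni_thinker tag) and a placeholder repo id.
import torch
from transformers import (
    Qwen2_5OmniProcessor,
    Qwen2_5OmniThinkerForConditionalGeneration,
)

model_id = "fnlp/RoboOmni"  # placeholder; replace with the actual checkpoint id

model = Qwen2_5OmniThinkerForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = Qwen2_5OmniProcessor.from_pretrained(model_id)

# Text-only smoke test; real use would also pass audio/images through the processor.
inputs = processor(text="Describe the scene.", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

For omni-modal inputs (speech plus camera frames), the same processor accepts `audio` and `images` arguments; action decoding would follow whatever interface the RoboOmni Executor exposes, which this sketch does not cover.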