---
configs:
- config_name: default
  data_files:
  - split: all
    path: table_data/all_AOE_tables.jsonl
---

# ๐Ÿ† AOE: Arranged and Organized Extraction Benchmark

**๐Ÿ“š For full reproducibility, all source code is available in our [GitHub repository](https://github.com/tianyumyum/AOE).**

> **๐ŸŽฏ Challenge**: Can AI models construct structured tables from complex, real-world documents? AOE tests this critical capability across legal, financial, and academic domains.

## ๐Ÿš€ What is AOE?

The **AOE (Arranged and Organized Extraction) Benchmark** addresses a critical gap in existing text-to-table evaluation frameworks. Unlike synthetic benchmarks, AOE challenges modern LLMs with **authentic, complex, and practically relevant** data extraction tasks.

> ๐Ÿ’ฅ **Why "AOE"?** Like Area of Effect damage in gaming that impacts everything within range, our benchmark reveals that current AI models struggle across *all* aspects of structured extraction - from basic parsing to complex reasoning. No model escapes unscathed!

### 🎯 Core Innovation

**Beyond Isolated Information**: AOE doesn't just test information retrieval; it evaluates models' ability to:
- 🧠 **Understand** complex task requirements and construct appropriate schemas
- 🔍 **Locate** scattered information across multiple lengthy documents
- 🏗️ **Integrate** diverse data points into coherent, structured tables
- 🧮 **Perform** numerical reasoning and cross-document analysis

### 📊 Key Statistics

| Metric | Value |
|--------|-------|
| **Total Tasks** | 373 benchmark instances |
| **Domains** | 3 (Legal, Financial, Academic) |
| **Document Sources** | 100% real-world, authentic content |
| **Total Documents** | 1,914 source documents |
| **Languages** | English & Chinese |

#### 📈 Detailed Domain Statistics

| Domain | Language | Tables | Documents | Avg Tokens/Table | Docs per Table (avg/max) |
|--------|----------|--------|-----------|------------------|--------------------------|
| **Academic** | EN | 74 | 257 | 69k | 3.5/5 |
| **Financial** | ZH, EN | 224 | 944 | 437k | 4.2/5 |
| **Legal** | ZH | 75 | 713 | 7k | 9.6/13 |



## ๐Ÿ“ Dataset Structure

```python
{
    "record_id": "academic_10_0_en",
    "query": "Identify possible citation relationships among the following articles...",
    "doc_length": {
        "paper_1.md": 141566,         # Character count per document
        "paper_2.md": 885505,
        "paper_3.md": 48869,
        "paper_4.md": 65430,
        "paper_5.md": 53987
    },
    "table_schema": {               # Dynamic schema definition
        "columns": [
            {"name": "Cited paper title", "about": "the name of the paper"},
            {"name": "Referencing paper title", "about": "Referencing paper title"},
            {"name": "Referenced content", "about": "the context of the cited paper"},
            {"name": "Label", "about": "reference type: background/methodology/additional"}
        ]
    },
    "answers": [                    # Ground truth structured output
        {
            "Cited paper title": "Large Language Model Is Not a Good Few-shot Information Extractor...",
            "Referencing paper title": "What Makes Good In-Context Examples for GPT-3?",
            "Referenced content": "(2) Sentence-embedding (Liu et al., 2022; Su et al., 2022): retrieving...",
            "Label": "background"
        }
    ]
}
```
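As a minimal sketch of how this schema-driven format can be consumed (the `answers_to_csv` helper below is our own illustration, not part of the dataset tooling), the column order in `table_schema` can be used to serialize the `answers` list into CSV text:

```python
import csv
import io

def answers_to_csv(record):
    """Serialize a record's ground-truth answers into CSV text,
    using the column order defined by its table_schema."""
    columns = [col["name"] for col in record["table_schema"]["columns"]]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns, extrasaction="ignore")
    writer.writeheader()
    for row in record["answers"]:
        writer.writerow(row)
    return buf.getvalue()

# Tiny synthetic record in the shape shown above (not a real benchmark entry)
record = {
    "table_schema": {"columns": [{"name": "Cited paper title"}, {"name": "Label"}]},
    "answers": [{"Cited paper title": "Example Paper", "Label": "background"}],
}
print(answers_to_csv(record))
```

The same helper works for model predictions, which makes it easy to compare predicted and gold tables cell by cell.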


## ๐Ÿญ Data Sources & Domains

<div align="center">
  <img src="fig_data_process-0516-v4.jpg" alt="AOE Benchmark Construction Process" width="800">
  <p><em>Figure: AOE benchmark construction pipeline from raw documents to structured evaluation tasks</em></p>
</div>

### 📚 **Academic Domain**
- **Sources**: Semantic Scholar, Papers With Code
- **Content**: Research papers, citation networks, performance leaderboards
- **Tasks**: Citation relationship extraction, methodology performance analysis

### 💰 **Financial Domain**
- **Source**: CNINFO (China's official financial disclosure platform)
- **Content**: Annual reports (2020-2023) from A-share listed companies
- **Tasks**: Longitudinal financial analysis, cross-company comparisons

### ⚖️ **Legal Domain**
- **Sources**: People's Court Case Library, National Legal Database
- **Content**: Chinese civil law judgments, official statutes  
- **Tasks**: Legal provision retrieval, defendant verdict extraction



## 🎯 Benchmark Tasks Overview

### 📊 Task Categories

| Domain | Task ID | Description | Challenge Level |
|--------|---------|-------------|-----------------|
| **Academic** | $Aca_0$ | Citation Context Extraction | 🔥🔥🔥 |
| | $Aca_1$ | Methodology Performance Extraction | 🔥🔥 |
| **Legal** | $Legal_0$ | Legal Provision Retrieval | 🔥🔥🔥🔥 |
| | $Legal_1$ | Defendant Verdict Extraction | 🔥🔥🔥 |
| **Financial** | $Fin_{0-3}$ | Single-Company Longitudinal Analysis | 🔥🔥 |
| | $Fin_{4-6}$ | Multi-Company Comparative Analysis | 🔥🔥🔥 |

### ๐Ÿ—๏ธ Data Processing Pipeline

- **๐Ÿ“„ Document Preservation**: Advanced parsing with `markitdown`, `Marker`, and OCR
- **๐Ÿท๏ธ Human-in-the-Loop**: Expert annotation for legal document processing  
- **โœ… Quality Assurance**: Multi-stage validation ensuring accuracy and completeness

## 💡 Example Tasks

### ⚖️ Legal Analysis Example
**Task**: Extract structured verdict information from complex trademark infringement cases

<details>
<summary><strong>📋 View Ground Truth Table</strong></summary>

**Input Query**: "作为法律文本分析专家，请按照指定格式从判决信息中准确提取每位被告的最终判决结果" (As a legal-text analysis expert, extract each defendant's final verdict from the judgment documents in the specified format.)

**Source Documents**: complex legal cases (678-2391 tokens each)

```csv
案件名,被告,罪名,刑期,缓刑,处罚金,其他判决
刘某假冒注册商标案,刘某,假冒注册商标罪,有期徒刑四年,,处罚金人民币十五万元,扣押车辆、手机等变价抵作罚金
欧某辉、张某妹假冒注册商标案,欧某辉,假冒注册商标罪,有期徒刑五年六个月,,处罚金人民币六十五万元,追缴违法所得100.6583万元
谢某某甲等假冒注册商标案,谢某某甲,无罪,,,,
马某华等假冒注册商标案,马某华,假冒注册商标罪,有期徒刑六年,,处罚金人民币六百八十万元,
……
```

**Challenge**: Models must parse complex legal language from multiple case documents (avg 9.6 docs per table), handle joint defendant cases with up to 16 defendants per case, distinguish between different verdict outcomes (guilty vs. acquitted), and extract structured information from unstructured legal narratives involving trademark infringement worth millions.

</details>


### 📚 Academic Analysis Example
**Task**: Extract methodology performance from research papers on WikiText-103 dataset

<details>
<summary><strong>📊 View Ground Truth Table</strong></summary>

**Input Query**: "List the Test perplexity performance of the proposed methods in the paper on the WikiText-103 dataset."

**Source Documents**: research papers (36k-96k tokens each)

```csv
paper_name,method,result,models_and_settings
Primal-Attention: Self-attention through Asymmetric Kernel SVD,Primal.+Trans.,31,
Language Modeling with Gated Convolutional Networks,GCNN-8,44.9,
GATELOOP: FULLY DATA-CONTROLLED LINEAR RECURRENCE,GateLoop,13.4,
```

**Challenge**: Models must parse complex academic papers, identify specific methodologies, locate performance tables, and extract numerical results while handling various formatting styles.

</details>

### ๐Ÿฆ Financial Analysis Example  
**Task**: Extract and compare financial metrics across multiple company annual reports

<details>
<summary><strong>📊 View Ground Truth Table</strong></summary>

```csv
Company,Revenue (CNY),Net Profit (CNY),Operating Cash Flow (CNY)
Gree Electric,203979266387,29017387604,56398426354
Midea Group,372037280000,33719935000,57902611000
Haier Smart Home,261427783050,16596615046,25262376228
TCL Technology,174366657015,4781000000,25314756105
GONGNIU GROUP,15694755600,3870135376,4827282090
```

**Challenge**: Models must locate financial data scattered across lengthy annual reports (avg 437k tokens), handle different formatting conventions, and ensure numerical accuracy across multiple documents.

</details>



## 🔬 Research Applications

### 🎯 Ideal for Evaluating:
- **Multi-document Understanding**: Information synthesis across long-form texts
- **Schema Construction**: Dynamic table structure generation
- **Domain Adaptation**: Performance across specialized fields
- **Numerical Reasoning**: Financial calculations and quantitative analysis
- **Cross-lingual Capabilities**: English and Chinese document processing

### 📈 Benchmark Insights:
- **Even SOTA models struggle**: Best performers achieve only ~68% accuracy
- **Domain specificity matters**: Performance varies significantly across fields  
- **Length matters**: Document complexity correlates with task difficulty
- **RAG limitations revealed**: Standard retrieval often fails for structured tasks


## 🚀 Getting Started

### Quick Usage
```python
from datasets import load_dataset

# Load the complete benchmark
dataset = load_dataset("tianyumyum/AOE")

# Access specific splits
all_tasks = dataset["all"]

# Example task
task = all_tasks[0]
print(f"Documents: {len(task['doc_length'])}")
print(f"Expected output: {task['answers']}")
```
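Since all 373 tasks live in a single `all` split, domain subsets can be recovered from the `record_id` prefix (an assumption based on IDs like `academic_10_0_en`; the `by_domain` helper and the second sample ID below are illustrative, not taken from the dataset):

```python
def by_domain(tasks, domain):
    """Filter benchmark records whose record_id starts with a domain prefix."""
    return [t for t in tasks if t["record_id"].startswith(domain)]

# Synthetic stand-ins for dataset records (only record_id shown)
tasks = [
    {"record_id": "academic_10_0_en"},
    {"record_id": "legal_3_1_zh"},  # hypothetical ID for illustration
]
print(len(by_domain(tasks, "academic")))  # 1
```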

### 📊 Evaluation Framework
AOE provides a comprehensive 3-tier evaluation system:
1. **🎯 CSV Parsability**: Basic structure compliance (Pass Rate)
2. **🏆 Overall Quality**: LLM-assessed holistic evaluation (0-100%)
3. **🔬 Cell-Level Accuracy**: Granular content precision (F1 score)
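The official scoring scripts live in the GitHub repository; as a rough, unofficial illustration of the cell-level idea, the sketch below computes an F1 over the multiset of (column, value) cells in a predicted vs. gold table (`cell_f1` and its exact-match rule are our own simplification, not the benchmark's metric):

```python
from collections import Counter

def cell_f1(pred_rows, gold_rows):
    """Simplified cell-level F1: compare the multiset of (column, value)
    cells in the predicted table against the gold table."""
    pred = Counter((k, str(v).strip()) for row in pred_rows for k, v in row.items())
    gold = Counter((k, str(v).strip()) for row in gold_rows for k, v in row.items())
    tp = sum((pred & gold).values())  # multiset intersection = matched cells
    if tp == 0:
        return 0.0
    precision = tp / sum(pred.values())
    recall = tp / sum(gold.values())
    return 2 * precision * recall / (precision + recall)

# Toy example: one of two cells matches
gold = [{"Company": "Gree Electric", "Revenue (CNY)": "203979266387"}]
pred = [{"Company": "Gree Electric", "Revenue (CNY)": "0"}]
print(cell_f1(pred, gold))  # 1 of 2 cells correct, so P = R = F1 = 0.5
```

The real metric will additionally need to handle header normalization and row alignment, which this sketch deliberately omits.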



## 🤝 Contributing & Support

- 🐛 **Issues**: [GitHub Issues](https://github.com/tianyumyum/AOE/issues)
- 💬 **Discussions**: [GitHub Discussions](https://github.com/tianyumyum/AOE/discussions)

<div align="center">

**⭐ Star our [GitHub repo](https://github.com/tianyumyum/AOE) if you find AOE useful! ⭐**

*Pushing the boundaries of structured knowledge extraction* 🚀

</div>