---
license: apache-2.0
task_categories:
- question-answering
language:
- en
---

This is the repository for ViDoSeek, a benchmark designed for retrieval-reason-answer tasks over visually rich documents, well suited to evaluating RAG over a large document corpus.

The ViDoRAG paper is available on [arXiv](https://arxiv.org/abs/2502.18017).

ViDoSeek sets itself apart with its heightened difficulty level, attributed to the multi-document context and the intricate nature of its content types, particularly the Layout category. The dataset contains both single-hop and multi-hop queries, presenting a diverse set of challenges.

We have also released the SlideVQA dataset, refined through our pipeline, which we refer to as SlideVQA-Refined. This dataset is likewise suitable for evaluating retrieval-augmented generation tasks.

The annotation is provided as a JSON file:
```json
{
    "uid": "04d8bb0db929110f204723c56e5386c1d8d21587_2",
    // Unique identifier to distinguish different queries
    "query": "What is the temperature of Steam explosion of Pretreatment for Switchgrass and Sugarcane bagasse preparation?",
    // Query content
    "reference_answer": "195-205 Centigrade",
    // Reference answer to the query
    "meta_info": {
        "file_name": "Pretreatment_of_Switchgrass.pdf",
        // Original file name, typically a PDF file
        "reference_page": [10, 11],
        // Reference page numbers, given as an array
        "source_type": "Text",
        // Type of data source: 2d_layout, Text, Table, or Chart
        "query_type": "Multi-Hop"
        // Query type: Multi-Hop or Single-Hop
    }
}
```
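As a sketch, a record with this schema can be handled with standard JSON tooling. The inline record below mirrors the example above (with the explanatory comments removed, since comments are not valid JSON); the `is_multi_hop` helper is illustrative, not part of the dataset:

```python
import json

# One annotation record following the schema documented above.
# A real annotation file contains many such records.
record_json = """
{
  "uid": "04d8bb0db929110f204723c56e5386c1d8d21587_2",
  "query": "What is the temperature of Steam explosion of Pretreatment for Switchgrass and Sugarcane bagasse preparation?",
  "reference_answer": "195-205 Centigrade",
  "meta_info": {
    "file_name": "Pretreatment_of_Switchgrass.pdf",
    "reference_page": [10, 11],
    "source_type": "Text",
    "query_type": "Multi-Hop"
  }
}
"""

record = json.loads(record_json)

# Illustrative helper: split queries by hop type so single-hop and
# multi-hop subsets can be evaluated separately.
def is_multi_hop(rec):
    return rec["meta_info"]["query_type"] == "Multi-Hop"

print(is_multi_hop(record))  # True
print(record["meta_info"]["reference_page"])  # [10, 11]
```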

If you find this dataset useful, please consider citing our paper:
```bibtex
@misc{wang2025vidoragvisualdocumentretrievalaugmented,
      title={ViDoRAG: Visual Document Retrieval-Augmented Generation via Dynamic Iterative Reasoning Agents},
      author={Qiuchen Wang and Ruixue Ding and Zehui Chen and Weiqi Wu and Shihang Wang and Pengjun Xie and Feng Zhao},
      year={2025},
      eprint={2502.18017},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2502.18017},
}
```