---
license: cc-by-4.0
task_categories:
- depth-estimation
language:
- en
pretty_name: DurLAR Dataset - exemplar dataset (600 frames)
size_categories:
- n<1K
---

![DurLAR](https://github.com/l1997i/DurLAR/blob/main/head.png?raw=true)

# DurLAR: A High-Fidelity 128-Channel LiDAR Dataset

## News

- [2024/12/05] We provide the **intrinsic parameters** of our OS1-128 LiDAR [[download]](https://github.com/l1997i/DurLAR/raw/refs/heads/main/os1-128.json).

## Sensor placement

- **LiDAR**: [Ouster OS1-128 LiDAR sensor](https://ouster.com/products/os1-lidar-sensor/) with 128 channels of vertical resolution

- **Stereo Camera**: [Carnegie Robotics MultiSense S21 stereo camera](https://carnegierobotics.com/products/multisense-s21/) with grayscale, colour, and IR-enhanced imagers at 2048x1088 (2 MP) resolution

- **GNSS/INS**: [OxTS RT3000v3](https://www.oxts.com/products/rt3000-v3/) global navigation satellite and inertial navigation system, supporting localisation from the GPS, GLONASS, BeiDou, Galileo, PPP and SBAS constellations

- **Lux Meter**: [Yocto Light V3](http://www.yoctopuce.com/EN/products/usb-environmental-sensors/yocto-light-v3), a USB ambient light sensor (lux meter) measuring ambient light up to 100,000 lux

## Panoramic Imagery

<p align="center">
  <img src="https://github.com/l1997i/DurLAR/blob/main/reflect_center.gif?raw=true" width="100%"/>
  <h5 id="title" align="center">Reflectivity imagery</h5>
</p>

<p align="center">
  <img src="https://github.com/l1997i/DurLAR/blob/main/ambient_center.gif?raw=true" width="100%"/>
  <h5 id="title" align="center">Ambient imagery</h5>
</p>

## File Description

Each drive folder contains 8 topics for each frame in the DurLAR dataset:

- `ambient/`: panoramic ambient imagery
- `reflec/`: panoramic reflectivity imagery
- `image_01/`: right camera (grayscale + synced + rectified)
- `image_02/`: left RGB camera (synced + rectified)
- `ouster_points/`: Ouster LiDAR point cloud (KITTI-compatible binary format)
- `gps/`, `imu/`, `lux/`: CSV files

The structure of the provided DurLAR full dataset zip file is as follows:

```
DurLAR_<date>/
├── ambient/
│   ├── data/
│   │   └── <frame_number.png> [ ..... ]
│   └── timestamp.txt
├── gps/
│   └── data.csv
├── image_01/
│   ├── data/
│   │   └── <frame_number.png> [ ..... ]
│   └── timestamp.txt
├── image_02/
│   ├── data/
│   │   └── <frame_number.png> [ ..... ]
│   └── timestamp.txt
├── imu/
│   └── data.csv
├── lux/
│   └── data.csv
├── ouster_points/
│   ├── data/
│   │   └── <frame_number.bin> [ ..... ]
│   └── timestamp.txt
├── reflec/
│   ├── data/
│   │   └── <frame_number.png> [ ..... ]
│   └── timestamp.txt
└── readme.md [ this README file ]
```
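Since `ouster_points/` stores each frame in the KITTI-compatible binary format, a frame can be read as a flat float32 array. A minimal sketch, assuming the usual KITTI layout of four float32 values (x, y, z, intensity) per point:

```python
import numpy as np

def read_bin_point_cloud(path):
    """Read a KITTI-style .bin file into an (N, 4) array of x, y, z, intensity."""
    points = np.fromfile(path, dtype=np.float32)
    return points.reshape(-1, 4)

# Round-trip a tiny synthetic frame to demonstrate the layout
demo = np.array([[1.0, 2.0, 3.0, 0.5],
                 [4.0, 5.0, 6.0, 0.9]], dtype=np.float32)
demo.tofile("demo_frame.bin")
cloud = read_bin_point_cloud("demo_frame.bin")
print(cloud.shape)  # (2, 4)
```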

The structure of the provided calibration zip file is as follows:

```
DurLAR_calibs/
├── calib_cam_to_cam.txt   [ camera to camera calibration results ]
├── calib_imu_to_lidar.txt [ IMU to LiDAR calibration results ]
└── calib_lidar_to_cam.txt [ LiDAR to camera calibration results ]
```

## Get Started

- [Download the **calibration files**](https://github.com/l1997i/DurLAR/raw/main/DurLAR_calibs.zip)
- [Download the **calibration files** (v2, targetless)](https://github.com/l1997i/DurLAR/raw/main/DurLAR_calibs_v2.zip)
- [Download the **exemplar ROS bag** (for targetless calibration)](https://durhamuniversity-my.sharepoint.com/:f:/g/personal/mznv82_durham_ac_uk/Ei28Yy-Gb_BKoavvJ6R_jLcBfTZ_xM5cZhEFgMFNK9HhyQ?e=rxPgI9)
- [Download the **exemplar dataset** (600 frames)](https://collections.durham.ac.uk/collections/r2gq67jr192)
- [Download the **full dataset**](https://github.com/l1997i/DurLAR?tab=readme-ov-file#access-for-the-full-dataset) (fill in the form to request access)

> Note that [we did not include CSV header information](https://github.com/l1997i/DurLAR/issues/9) in the [**exemplar dataset** (600 frames)](https://collections.durham.ac.uk/collections/r2gq67jr192). Refer to [Header of `csv` files](https://github.com/l1997i/DurLAR?tab=readme-ov-file#header-of-csv-files) for the first line of each `csv` file.

> **Calibration files** (v2, targetless): following the publication of the DurLAR dataset and the corresponding paper, we identified a more advanced [targetless calibration method](https://github.com/koide3/direct_visual_lidar_calibration) ([#4](https://github.com/l1997i/DurLAR/issues/4)) that surpasses the LiDAR-camera calibration technique previously employed. We provide an [**exemplar ROS bag**](https://durhamuniversity-my.sharepoint.com/:f:/g/personal/mznv82_durham_ac_uk/Ei28Yy-Gb_BKoavvJ6R_jLcBfTZ_xM5cZhEFgMFNK9HhyQ?e=rxPgI9) for [targetless calibration](https://github.com/koide3/direct_visual_lidar_calibration), together with the corresponding [calibration results (v2)](https://github.com/l1997i/DurLAR/raw/main/DurLAR_calibs_v2.zip). Please refer to the [Appendix (arXiv)](https://arxiv.org/pdf/2406.10068) for more details.

### Access to the full dataset

Access to the complete DurLAR dataset can be requested through **one** of the following links:

[1. Request access to the full dataset (Google Form)](https://forms.gle/ZjSs3PWeGjjnXmwg9)

[2. Request access to the full dataset (form in Chinese)](https://wj.qq.com/s2/9459309/4cdd/)

### Usage of the downloading script

Upon completion of the form, the download script `durlar_download` and accompanying instructions will be provided **automatically**. The DurLAR dataset can then be downloaded via the command line.

On first use, the `durlar_download` file will most likely need to be made executable:

```bash
chmod +x durlar_download
```

By default, the script downloads the small subset for simple testing:

```bash
./durlar_download
```

It is also possible to select and download the various test drives:

```
usage: ./durlar_download [dataset_sample_size] [drive]
dataset_sample_size = [ small | medium | full ]
drive = 1 ... 5
```

Given the substantial size of the DurLAR dataset, please download the complete dataset only when necessary:

```bash
./durlar_download full 5
```

Throughout the download, your network connection must remain stable and uninterrupted. In the event of network issues, please delete all DurLAR dataset folders and rerun the download script. Currently, the script supports only Ubuntu (tested on Ubuntu 18.04 and 20.04, amd64). To download the DurLAR dataset on other operating systems, please refer to [Durham Collections](https://collections.durham.ac.uk/collections/r2gq67jr192) for instructions.

## CSV format for `imu`, `gps`, and `lux` topics

### Format description

Our `imu`, `gps`, and `lux` data are all in CSV format. The **first row** of each CSV file contains headers that **describe the meaning of each column**. Taking the `imu` CSV file as an example (only the first 9 columns are described):

1. `%time`: timestamps in Unix epoch format.
2. `field.header.seq`: sequence numbers.
3. `field.header.stamp`: header timestamps.
4. `field.header.frame_id`: frame of reference, labeled as "gps".
5. `field.orientation.x`: x-component of the orientation quaternion.
6. `field.orientation.y`: y-component of the orientation quaternion.
7. `field.orientation.z`: z-component of the orientation quaternion.
8. `field.orientation.w`: w-component of the orientation quaternion.
9. `field.orientation_covariance0`: covariance of the orientation data.

![image](https://github.com/l1997i/DurLAR/assets/35445094/18c1e563-c137-44ba-9834-345120026db0)
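For a quick sanity check of the orientation columns, the quaternion can be converted to a yaw angle with the standard library alone. A minimal sketch, assuming the usual (x, y, z, w) component ordering and the ZYX Euler convention:

```python
import math

def quaternion_to_yaw(x, y, z, w):
    """Yaw (rotation about the z-axis) of a unit quaternion, ZYX convention."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

# Identity quaternion: no rotation, so yaw is 0
print(quaternion_to_yaw(0.0, 0.0, 0.0, 1.0))  # 0.0

# 90-degree rotation about z
yaw = quaternion_to_yaw(0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
print(round(math.degrees(yaw), 1))  # 90.0
```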

### Header of `csv` files

The first line of each `csv` file is shown below.

For the GPS,
```csv
time,field.header.seq,field.header.stamp,field.header.frame_id,field.status.status,field.status.service,field.latitude,field.longitude,field.altitude,field.position_covariance0,field.position_covariance1,field.position_covariance2,field.position_covariance3,field.position_covariance4,field.position_covariance5,field.position_covariance6,field.position_covariance7,field.position_covariance8,field.position_covariance_type
```

For the IMU,
```csv
time,field.header.seq,field.header.stamp,field.header.frame_id,field.orientation.x,field.orientation.y,field.orientation.z,field.orientation.w,field.orientation_covariance0,field.orientation_covariance1,field.orientation_covariance2,field.orientation_covariance3,field.orientation_covariance4,field.orientation_covariance5,field.orientation_covariance6,field.orientation_covariance7,field.orientation_covariance8,field.angular_velocity.x,field.angular_velocity.y,field.angular_velocity.z,field.angular_velocity_covariance0,field.angular_velocity_covariance1,field.angular_velocity_covariance2,field.angular_velocity_covariance3,field.angular_velocity_covariance4,field.angular_velocity_covariance5,field.angular_velocity_covariance6,field.angular_velocity_covariance7,field.angular_velocity_covariance8,field.linear_acceleration.x,field.linear_acceleration.y,field.linear_acceleration.z,field.linear_acceleration_covariance0,field.linear_acceleration_covariance1,field.linear_acceleration_covariance2,field.linear_acceleration_covariance3,field.linear_acceleration_covariance4,field.linear_acceleration_covariance5,field.linear_acceleration_covariance6,field.linear_acceleration_covariance7,field.linear_acceleration_covariance8
```

For the LUX,
```csv
time,field.header.seq,field.header.stamp,field.header.frame_id,field.illuminance,field.variance
```

### To process the `csv` files

The `csv` files can be processed in several ways. For example,

**Python**: use the pandas library to read a CSV file:

```python
import pandas as pd

df = pd.read_csv('data.csv')
print(df)
```

**Text Editors**: simple text editors such as Notepad (Windows) or TextEdit (Mac) can also open `csv` files, though they are less suited to data analysis.
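For the exemplar dataset, where the `csv` files ship without the header row, the column names can be supplied explicitly. A minimal sketch for the `lux` file, using the header listed above (the sample row values here are made up for illustration):

```python
import pandas as pd

lux_columns = ["time", "field.header.seq", "field.header.stamp",
               "field.header.frame_id", "field.illuminance", "field.variance"]

# Write a tiny headerless sample, as shipped in the exemplar dataset
with open("lux_sample.csv", "w") as f:
    f.write("1626000000,0,1626000000,lux,1234.5,0.0\n")

# header=None tells pandas the first row is data, not column names
df = pd.read_csv("lux_sample.csv", header=None, names=lux_columns)
print(df["field.illuminance"].iloc[0])  # 1234.5
```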

## Folder #Frame Verification

For easy verification of folder data and integrity, we provide the number of frames in each drive folder, as well as the [MD5 checksums](https://collections.durham.ac.uk/collections/r2gq67jr192?utf8=%E2%9C%93&cq=MD5&sort=) of the zip files.

| Folder    | # of Frames |
|-----------|-------------|
| 20210716  | 41993       |
| 20210901  | 23347       |
| 20211012  | 28642       |
| 20211208  | 26850       |
| 20211209  | 25079       |
| **Total** | **145911**  |
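A downloaded zip can be checked against its published MD5 checksum with the standard library. A minimal sketch; the file name and content below are placeholders, not real DurLAR values:

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading in chunks to bound memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on a tiny stand-in file; substitute the downloaded zip and its listed MD5
with open("demo.zip", "wb") as f:
    f.write(b"DurLAR")
print(md5_of_file("demo.zip"))
```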

## Intrinsic Parameters of Our Ouster OS1-128 LiDAR

The intrinsic JSON file for our LiDAR can be downloaded at [this link](https://github.com/l1997i/DurLAR/raw/refs/heads/main/os1-128.json). For more information, see the [official OS1-128 user manual](https://data.ouster.io/downloads/software-user-manual/firmware-user-manual-v3.1.0.pdf).

Please note that **sensitive information, such as the serial number and unique device ID, has been redacted** (indicated as XXXXXXX).
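The intrinsic file is plain JSON, so it can be inspected with the standard library. A minimal sketch; the keys shown (`lidar_mode`, `beam_altitude_angles`) are assumptions based on the general Ouster firmware format, so check the downloaded file for the exact schema:

```python
import json

# Tiny stand-in for os1-128.json; the real file follows Ouster's firmware schema
with open("os1-128.json", "w") as f:
    json.dump({"lidar_mode": "1024x10",
               "beam_altitude_angles": [21.2, 0.0, -21.2]}, f)

with open("os1-128.json") as f:
    intrinsics = json.load(f)

print(intrinsics["lidar_mode"])                 # 1024x10
print(len(intrinsics["beam_altitude_angles"]))  # 3
```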

---

## Reference

If you make use of this work in any way (including our dataset and toolkits), please reference the following paper in any report, publication, presentation, software release or other associated materials:

[DurLAR: A High-fidelity 128-channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-modal Autonomous Driving Applications](https://dro.dur.ac.uk/34293/)
(Li Li, Khalid N. Ismail, Hubert P. H. Shum and Toby P. Breckon), In Int. Conf. 3D Vision, 2021. [[pdf](https://www.l1997i.com/assets/pdf/li21durlar_arxiv_compressed.pdf)] [[video](https://youtu.be/1IAC9RbNYjY)] [[poster](https://www.l1997i.com/assets/pdf/li21durlar_poster_v2_compressed.pdf)]

```bibtex
@inproceedings{li21durlar,
  author    = {Li, L. and Ismail, K.N. and Shum, H.P.H. and Breckon, T.P.},
  title     = {DurLAR: A High-fidelity 128-channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-modal Autonomous Driving Applications},
  booktitle = {Proc. Int. Conf. on 3D Vision},
  year      = {2021},
  month     = {December},
  publisher = {IEEE},
  keywords  = {autonomous driving, dataset, high-resolution LiDAR, flash LiDAR, ground truth depth, dense depth, monocular depth estimation, stereo vision, 3D},
  category  = {automotive 3Dvision},
}
```

---