Commit b29e6d4 (verified, parent 18c35aa) by nkandpa2: Create README.md
---
task_categories:
- text-generation
language:
- en
pretty_name: Project Gutenberg
---
# Project Gutenberg

## Description
[Project Gutenberg](https://www.gutenberg.org) is an online collection of over 75,000 digitized books available as plain text.
We use all books that are 1) in English and 2) marked as in the Public Domain according to the provided metadata.
Additionally, we include any books that are part of the [PG19](https://huggingface.co/datasets/deepmind/pg19) dataset, which only includes books that are over 100 years old.
Minimal preprocessing is applied to remove the Project Gutenberg headers and footers, but many scanned books include preamble information about who digitized them.

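Project Gutenberg plain-text files typically bracket the book body with `*** START OF ... ***` and `*** END OF ... ***` marker lines. The following is a minimal sketch of this kind of header/footer stripping; the regular expressions and function name are illustrative assumptions, not the actual preprocessing code used to build this dataset.

```python
import re

# Illustrative marker patterns; real files vary ("THE" vs "THIS", casing, etc.),
# which is one reason residual preamble text can survive in the dataset.
START_RE = re.compile(
    r"\*\*\*\s*START OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*?\*\*\*", re.IGNORECASE
)
END_RE = re.compile(
    r"\*\*\*\s*END OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*?\*\*\*", re.IGNORECASE
)

def strip_gutenberg_boilerplate(text: str) -> str:
    """Return the text between the START and END markers, if both are present."""
    start = START_RE.search(text)
    end = END_RE.search(text)
    if start and end and start.end() < end.start():
        return text[start.end():end.start()].strip()
    return text  # fall back to the unmodified text when markers are missing
```

Note that a marker-based approach leaves any digitizer preamble that appears *after* the START marker, which matches the caveat above.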
## Dataset Statistics
| Documents | UTF-8 GB |
|-----------|----------|
| 71,810    | 26.2     |

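As a quick back-of-the-envelope check on these numbers, 26.2 GB across 71,810 documents works out to roughly 365 KB of UTF-8 text per book, consistent with full-length books rather than short snippets:

```python
# Average document size implied by the statistics table.
documents = 71_810
utf8_bytes = 26.2e9  # 26.2 GB of UTF-8 text

avg_kb_per_doc = utf8_bytes / documents / 1e3
print(f"~{avg_kb_per_doc:.0f} KB of text per book on average")  # ~365 KB
```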
## License Issues
While we aim to produce datasets with completely accurate licensing information, license laundering and inaccurate metadata can cause us to erroneously assign an incorrect license to some documents (for further discussion of this limitation, please see [our paper](TODO link)). If you believe you have found an instance of incorrect licensing in this dataset, please [start a discussion](https://github.com/r-three/common-pile/discussions/new) on this repository.

## Other Versions
This is the "raw" version of the Project Gutenberg dataset.
If you are looking for the filtered version used to train [Comma v0.1](https://huggingface.co/common-pile/comma-v0.1), you can find it [here](https://huggingface.co/datasets/common-pile/project_gutenberg_filtered).

## Citation
If you use this dataset, please cite:
```bibtex
@article{kandpal2025common,
  title={{The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text}},
  author={Nikhil Kandpal and Brian Lester and Colin Raffel and Sebastian Majstorovic and Stella Biderman and Baber Abbasi and Luca Soldaini and Enrico Shippole and A. Feder Cooper and Aviya Skowron and Shayne Longpre and Lintang Sutawika and Alon Albalak and Zhenlin Xu and Guilherme Penedo and Loubna Ben and Elie Bakouch and John David and Honglu Fan and Dashiell Stander and Guangyu Song and Aaron Gokaslan and John Kirchenbauer and Tom Goldstein and Brian R and Bhavya Kailkhura and Tyler Murray},
  journal={arXiv preprint},
  year={2025}
}
```
```bibtex
@article{raecompressive2019,
  author = {Rae, Jack W and Potapenko, Anna and Jayakumar, Siddhant M and Hillier, Chloe and Lillicrap, Timothy P},
  title = {Compressive Transformers for Long-Range Sequence Modelling},
  journal = {arXiv preprint},
  url = {https://arxiv.org/abs/1911.05507},
  year = {2019},
}
```