- [Overview](#overview)
- [Preparation](#preparation)
- [Data Selection](#data_selection)
- [Training](#training)
- [Citation](#citation)
## Overview

Long-context modeling has drawn increasing attention in the area of Large Language Models (LLMs). Continual training with long-context data has become the de facto method for equipping LLMs with the ability to process long inputs. However, measuring the quality of long-context training data remains an open challenge. To address this issue, we propose a **L**ong-context data selection framework with **A**ttention-based **D**ependency **M**easurement (**LADM**), which can efficiently identify high-quality long-context data from a large-scale, multi-domain pre-training corpus. LADM leverages the retrieval capabilities of the attention mechanism to capture contextual dependencies, ensuring a comprehensive quality measurement of long-context data. Experimental results show that our LADM framework significantly boosts the performance of LLMs on multiple long-context tasks with only 1B tokens for continual training.
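To give a rough intuition for attention-based dependency measurement, here is a toy sketch (not the paper's implementation): given a causal attention matrix for a document, score the document by how much attention mass falls on long-range context rather than on a small local window. The function name, window size, and scoring rule below are all hypothetical.

```python
import numpy as np

def dependency_score(attn: np.ndarray, local_window: int = 32) -> float:
    """Toy long-range dependency score for one document.

    `attn` is a (seq_len, seq_len) causal attention matrix whose rows
    sum to 1. A document whose tokens attend mostly to context further
    than `local_window` positions back shows stronger contextual
    dependency, which is the kind of signal LADM-style selection uses.
    """
    seq_len = attn.shape[0]
    long_range = 0.0
    for i in range(seq_len):
        # attention mass this token places beyond the local window
        long_range += attn[i, : max(0, i - local_window)].sum()
    return long_range / seq_len
```

Documents could then be ranked by this score and the top fraction kept for continual training; the actual LADM measurement in this repo may differ in how attention is aggregated across heads and layers.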
<a name="preparation"></a>
For full usage:

```
bash launch.sh
```
<a name="training"></a>

## Training

Our training mainly follows the [Hugging Face Trainer](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) code base. Please refer to that repo for more details.
<a name="citation"></a>

## Citation

If you find this repo useful for your research, please consider citing the paper:

```
  journal={arXiv preprint arXiv:2503.02502},
  year={2025}
}
```