Update README.md
README.md CHANGED
@@ -130,16 +130,20 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 
 
 
+## Acknowledgments
+
+We thank [@Antizana](https://github.com/Antizana) for the KV cache fix merged from [ouro-cache-fix](https://github.com/Antizana/ouro-cache-fix), which resolved a critical compatibility issue with transformers>=4.56.0.
+
 ## Citation
 
 ```bibtex
-@article{
+@article{zhu2025scaling,
 title={Scaling Latent Reasoning via Looped Language Models},
-author={Zhu, Rui-Jie and Wang, Zixuan and Hua, Kai and Zhang, Tianyu and Li, Ziniu and Que, Haoran and Wei, Boyi and
-journal={arXiv preprint},
+author={Zhu, Rui-Jie and Wang, Zixuan and Hua, Kai and Zhang, Tianyu and Li, Ziniu and Que, Haoran and Wei, Boyi and Wen, Zixin and Yin, Fan and Xing, He and others},
+journal={arXiv preprint arXiv:2510.25741},
 year={2025}
 }
 ```
 
 ## License
 
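Since the acknowledged fix matters only for a minimum transformers release, a quick version check can tell a user whether it applies to their environment. This is a minimal sketch, assuming `packaging` is installed (it ships as a transformers dependency); it is not code from this commit:

```python
# Minimal sketch (assumption, not part of this commit): report whether the
# installed transformers release falls in the range the KV cache fix targets.
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("transformers"))
if installed >= Version("4.56.0"):
    print(f"transformers {installed}: affected range, the merged fix applies")
else:
    print(f"transformers {installed}: predates the incompatibility")
```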