---
license: apache-2.0
---
<div style="text-align:center;">
<strong>WWW2025. OntoTune: Ontology-Driven Self-training for Aligning Large Language Models</strong>
</div>
### 🔔 Introduction
1. This repository contains the model weights of [OntoTune$_{dpo}$](https://arxiv.org/abs/2502.05478), fine-tuned from Llama3 8B-Instruct; see the usage sketch after this list.
2. This work was supported by Ant Group and the Zhejiang University - Ant Group Joint Laboratory of Knowledge Graph.
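
Below is a minimal loading sketch using the Hugging Face `transformers` chat API. The repository id is a placeholder (substitute this repository's actual id), and the prompt is only illustrative; since the model is a Llama3 8B-Instruct fine-tune, it uses the standard chat template.

```python
# Minimal usage sketch, assuming a standard transformers setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-repo"  # placeholder: replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative prompt; OntoTune targets ontology-aligned responses.
messages = [{"role": "user", "content": "What is the hypernym of 'influenza'?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```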
### 📖 Citation
Please consider citing this paper if you find our work useful.
```bibtex
@inproceedings{DBLP:conf/www/LiuGWZBSC025,
  title     = {OntoTune: Ontology-Driven Self-training for Aligning Large Language Models},
  author    = {Zhiqiang Liu and
               Chengtao Gan and
               Junjie Wang and
               Yichi Zhang and
               Zhongpu Bo and
               Mengshu Sun and
               Huajun Chen and
               Wen Zhang},
  editor    = {Guodong Long and
               Michael Blumenstein and
               Yi Chang and
               Liane Lewin{-}Eytan and
               Zi Helen Huang and
               Elad Yom{-}Tov},
  booktitle = {Proceedings of the {ACM} on Web Conference 2025, {WWW} 2025, Sydney, NSW, Australia, 28 April 2025 - 2 May 2025},
  pages     = {119--133},
  publisher = {{ACM}},
  year      = {2025},
  url       = {https://doi.org/10.1145/3696410.3714816},
  doi       = {10.1145/3696410.3714816},
  timestamp = {Wed, 23 Apr 2025 16:35:50 +0200},
  biburl    = {https://dblp.org/rec/conf/www/LiuGWZBSC025.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```