
These are the official weights (4-bit quantized) of LightMamba.
Our paper, LightMamba: Efficient Mamba Acceleration on FPGA with Quantization and Hardware Co-design, is available at https://arxiv.org/abs/2502.15260.
Our code is available on GitHub: https://github.com/PKU-SEC-Lab/LightMamba.

If our work assists your research, please cite us with:

@inproceedings{wei2025lightmamba,
  title={{LightMamba}: Efficient {Mamba} Acceleration on {FPGA} with Quantization and Hardware Co-design},
  author={Wei, Renjie and Xu, Songqiang and Zhong, Linfeng and Yang, Zebin and Guo, Qingyu and Wang, Yuan and Wang, Runsheng and Li, Meng},
  booktitle={2025 Design, Automation \& Test in Europe Conference (DATE)},
  pages={1--7},
  year={2025},
  organization={IEEE}
}
