---
model_name: Apertus-8B-Instruct-2509-AQUA-RAT-SFT
base_model: swiss-ai/Apertus-8B-Instruct-2509
license: apache-2.0
language:
- en
- multilingual
tags:
- multilingual
- compliant
- swiss-ai
- apertus
- fine-tuned
- aqua_rat
- text-generation
pipeline_tag: text-generation
library_name: transformers
author: Safouane El Ghazouali
author_email: [email protected]
model_creator: Safouane El Ghazouali
location: Switzerland
datasets:
- deepmind/aqua_rat
---
# safouaneelg/Apertus-8B-Instruct-2509-AQUA-RAT-SFT
Apertus has been released in two sizes: a 70B and an 8B parameter multilingual model.
Check out the model info here: [Swiss-AI/LLM](https://huggingface.co/collections/swiss-ai/apertus-llm-68b699e65415c231ace3b059)
# Fine-tuned on AQuA-RAT
This repo contains Apertus-8B-Instruct-2509 fine-tuned on the [AQuA-RAT dataset](https://huggingface.co/datasets/deepmind/aqua_rat).
The fine-tuning was performed with Unsloth on a single GPU (RTX A6000, 48 GB) using the following parameters (a rough sketch of the corresponding trainer setup follows the list):
- per_device_train_batch_size: 8
- gradient_accumulation_steps: 4 (effective batch size: 32)
- warmup_steps: 10
- num_train_epochs: 1
- learning_rate: 5e-5
- fp16/bf16: Enabled based on hardware support
- logging_steps: 1
- optimizer: adamw_8bit
- weight_decay: 0.01
- lr_scheduler_type: linear
- seed: 3407
- eval_strategy: steps
- eval_steps: 150
- packing: True
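For reference, below is a minimal sketch of how these hyperparameters could map onto an Unsloth + TRL `SFTTrainer` setup. This is not the exact training script: the prompt-formatting function, the dataset split names, and the precise `SFTTrainer` argument names (which vary with the TRL version) are assumptions.
```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load the base model in 4-bit (assumption: same loading settings as for inference below)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="swiss-ai/Apertus-8B-Instruct-2509",
    max_seq_length=2048,
    load_in_4bit=True,
)

# AQuA-RAT fields: question, options (list of strings), rationale, correct answer letter
dataset = load_dataset("deepmind/aqua_rat", "raw")

def format_example(example):
    # Hypothetical prompt layout; the formatting actually used for this checkpoint may differ
    options = "\n".join(example["options"])
    return {
        "text": f"Question: {example['question']}\nOptions: {options}\n"
                f"Rationale: {example['rationale']}\nAnswer: {example['correct']}"
    }

train_ds = dataset["train"].map(format_example)
eval_ds = dataset["validation"].map(format_example)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=8,
        gradient_accumulation_steps=4,  # effective batch size: 32
        warmup_steps=10,
        num_train_epochs=1,
        learning_rate=5e-5,
        bf16=True,                      # or fp16=True, depending on hardware support
        logging_steps=1,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        seed=3407,
        eval_strategy="steps",
        eval_steps=150,
        packing=True,
        output_dir="outputs",
    ),
)
trainer.train()
```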
## How to use
You can run this fine-tuned model by following the instructions below:
1. `transformers` 4.56.0 is required to run the model. Install it together with Unsloth (a quick version check follows the command):
```bash
pip install -U transformers unsloth
```
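You can quickly confirm the installed version:
```python
import transformers

print(transformers.__version__)  # expect 4.56.0
```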
2. I was only able to run the model after installing the xIELU activation function, which can (in principle) be installed via the command below.
```bash
pip install git+https://github.com/rubber-duck-debug/xielu
```
If you run into issues, check the xIELU installation notes in my other fine-tuned model ([safouaneelg/Apertus-8B-Instruct-2509-GSM8k-SFT](https://huggingface.co/safouaneelg/Apertus-8B-Instruct-2509-GSM8k-SFT)). A quick import check is shown below.
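As a sanity check you can verify that the extension imports (the module name `xielu` is an assumption based on the repository name; adjust it if your build exposes a different name):
```python
# Sanity check: confirm the xIELU extension can be imported.
# Assumption: the package installs under the module name `xielu`.
try:
    import xielu  # noqa: F401
    print("xIELU extension found")
except ImportError as err:
    print(f"xIELU extension not available: {err}")
```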
3. Run inference using either of the examples below:
   - Unsloth (first example; this works better. If you get a `StaticLayer` error, comment/uncomment the `prompt_lookup_num_tokens=None` argument.)
   - Transformers pipeline (second example)
Unsloth example:
```python
from unsloth import FastLanguageModel
import torch
# Load the model and tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="safouaneelg/Apertus-8B-Instruct-2509-AQUA-RAT-SFT",
    max_seq_length=2048,
    load_in_4bit=True,
)
# Select device (for reference; the inputs are moved with model.device below)
device = "cuda" if torch.cuda.is_available() else "cpu"
# Example prompt from AQUA-RAT
prompt = """Question: A grocery sells a bag of ice for $1.25, and makes 20% profit. If it sells 500 bags of ice, how much total profit does it make?
Options: A)125
B)150
C)225
D)250
E)275
Rationale:"""
messages_think = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages_think,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(
    **model_inputs,
    max_new_tokens=256,
    temperature=0.8,
    top_p=0.9,
    use_cache=True,
    do_sample=True,
    prompt_lookup_num_tokens=None,  # for some reason this sometimes resolves inference errors
)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
Transformers example:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name = "safouaneelg/Apertus-8B-Instruct-2509-AQUA-RAT-SFT"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
).to(device)
# prepare the model input
prompt = """Question: A grocery sells a bag of ice for $1.25, and makes 20% profit. If it sells 500 bags of ice, how much total profit does it make?
Options: A)125
B)150
C)225
D)250
E)275
Rationale:"""
messages_think = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages_think,
    tokenize=False,
    add_generation_prompt=True,
)
streamer = TextStreamer(tokenizer)
model_inputs = tokenizer([text], return_tensors="pt", add_special_tokens=False).to(model.device)
# Generate the output
generated_ids = model.generate(**model_inputs, streamer=streamer, max_new_tokens=2024)
# Get and decode the output
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```
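Since AQuA-RAT is a multiple-choice dataset, you may want to extract the chosen option from the generated rationale. Below is a minimal sketch that assumes the model ends its output with something like `Answer: A`; the actual output format can vary, so adjust the pattern as needed.
```python
import re

def extract_answer(generated_text):
    """Return the predicted option letter (A-E) from the model output, if present."""
    # Prefer an explicit "Answer: X" pattern; otherwise fall back to the last "X)" option mention.
    match = re.search(r"Answer\s*[:\-]?\s*\(?([A-E])\)?", generated_text, re.IGNORECASE)
    if match:
        return match.group(1).upper()
    letters = re.findall(r"\b([A-E])\)", generated_text)
    return letters[-1] if letters else None

print(extract_answer(generated_text))  # `generated_text` from the Unsloth example above
```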
## Citation
```bibtex
@misc{swissai2025apertus,
  title={{Apertus: Democratizing Open and Compliant LLMs for Global Language Environments}},
  author={Apertus Team},
  year={2025},
  howpublished={\url{https://huggingface.co/swiss-ai/Apertus-8B-Instruct-2509}}
}
```