danielhanchen committed on
Commit ed56e19 · verified · 1 Parent(s): a9ee7ba

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
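The rule added above follows the standard `.gitattributes` form: a path pattern followed by whitespace-separated attributes. A minimal, illustrative parser for such a line (not part of this repository):

```python
def parse_gitattributes_line(line):
    """Split a .gitattributes rule into (pattern, {attr: value}).

    Bare attributes like '-text' have no '=value' part and are mapped to True.
    """
    pattern, *attrs = line.split()
    parsed = {}
    for attr in attrs:
        key, _, value = attr.partition("=")
        parsed[key] = value or True
    return pattern, parsed

pattern, attrs = parse_gitattributes_line(
    "tokenizer.json filter=lfs diff=lfs merge=lfs -text"
)
```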
README.md ADDED
@@ -0,0 +1,186 @@
---
license: apache-2.0
base_model:
- swiss-ai/Apertus-70B-2509
pipeline_tag: text-generation
library_name: transformers
tags:
- multilingual
- compliant
- swiss-ai
- apertus

extra_gated_prompt: "### Apertus LLM Acceptable Use Policy \n(1.0 | September 1, 2025)\n\"Agreement\" The Swiss National AI Institute (SNAI) is a partnership between the two Swiss Federal Institutes of Technology, ETH Zurich and EPFL. \n\nBy using the Apertus LLM you agree to indemnify, defend, and hold harmless ETH Zurich and EPFL against any third-party claims arising from your use of Apertus LLM. \n\nThe training data and the Apertus LLM may contain or generate information that directly or indirectly refers to an identifiable individual (Personal Data). You process Personal Data as independent controller in accordance with applicable data protection law. SNAI will regularly provide a file with hash values for download which you can apply as an output filter to your use of our Apertus LLM. The file reflects data protection deletion requests which have been addressed to SNAI as the developer of the Apertus LLM. It allows you to remove Personal Data contained in the model output. We strongly advise downloading and applying this output filter from SNAI every six months following the release of the model. "
extra_gated_fields:
  Your Name: text
  Country: country
  Affiliation: text
  geo: ip_location
  By clicking Submit below I accept the terms of use: checkbox
extra_gated_button_content: Submit
---

# Apertus

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6639f08490b7db8dcbf1a2aa/YKux3SpTciL4O60L3Ol-6.jpeg)

## Table of Contents

1. [Model Summary](#model-summary)
2. [How to use](#how-to-use)
3. [Evaluation](#evaluation)
4. [Training](#training)
5. [Limitations](#limitations)
6. [Legal Aspects](#legal-aspects)

## Model Summary

Apertus is a family of 70B- and 8B-parameter language models designed to push the boundaries of fully open, multilingual, and transparent models. The models support over 1000 languages and long context, use only compliant and openly available training data, and achieve performance comparable to models trained behind closed doors.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/654baf61d625e083383dfd00/gKDv_6dpIpvmgyquenbXt.png)

The model is a decoder-only transformer, pretrained on 15T tokens with a staged curriculum of web, code, and math data. It uses the new xIELU activation function and is trained from scratch with the AdEMAMix optimizer. Post-training included supervised fine-tuning and alignment via QRPO.

### Key features
- **Fully open model**: open weights + open data + full training details, including all data and training recipes
- **Massively multilingual**: 1811 natively supported languages
- **Compliant**: Apertus is trained while respecting the opt-out consent of data owners (even retrospectively) and avoiding memorization of training data

For more details, refer to our [technical report](https://arxiv.org/abs/2509.14233).

## How to use

The modeling code for Apertus is available in transformers `v4.56.0` and later, so make sure to upgrade your transformers version. You can also load the model with the latest `vLLM`, which uses transformers as a backend.
```bash
pip install -U transformers
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "swiss-ai/Apertus-70B-Instruct-2509"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
).to(device)

# prepare the model input
prompt = "Give me a brief explanation of gravity in simple terms."
messages_think = [
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages_think,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt", add_special_tokens=False).to(model.device)

# Generate the output
generated_ids = model.generate(**model_inputs, max_new_tokens=32768)

# Decode only the newly generated tokens
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```

> [!TIP]
> We recommend setting `temperature=0.8` and `top_p=0.9` in the sampling parameters.
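For intuition, `top_p` (nucleus sampling) keeps only the smallest set of highest-probability tokens whose cumulative mass reaches the threshold, after temperature scaling. A minimal, illustrative sketch of that filtering step (not the transformers implementation):

```python
import math

def top_p_filter(logits, top_p=0.9, temperature=0.8):
    """Return renormalized probabilities over the smallest set of tokens
    whose cumulative probability reaches top_p, after temperature scaling."""
    probs = [math.exp(l / temperature) for l in logits]
    total = sum(probs)
    probs = [p / total for p in probs]
    # sort token indices by probability, highest first
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    # renormalize over the kept tokens; sampling would then draw from this
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}
```

With a sharply peaked distribution the filter keeps a single token; with a flat one it keeps nearly all of them, which is why `top_p` adapts better than a fixed `top_k`.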

### Long context processing

Apertus supports a context length of up to 65,536 tokens by default.
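Since prompt and completion share this window, a quick budget check before calling `generate` can be sketched as follows (illustrative only; the limit is taken from the model configuration):

```python
MAX_CONTEXT = 65_536  # Apertus context window, from the model config

def max_new_tokens_for(prompt_len, limit=MAX_CONTEXT):
    """Largest max_new_tokens that still fits alongside the prompt."""
    return max(0, limit - prompt_len)
```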

### Agentic Usage

Apertus supports tool use: pass a `tools` list to `apply_chat_template` and the model can emit structured tool calls.
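The chat template shipped with this repository renders OpenAI-style function specs (a `function` object with `name`, `description`, and JSON-schema `parameters`). A minimal sketch of such a tools list, using a hypothetical `get_weather` function:

```python
# Hypothetical tool definition in the OpenAI-style schema that the
# chat template's tool rendering consumes.
get_weather = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name."},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}
tools = [get_weather]
# tools would then be passed as: tokenizer.apply_chat_template(messages, tools=tools, ...)
```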

### Deployment

Deployment of the models is directly supported by the newest versions of [Transformers](https://github.com/huggingface/transformers), [vLLM](https://github.com/vllm-project/vllm), and [SGLang](https://github.com/sgl-project/sglang); the models can also run on-device with [MLX](https://github.com/ml-explore/mlx-lm).

## Evaluation

**Pretraining Evaluation:** Performance (%) of Apertus models on *general language understanding* tasks (higher is better) compared to other pretrained models.

| **Model** | **Avg** | **ARC** | **HellaSwag** | **WinoGrande** | **XNLI** | **XCOPA** | **PIQA** |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **Fully Open Models** | | | | | | | |
| **Apertus-8B** | 65.8 | 72.7 | 59.8 | 70.6 | 45.2 | 66.5 | 79.8 |
| **Apertus-70B** | 67.5 | 70.6 | 64.0 | 73.3 | 45.3 | 69.8 | 81.9 |
| OLMo2-7B | 64.0 | 72.9 | 60.4 | 74.5 | 40.4 | 55.2 | 80.9 |
| OLMo2-32B | 67.7 | 76.2 | 66.7 | 78.6 | 42.9 | 60.1 | 82.1 |
| EuroLLM-1.7B | 54.8 | 57.2 | 44.9 | 58.1 | 40.7 | 55.7 | 72.4 |
| EuroLLM-9B | 62.8 | 67.9 | 57.9 | 68.8 | 41.5 | 61.1 | 79.6 |
| SmolLM2-1.7B | 58.5 | 66.1 | 52.4 | 65.6 | 37.6 | 52.3 | 77.0 |
| SmolLM3-3B | 61.6 | 68.6 | 56.4 | 68.1 | 40.5 | 58.2 | 77.7 |
| Poro-34B | 61.7 | 65.7 | 57.9 | 70.6 | 41.6 | 56.0 | 78.5 |
| **Open-Weight Models** | | | | | | | |
| Llama3.1-8B | 65.4 | 71.6 | 60.0 | 73.4 | 45.3 | 61.8 | 80.1 |
| Llama3.1-70B | 67.3 | 74.4 | 56.5 | 79.4 | 44.3 | 66.7 | 82.3 |
| Qwen2.5-7B | 64.4 | 69.6 | 60.1 | 72.8 | 43.3 | 61.7 | 78.7 |
| Qwen2.5-72B | 69.8 | 76.2 | 67.5 | 78.0 | 46.9 | 68.2 | 82.0 |
| Qwen3-32B | 67.8 | 75.6 | 64.0 | 73.8 | 44.4 | 67.9 | 80.9 |
| Llama4-Scout-16x17B | 67.9 | 74.7 | 66.8 | 73.2 | 43.5 | 67.7 | 81.2 |
| GPT-OSS-20B | 58.1 | 67.0 | 41.5 | 66.5 | 37.4 | 60.4 | 75.6 |

Many additional benchmark evaluations are provided in Section 5 of the [Apertus_Tech_Report.pdf](https://github.com/swiss-ai/apertus-tech-report/blob/main/Apertus_Tech_Report.pdf), covering the pretraining and post-training phases, multilingual evaluations in around one hundred languages, and long-context evaluations.

## Training

### Model

- **Architecture:** Transformer decoder
- **Pretraining tokens:** 15T
- **Precision:** bfloat16

### Software & hardware

- **GPUs:** 4096 GH200
- **Training Framework:** [Megatron-LM](https://github.com/swiss-ai/Megatron-LM)
- ...

### Open resources
All elements used in the training process are made openly available:
- **Training data reconstruction scripts:** [github.com/swiss-ai/pretrain-data](https://github.com/swiss-ai/pretrain-data)
- Intermediate training checkpoints are available on the different branches of this same repository


## Limitations

Apertus can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.


## Legal Aspects

#### EU AI Act Transparency Documentation and Code of Practice
- [Apertus_EU_Public_Summary.pdf](https://huggingface.co/swiss-ai/Apertus-70B-2509/blob/main/Apertus_EU_Public_Summary.pdf)
- [Apertus_EU_Code_of_Practice.pdf](https://huggingface.co/swiss-ai/Apertus-70B-2509/blob/main/Apertus_EU_Code_of_Practice.pdf)

#### Data Protection and Copyright Requests
For removal requests of personally identifiable information (PII) or of copyrighted content, please contact the respective dataset owners or us directly.

#### Output Filter for PII
- Currently, no output filter is provided.
- Please check this site regularly for an output filter that can be used on top of the Apertus LLM. The filter reflects data protection deletion requests that have been addressed to us as the developer of the Apertus LLM. It allows you to remove Personal Data contained in the model output. We strongly advise downloading and applying this output filter from this site every six months.

## Contact
To contact us, please send an email to

## Citation
```bibtex
@misc{swissai2025apertus,
      title={{Apertus: Democratizing Open and Compliant LLMs for Global Language Environments}},
      author={Alejandro Hernández-Cano and Alexander Hägele and Allen Hao Huang and Angelika Romanou and Antoni-Joan Solergibert and Barna Pasztor and Bettina Messmer and Dhia Garbaya and Eduard Frank Ďurech and Ido Hakimi and Juan García Giraldo and Mete Ismayilzada and Negar Foroutan and Skander Moalla and Tiancheng Chen and Vinko Sabolčec and Yixuan Xu and Michael Aerni and Badr AlKhamissi and Ines Altemir Marinas and Mohammad Hossein Amani and Matin Ansaripour and Ilia Badanin and Harold Benoit and Emanuela Boros and Nicholas Browning and Fabian Bösch and Maximilian Böther and Niklas Canova and Camille Challier and Clement Charmillot and Jonathan Coles and Jan Deriu and Arnout Devos and Lukas Drescher and Daniil Dzenhaliou and Maud Ehrmann and Dongyang Fan and Simin Fan and Silin Gao and Miguel Gila and María Grandury and Diba Hashemi and Alexander Hoyle and Jiaming Jiang and Mark Klein and Andrei Kucharavy and Anastasiia Kucherenko and Frederike Lübeck and Roman Machacek and Theofilos Manitaras and Andreas Marfurt and Kyle Matoba and Simon Matrenok and Henrique Mendonça and Fawzi Roberto Mohamed and Syrielle Montariol and Luca Mouchel and Sven Najem-Meyer and Jingwei Ni and Gennaro Oliva and Matteo Pagliardini and Elia Palme and Andrei Panferov and Léo Paoletti and Marco Passerini and Ivan Pavlov and Auguste Poiroux and Kaustubh Ponkshe and Nathan Ranchin and Javi Rando and Mathieu Sauser and Jakhongir Saydaliev and Muhammad Ali Sayfiddinov and Marian Schneider and Stefano Schuppli and Marco Scialanga and Andrei Semenov and Kumar Shridhar and Raghav Singhal and Anna Sotnikova and Alexander Sternfeld and Ayush Kumar Tarun and Paul Teiletche and Jannis Vamvas and Xiaozhe Yao and Hao Zhao and Alexander Ilic and Ana Klimovic and Andreas Krause and Caglar Gulcehre and David Rosenthal and Elliott Ash and Florian Tramèr and Joost VandeVondele and Livio Veraldi and Martin Rajman and Thomas Schulthess and Torsten Hoefler and Antoine Bosselut and Martin Jaggi and Imanol Schlag},
      year={2025},
      howpublished={\url{https://arxiv.org/abs/2509.14233}}
}
```
chat_template.jinja ADDED
@@ -0,0 +1,330 @@
1
+ {# Unsloth template fixes #}
2
+ {%- macro render_typescript_type(param_spec, required_params, is_nullable=false) -%}
3
+ {%- if param_spec.type == "array" -%}
4
+ {%- if param_spec['items'] -%}
5
+ {%- if param_spec['items']['type'] == "string" -%}
6
+ {{- "string[]" }}
7
+ {%- elif param_spec['items']['type'] == "number" -%}
8
+ {{- "number[]" }}
9
+ {%- elif param_spec['items']['type'] == "integer" -%}
10
+ {{- "number[]" }}
11
+ {%- elif param_spec['items']['type'] == "boolean" -%}
12
+ {{- "boolean[]" }}
13
+ {%- else -%}
14
+ {%- set inner_type = render_typescript_type(param_spec['items'], required_params) -%}
15
+ {%- if inner_type == "object | object" or inner_type|length > 50 -%}
16
+ {{- "any[]" }}
17
+ {%- else -%}
18
+ {{- inner_type + "[]" }}
19
+ {%- endif -%}
20
+ {%- endif -%}
21
+ {%- if param_spec.nullable -%}
22
+ {{- " | null" }}
23
+ {%- endif -%}
24
+ {%- else -%}
25
+ {{- "any[]" }}
26
+ {%- if param_spec.nullable -%}
27
+ {{- " | null" }}
28
+ {%- endif -%}
29
+ {%- endif -%}
30
+ {%- elif param_spec.type is defined and param_spec.type is iterable and param_spec.type is not string and param_spec.type is not mapping and param_spec.type[0] is defined -%}
31
+ {#- Handle array of types like ["object", "object"] from Union[dict, list] #}
32
+ {%- if param_spec.type | length > 1 -%}
33
+ {{- param_spec.type | join(" | ") }}
34
+ {%- else -%}
35
+ {{- param_spec.type[0] }}
36
+ {%- endif -%}
37
+ {%- elif param_spec.oneOf -%}
38
+ {#- Handle oneOf schemas - check for complex unions and fallback to any #}
39
+ {%- set has_object_variants = false -%}
40
+ {%- for variant in param_spec.oneOf -%}
41
+ {%- if variant.type == "object" -%}
42
+ {%- set has_object_variants = true -%}
43
+ {%- endif -%}
44
+ {%- endfor -%}
45
+ {%- if has_object_variants and param_spec.oneOf|length > 1 -%}
46
+ {{- "any" }}
47
+ {%- else -%}
48
+ {%- for variant in param_spec.oneOf -%}
49
+ {{- render_typescript_type(variant, required_params) -}}
50
+ {%- if variant.description %}
51
+ {{- "// " + variant.description }}
52
+ {%- endif -%}
53
+ {%- if variant.default is defined %}
54
+ {{ "// default: " + variant.default|tojson }}
55
+ {%- endif -%}
56
+ {%- if not loop.last %}
57
+ {{- " | " }}
58
+ {% endif -%}
59
+ {%- endfor -%}
60
+ {%- endif -%}
61
+ {%- elif param_spec.type == "string" -%}
62
+ {%- if param_spec.enum -%}
63
+ {{- '"' + param_spec.enum|join('" | "') + '"' -}}
64
+ {%- else -%}
65
+ {{- "string" }}
66
+ {%- if param_spec.nullable %}
67
+ {{- " | null" }}
68
+ {%- endif -%}
69
+ {%- endif -%}
70
+ {%- elif param_spec.type == "number" -%}
71
+ {{- "number" }}
72
+ {%- elif param_spec.type == "integer" -%}
73
+ {{- "number" }}
74
+ {%- elif param_spec.type == "boolean" -%}
75
+ {{- "boolean" }}
76
+ {%- elif param_spec.type == "object" -%}
77
+ {%- if param_spec.properties -%}
78
+ {{- "{\n" }}
79
+ {%- for prop_name, prop_spec in param_spec.properties.items() -%}
80
+ {{- prop_name -}}
81
+ {%- if prop_name not in (param_spec.required or []) -%}
82
+ {{- "?" }}
83
+ {%- endif -%}
84
+ {{- ": " }}
85
+ {{ render_typescript_type(prop_spec, param_spec.required or []) }}
86
+ {%- if not loop.last -%}
87
+ {{-", " }}
88
+ {%- endif -%}
89
+ {%- endfor -%}
90
+ {{- "}" }}
91
+ {%- else -%}
92
+ {{- "object" }}
93
+ {%- endif -%}
94
+ {%- else -%}
95
+ {{- "any" }}
96
+ {%- endif -%}
97
+ {%- endmacro -%}
98
+
99
+ {%- macro render_tools(tools) -%}
100
+ {%- for tool in tools %}
101
+ {%- if tool is mapping and tool.function is defined %}
102
+ {%- set tool = tool.function -%}
103
+ {%- endif %}
104
+ {{- "// " + tool.description + "\n" }}
105
+ {{- "type "+ tool.name + " = " }}
106
+ {%- if tool.parameters and tool.parameters.properties %}
107
+ {{- "(_: {\n" }}
108
+ {%- for param_name, param_spec in tool.parameters.properties.items() %}
109
+ {%- if param_spec.description %}
110
+ {{- "// " + param_spec.description + "\n" }}
111
+ {%- endif %}
112
+ {{- param_name }}
113
+ {%- if param_name not in (tool.parameters.required or []) -%}
114
+ {{- "?" }}
115
+ {%- endif -%}
116
+ {{- ": " }}
117
+ {{- render_typescript_type(param_spec, tool.parameters.required or []) }}
118
+ {%- if param_spec.default is defined -%}
119
+ {%- if param_spec.enum %}
120
+ {{- ", // default: " + param_spec.default }}
121
+ {%- elif param_spec.oneOf %}
122
+ {{- "// default: " + param_spec.default }}
123
+ {%- else %}
124
+ {{- ", // default: " + param_spec.default|tojson }}
125
+ {%- endif -%}
126
+ {%- endif -%}
127
+ {%- if not loop.last %}
128
+ {{- ",\n" }}
129
+ {%- else %}
130
+ {{- "\n" }}
131
+ {%- endif -%}
132
+ {%- endfor %}
133
+ {{- "}) => any;" }}
134
+ {%- else -%}
135
+ {{- "() => any;" }}
136
+ {%- endif -%}
137
+ {%- if not loop.last -%}
138
+ {{- "\n" }}
139
+ {%- endif -%}
140
+ {%- endfor %}
141
+ {%- endmacro -%}
142
+
143
+ {{ bos_token }}
144
+
145
+ {%- set system_token = '<|system_start|>' -%}
146
+ {%- set end_system_token = '<|system_end|>' -%}
147
+ {%- set developer_token = '<|developer_start|>' -%}
148
+ {%- set end_developer_token = '<|developer_end|>' -%}
149
+ {%- set user_token = '<|user_start|>' -%}
150
+ {%- set end_user_token = '<|user_end|>' -%}
151
+ {%- set assistant_token = '<|assistant_start|>' -%}
152
+ {%- set end_assistant_token = '<|assistant_end|>' -%}
153
+ {%- set inner_token = '<|inner_prefix|>' -%}
154
+ {%- set outer_token = '<|inner_suffix|>' -%}
155
+ {%- set tool_calls_token = '<|tools_prefix|>' -%}
156
+ {%- set end_tool_calls_token = '<|tools_suffix|>' -%}
157
+
158
+ {%- set ns = namespace(in_assistant=false, in_tool=false, in_inner=false, assistant_format=none) -%}
159
+
160
+ {%- if messages and messages[0].role == 'system' -%}
161
+ {%- if "content" in messages[0] -%}
162
+ {%- if messages[0].content is string -%}
163
+ {{ system_token + messages[0].content + end_system_token }}
164
+ {%- elif messages[0].content is mapping and "text" in messages[0].content -%}
165
+ {{ system_token + messages[0].content.text + end_system_token }}
166
+ {%- else -%}
167
+ {{- raise_exception("Invalid system message") -}}
168
+ {%- endif -%}
169
+ {%- else -%}
170
+ {{- raise_exception("Invalid system message") -}}
171
+ {%- endif -%}
172
+ {%- set loop_messages = messages[1:] -%}
173
+ {%- else -%}
174
+ {{ system_token + 'You are Apertus, a helpful assistant created by the SwissAI initiative.\nKnowledge cutoff: 2024-04\nCurrent date: ' + strftime_now('%Y-%m-%d') + end_system_token }}
175
+ {%- set loop_messages = messages -%}
176
+ {%- endif -%}
177
+
178
+ {{ developer_token + 'Deliberation: ' }}
179
+ {%- if enable_thinking is defined and enable_thinking -%}
180
+ {{ 'enabled\n' }}
181
+ {%- else -%}
182
+ {{ 'disabled\n' }}
183
+ {%- endif -%}
184
+ {%- if tools is defined and tools -%}
185
+ {{ 'Tool Capabilities:\n' + render_tools(tools) }}
186
+ {%- else -%}
187
+ {{ 'Tool Capabilities: disabled' }}
188
+ {%- endif -%}
189
+ {{ end_developer_token }}
190
+
191
+ {%- for message in loop_messages -%}
192
+ {%- if message.role == 'user' -%}
193
+ {%- set ns.in_inner = false -%}
194
+ {%- if ns.in_tool -%}
195
+ {{ ']' }}
196
+ {%- set ns.in_tool = false -%}
197
+ {%- endif -%}
198
+ {%- if ns.in_assistant -%}
199
+ {{ end_assistant_token }}
200
+ {%- set ns.in_assistant = false -%}
201
+ {%- endif -%}
202
+ {%- if "content" in message -%}
203
+ {{ user_token }}
204
+ {%- if message.content is string -%}
205
+ {{ message.content }}
206
+ {%- elif message.content is mapping and "parts" in message.content -%}
207
+ {%- set parts = message.content.parts -%}
208
+ {%- for part in parts -%}
209
+ {%- if part.type == "text" -%}
210
+ {{ part.text }}
211
+ {%- else -%}
212
+ {{- raise_exception("Invalid user part: " + part.type) -}}
213
+ {%- endif -%}
214
+ {%- endfor -%}
215
+ {%- else -%}
216
+ {{- raise_exception("Invalid user message: " + message.role) -}}
217
+ {%- endif -%}
218
+ {{ end_user_token }}
219
+ {%- endif -%}
220
+ {%- elif message.role == 'assistant' -%}
221
+ {%- if not ns.in_assistant -%}
222
+ {{ assistant_token }}
223
+ {%- set ns.in_assistant = true -%}
224
+ {%- endif -%}
225
+ {%- if "content" in message -%}
226
+ {%- if message.content is string and (ns.assistant_format is none or ns.assistant_format == "string") -%}
227
+ {%- if ns.in_tool -%}
228
+ {{ ']' }}
229
+ {%- set ns.in_tool = false -%}
230
+ {%- endif -%}
231
+ {%- set ns.assistant_format = "string" -%}
232
+ {{ message.content }}
233
+ {%- elif message.content is mapping and "blocks" in message.content and (ns.assistant_format is none or ns.assistant_format == "mapping") -%}
234
+ {%- set ns.assistant_format = "mapping" -%}
235
+ {%- set blocks = message.content.blocks -%}
236
+ {%- for block in blocks -%}
237
+ {%- if block.type == 'thoughts' -%}
238
+ {%- if ns.in_tool -%}
239
+ {{ ']' }}
240
+ {%- set ns.in_tool = false -%}
241
+ {%- endif -%}
242
+ {%- if not ns.in_inner -%}
243
+ {%- set ns.in_inner = true -%}
244
+ {{ inner_token }}
245
+ {%- endif -%}
246
+ {{ block.text }}
247
+ {%- elif block.type == 'tool_calls' -%}
248
+ {%- if ns.in_tool -%}
249
+ {{ ']' }}
250
+ {%- set ns.in_tool = false -%}
251
+ {%- endif -%}
252
+ {%- if ns.in_inner and not loop.first and block.calls|length == 1 and block.calls[0].name == 'display_answers' -%}
253
+ {%- set ns.in_inner = false -%}
254
+ {{ outer_token }}
255
+ {%- endif -%}
256
+ {{ tool_calls_token + '[' }}
257
+ {%- for tool_call in block.calls -%}
258
+ {{- '{"' + tool_call.name + '": ' + (tool_call.arguments if tool_call.arguments is string else tool_call.arguments|tojson) + '}' }}
259
+ {%- if not loop.last -%}
260
+ {{- ", " }}
261
+ {%- endif -%}
262
+ {%- endfor -%}
263
+ {{ ']' + end_tool_calls_token }}
264
+ {%- elif block.type == 'tool_outputs' -%}
265
+ {%- if ns.in_tool -%}
266
+ {{- raise_exception("Cannot have both tool outputs as separate messages and tool outputs as blocks") -}}
267
+ {%- endif -%}
268
+ {{ '[' }}
269
+ {%- for tool_output in block.outputs -%}
270
+ {{- tool_output.output }}
271
+ {%- if not loop.last -%}
272
+ {{- ", " }}
273
+ {%- endif -%}
274
+ {%- endfor -%}
275
+ {{- ']' }}
276
+ {%- elif block.type == 'response' -%}
277
+ {%- if ns.in_tool -%}
278
+ {{ ']' }}
279
+ {%- set ns.in_tool = false -%}
280
+ {%- endif -%}
281
+ {%- if (not loop.first and ns.in_inner) or (ns.in_assistant and ns.in_inner) -%}
282
+ {%- set ns.in_inner = false -%}
283
+ {{ outer_token }}
284
+ {%- endif -%}
285
+ {{ block.text }}
286
+ {%- else -%}
287
+ {{- raise_exception("Invalid assistant block type: " + block.type) -}}
288
+ {%- endif -%}
289
+ {%- endfor -%}
290
+ {%- endif -%}
291
+ {%- else -%}
292
+ {{- raise_exception("Invalid assistant message") -}}
293
+ {%- endif -%}
294
+ {%- if "tool_calls" in message and message.tool_calls -%}
295
+ {{ tool_calls_token + '[' }}
296
+ {%- for tool_call in message.tool_calls -%}
297
+ {%- if tool_call.type == 'function' -%}
298
+ {%- set function = tool_call.function -%}
299
+ {{- '{"' + function.name + '": ' + (function.arguments if function.arguments is string else function.arguments|tojson) + '}' }}
300
+ {%- if not loop.last -%}
301
+ {{- ", " }}
302
+ {%- endif -%}
303
+ {%- else -%}
304
+ {{- raise_exception("Invalid tool call type: " + tool_call.type) -}}
305
+ {%- endif -%}
306
+ {%- endfor -%}
307
+ {{ ']' + end_tool_calls_token }}
308
+ {%- endif -%}
309
+ {%- elif message.role == 'tool' -%}
310
+ {%- if not ns.in_assistant -%}
311
+ {{- raise_exception("Tool message outside of assistant") -}}
312
+ {%- endif -%}
313
+ {%- if not ns.in_tool -%}
314
+ {{ '[' }}
315
+ {%- set ns.in_tool = true -%}
316
+ {%- else -%}
317
+ {{ ", "}}
318
+ {%- endif -%}
319
+ {{ message.content }}
320
+ {%- else -%}
321
+ {{- raise_exception("Invalid message role") -}}
322
+ {%- endif -%}
323
+ {%- endfor -%}
324
+ {%- if ns.in_tool -%}
325
+ {{ ']' }}
326
+ {%- endif -%}
327
+ {%- if add_generation_prompt -%}
328
+ {{ assistant_token }}
329
+ {%- endif -%}
330
+ {# Copyright 2025-present Unsloth. Apache 2.0 License. #}
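The template above accepts assistant content either as a plain string or as a mapping with typed `blocks` (`thoughts`, `tool_calls`, `tool_outputs`, `response`): thoughts are wrapped in `<|inner_prefix|>…<|inner_suffix|>`, tool calls in `<|tools_prefix|>[…]<|tools_suffix|>`, and `response` text becomes the visible answer. A minimal sketch of that block form, with hypothetical values:

```python
# Assistant message in the block form the chat template above accepts.
# The block types and field names come from the template; the contents
# (get_weather, Bern, etc.) are made up for illustration.
assistant_message = {
    "role": "assistant",
    "content": {
        "blocks": [
            {"type": "thoughts", "text": "The user wants the weather."},
            {"type": "tool_calls", "calls": [
                {"name": "get_weather", "arguments": {"city": "Bern"}},
            ]},
            {"type": "tool_outputs", "outputs": [{"output": '{"temp_c": 21}'}]},
            {"type": "response", "text": "It is 21 °C in Bern."},
        ]
    },
}
```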
config.json ADDED
@@ -0,0 +1,175 @@
1
+ {
2
+ "architectures": [
3
+ "ApertusForCausalLM"
4
+ ],
5
+ "attention_bias": false,
6
+ "attention_dropout": 0.0,
7
+ "bos_token_id": 1,
8
+ "torch_dtype": "bfloat16",
9
+ "eos_token_id": 68,
10
+ "hidden_act": "xielu",
11
+ "hidden_dropout": 0.0,
12
+ "hidden_size": 8192,
13
+ "initializer_range": 0.02,
14
+ "intermediate_size": 43008,
15
+ "max_position_embeddings": 65536,
16
+ "mlp_bias": false,
17
+ "model_type": "apertus",
18
+ "num_attention_heads": 64,
19
+ "num_hidden_layers": 80,
20
+ "num_key_value_heads": 8,
21
+ "pad_token_id": 3,
22
+ "post_norm": false,
23
+ "qk_norm": true,
24
+ "quantization_config": {
25
+ "_load_in_4bit": true,
26
+ "_load_in_8bit": false,
27
+ "bnb_4bit_compute_dtype": "bfloat16",
28
+ "bnb_4bit_quant_storage": "uint8",
29
+ "bnb_4bit_quant_type": "nf4",
30
+ "bnb_4bit_use_double_quant": true,
31
+ "llm_int8_enable_fp32_cpu_offload": false,
32
+ "llm_int8_has_fp16_weight": false,
33
+ "llm_int8_skip_modules": [
34
+ "embed_tokens",
35
+ "embedding",
36
+ "lm_head",
37
+ "multi_modal_projector",
38
+ "merger",
39
+ "modality_projection",
40
+ "router",
41
+ "visual",
42
+ "vision_tower",
43
+ "model.layers.12.self_attn",
44
+ "model.layers.13.self_attn",
45
+ "model.layers.78.mlp",
46
+ "model.layers.62.mlp",
47
+ "model.layers.3.self_attn",
48
+ "model.layers.61.mlp",
49
+ "model.layers.4.self_attn",
50
+ "model.layers.60.mlp",
51
+ "model.layers.1.self_attn",
52
+ "model.layers.9.self_attn",
53
+ "model.layers.8.self_attn",
54
+ "model.layers.77.mlp",
55
+ "model.layers.59.mlp",
56
+ "model.layers.6.self_attn",
57
+ "model.layers.2.self_attn",
58
+ "model.layers.10.self_attn",
59
+ "model.layers.7.self_attn",
60
+ "model.layers.69.mlp",
61
+ "model.layers.5.self_attn",
62
+ "model.layers.58.mlp",
63
+ "model.layers.57.mlp",
64
+ "model.layers.3.mlp",
65
+ "model.layers.4.mlp",
66
+ "model.layers.56.mlp",
67
+ "model.layers.55.mlp",
68
+ "model.layers.5.mlp",
69
+ "model.layers.54.mlp",
70
+ "model.layers.48.mlp",
71
+ "model.layers.7.mlp",
72
+ "model.layers.53.mlp",
73
+ "model.layers.16.mlp",
74
+ "model.layers.47.mlp",
75
+ "model.layers.52.mlp",
76
+ "model.layers.18.mlp",
77
+ "model.layers.13.mlp",
78
+ "model.layers.17.mlp",
79
+ "model.layers.19.mlp",
80
+ "model.layers.14.mlp",
81
+ "model.layers.11.mlp",
82
+ "model.layers.20.mlp",
83
+ "model.layers.12.mlp",
84
+ "model.layers.21.mlp",
85
+ "model.layers.46.mlp",
86
+ "model.layers.15.mlp",
87
+ "model.layers.22.mlp",
88
+ "model.layers.23.mlp",
89
+ "model.layers.51.mlp",
90
+ "model.layers.24.mlp",
91
+ "model.layers.25.mlp",
92
+ "model.layers.26.mlp",
93
+ "model.layers.27.mlp",
94
+ "model.layers.28.mlp",
95
+ "model.layers.45.mlp",
96
+ "model.layers.6.mlp",
97
+ "model.layers.29.mlp",
98
+ "model.layers.50.mlp",
99
+ "model.layers.30.mlp",
100
+ "model.layers.8.mlp",
101
+ "model.layers.44.mlp",
102
+ "model.layers.10.mlp",
103
+ "model.layers.31.mlp",
104
+ "model.layers.43.mlp",
105
+ "model.layers.49.mlp",
106
+ "model.layers.32.mlp",
107
+ "model.layers.9.mlp",
108
+ "model.layers.33.mlp",
109
+ "model.layers.34.mlp",
110
+ "model.layers.42.mlp",
111
+ "model.layers.39.mlp",
112
+ "model.layers.41.mlp",
113
+ "model.layers.35.mlp",
114
+ "model.layers.38.mlp",
115
+ "model.layers.40.mlp",
116
+ "model.layers.37.mlp",
117
+ "model.layers.36.mlp",
118
+ "model.layers.70.mlp",
119
+ "model.layers.2.mlp",
120
+ "model.layers.0.mlp",
121
+ "model.layers.0.self_attn",
122
+ "model.layers.1.mlp",
123
+ "model.layers.0.self_attn.o_proj",
124
+ "model.layers.0.mlp.down_proj",
125
+ "model.layers.1.self_attn.o_proj",
126
+ "model.layers.2.self_attn.o_proj",
127
+ "model.layers.3.self_attn.o_proj",
128
+ "model.layers.4.self_attn.o_proj",
129
+ "model.layers.5.self_attn.o_proj",
130
+ "model.layers.6.self_attn.o_proj",
131
+ "model.layers.7.self_attn.o_proj",
132
+ "model.layers.8.self_attn.o_proj",
133
+ "model.layers.9.self_attn.o_proj",
134
+ "model.layers.10.self_attn.o_proj",
135
+ "model.layers.12.self_attn.o_proj",
136
+ "model.layers.18.self_attn.o_proj",
137
+ "model.layers.25.self_attn.o_proj",
138
+ "model.layers.27.self_attn.o_proj",
139
+ "model.layers.31.self_attn.o_proj",
140
+ "model.layers.32.self_attn.o_proj",
141
+ "model.layers.33.self_attn.o_proj",
142
+ "model.layers.35.self_attn.o_proj",
143
+ "model.layers.46.self_attn.o_proj",
144
+ "model.layers.51.self_attn.o_proj",
145
+ "model.layers.59.self_attn.o_proj",
146
+ "model.layers.68.self_attn.o_proj",
147
+ "model.layers.69.self_attn.o_proj",
148
+ "model.layers.70.self_attn.o_proj",
149
+ "model.layers.71.self_attn.o_proj",
150
+ "model.layers.72.self_attn.o_proj",
151
+ "model.layers.73.self_attn.o_proj",
152
+ "model.layers.76.self_attn.o_proj",
153
+ "model.layers.79.self_attn.o_proj"
154
+ ],
155
+ "llm_int8_threshold": 6.0,
156
+ "load_in_4bit": true,
157
+ "load_in_8bit": false,
158
+ "quant_method": "bitsandbytes"
159
+ },
160
+ "rms_norm_eps": 1e-05,
161
+ "rope_scaling": {
162
+ "factor": 8.0,
163
+ "high_freq_factor": 4.0,
164
+ "low_freq_factor": 1.0,
165
+ "original_max_position_embeddings": 8192,
166
+ "rope_type": "llama3",
167
+ "type": "llama3"
168
+ },
169
+ "rope_theta": 12000000,
170
+ "tie_word_embeddings": false,
171
+ "transformers_version": "4.56.2",
172
+ "unsloth_fixed": true,
173
+ "use_cache": false,
174
+ "vocab_size": 131072
175
+ }
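As a sanity check on the `rope_scaling` block above: the llama3-style scaling extends the original 8,192-token training window by `factor` 8, which matches the config's `max_position_embeddings` of 65,536.

```python
# Values copied from the rope_scaling block in config.json above.
original_window = 8192   # original_max_position_embeddings
scaling_factor = 8.0     # factor
extended_window = int(original_window * scaling_factor)
```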
generation_config.json ADDED
@@ -0,0 +1,12 @@
1
+ {
2
+ "_from_model_config": true,
3
+ "bos_token_id": 1,
4
+ "eos_token_id": [
5
+ 2,
6
+ 68,
7
+ 72
8
+ ],
9
+ "max_length": 65536,
10
+ "pad_token_id": 3,
11
+ "transformers_version": "4.56.2"
12
+ }
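The `eos_token_id` field above lists several stop tokens; generation halts as soon as any one of them is produced. An illustrative sketch of that stopping rule (not the transformers implementation):

```python
# Stop-token ids from generation_config.json above.
EOS_IDS = {2, 68, 72}

def truncate_at_eos(token_ids, eos_ids=EOS_IDS):
    """Return generated ids up to (excluding) the first end-of-sequence token."""
    out = []
    for t in token_ids:
        if t in eos_ids:
            break
        out.append(t)
    return out
```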
model-00001-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3999deac891c7e0433ef0926e0edb571e690263e38f20a074c75ff0a872a3913
3
+ size 4865428944
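The `.safetensors` entries in this commit are Git LFS pointer files: three `key value` lines giving the spec version, the content hash, and the blob size in bytes. A minimal, illustrative parser:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a {key: value} dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer content of model-00001-of-00025.safetensors above:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:3999deac891c7e0433ef0926e0edb571e690263e38f20a074c75ff0a872a3913
size 4865428944"""
fields = parse_lfs_pointer(pointer)
```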
model-00002-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e0c74e369f63d05898d3ced3b198dfb7b3143d60568dbb44bd5dd8f7377daf1a
3
+ size 4429289408
model-00003-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c112bfb38d0ca43e26f5b16fc66f7a93433cd2e8fe2fc78ad5c866b237547df8
3
+ size 4999714024
model-00004-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:29bc53591567cd8d253c3f9f383563eae112a030826c24f339de13cb83b72325
3
+ size 4966160096
model-00005-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8f4e8cb8102125ff1bba681f10e90beb94b9ff9619a6e0f88d247887ebbe9139
3
+ size 4909843832
model-00006-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bab89df539b4b06529630efb12b906cbdd86069f6aeeb4349968170eaae0f54b
+ size 4763657343
model-00007-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f15a7d1839b0f2a5271c2c480dc128ac1211dfaac3497cafcbdf720cb9a98ecb
+ size 4561262712
model-00008-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c3e9fa23c5817000c7560e8222d04ff6e884d7f31067d59b9fd6ae97371d6782
+ size 4461666247
model-00009-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8fa04f1b76816ba9135760b80fb151af76787c55254f1eb70dbdda196c16a910
+ size 4561262713
model-00010-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d94b2fc85bbcd4a1c9c05d25d5751565da6653ba332a67c905e00328aaa65bb8
+ size 4561262714
model-00011-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1838643886fbc2befaf24cec356a05f1343b83a8016594e2e3bebffce40eab36
+ size 4561262713
model-00012-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dab1313e344103672979528b17239d134ac098702cea1dfb81a2b93e102839df
+ size 4660859175
model-00013-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3893b68ccf4d2fd36c1b309680e748d36476f9dde04be8303de57093ed4eed4
+ size 4561262713
model-00014-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7cc35f6283eed02402401e90652f95855db04b651368d0689fbd1b6cb8df63ba
+ size 4461666247
model-00015-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:55e8a67b1d532edc7286b175796a4edc22d8370d11906b593107b5a8b73d39bb
+ size 4461666247
model-00016-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:faf5472fa035a2c74884542e8333d0767c4e70a6051bf6db726dc27283a10a8c
+ size 4561262713
model-00017-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c05432daf738ebc558bf2abc51432042897f5674f1f5a32e4b352d74c6dba8c
+ size 4461666246
model-00018-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:edd0916aeeb998ebc1a83c42ad67d343412518838e7519ca314453292a3df69a
+ size 4561262715
model-00019-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d463b98d9517afcbfa69c18697546e3d8754355e299c391fff6a99002c2ba3d7
+ size 4461666245
model-00020-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b9c508d51f9f5a758c57bac720bdc84eb22215f2d7761350d914f1fa8fe91c24
+ size 4461666245
model-00021-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fe656b231411d4a4ca4db0449945a7f01303e630894ffb456a922051d6228d90
+ size 4561262713
model-00022-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91c4fde2b48f9081cd2f932458bae0914dcc329316f4fa66e121b99115a80866
+ size 4921971104
model-00023-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac3dd0b488611ed2c8561feaf110aa9f6f79a53a2dc997e1268e24d495e3c6df
+ size 4978543510
model-00024-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:844be231471aa84fc2d95693cfee2ffdbea8d29546151640292987e7dee017cc
+ size 4982690391
model-00025-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f5cfec32ee5e7be1417a8534ef53f2061ce5f8dac0c60c35be111bc99a0bb092
+ size 2645259802
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<|assistant_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb201fb226cde11f66c3cf51c5344fb37b1611f00c21e75c324546d854eff2e1
+ size 17078480
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff