---
base_model:
- Alelcv27/llama3-1b-math-dpo
- Alelcv27/llama3-1b-code-dpo
- meta-llama/Llama-3.2-1B
library_name: transformers
tags:
- mergekit
- merge
---
# llama3-1b-nuslerp-v2

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the NuSLERP merge method, with [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) as the base model.

### Models Merged

The following models were included in the merge:
* [Alelcv27/llama3-1b-math-dpo](https://huggingface.co/Alelcv27/llama3-1b-math-dpo)
* [Alelcv27/llama3-1b-code-dpo](https://huggingface.co/Alelcv27/llama3-1b-code-dpo)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: meta-llama/Llama-3.2-1B
dtype: float16
merge_method: nuslerp
modules:
  default:
    slices:
    - sources:
      - layer_range: [0, 16]
        model: Alelcv27/llama3-1b-math-dpo
        parameters:
          weight: 0.5
      - layer_range: [0, 16]
        model: Alelcv27/llama3-1b-code-dpo
        parameters:
          weight: 0.5
      - layer_range: [0, 16]
        model: meta-llama/Llama-3.2-1B
```
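
### Usage

The merge can be reproduced by saving the configuration above to a file and passing it to mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yaml ./llama3-1b-nuslerp-v2`). The resulting checkpoint loads like any other causal LM with `transformers`; below is a minimal sketch, assuming the merged weights are published under the repository id `Alelcv27/llama3-1b-nuslerp-v2` (inferred from the card title, adjust to the actual path).

```python
# Minimal usage sketch for the merged model.
# NOTE: the repository id below is an assumption based on the card title;
# replace it with the actual path to the merged weights if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alelcv27/llama3-1b-nuslerp-v2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

# Plain text continuation; the base Llama-3.2-1B model has no chat template.
prompt = "Solve step by step: 12 * 7 ="
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```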