---
base_model:
- DavidAU/L3.1-RP-Hero-BigTalker-8B
- Sao10K/L3-8B-Lunaris-v1
- mergekit-community/L3-Boshima-a
- Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
library_name: transformers
tags:
- mergekit
- merge
---
# L3.1-Artemis-g2-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [DavidAU/L3.1-RP-Hero-BigTalker-8B](https://huggingface.co/DavidAU/L3.1-RP-Hero-BigTalker-8B) as the base model.
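Model Stock works per weight tensor: it measures the angle between the task vectors of the fine-tuned checkpoints (their deltas from the pretrained base) and uses that angle to decide how far to move from the base toward the average of the fine-tuned weights. The snippet below is a minimal sketch of that geometric rule, not mergekit's actual implementation; it assumes plain `torch` tensors and ignores details such as the per-model `weight` values and dtype handling in the config further down.

```python
import torch
import torch.nn.functional as F

def model_stock_merge(base: torch.Tensor, tuned: list[torch.Tensor]) -> torch.Tensor:
    """Sketch of the Model Stock rule (arXiv:2403.19522) for one tensor.

    `base` is the pretrained weight; `tuned` holds the fine-tuned
    checkpoints' weights for the same parameter.
    """
    k = len(tuned)
    deltas = [(t - base).flatten() for t in tuned]  # task vectors

    # Average pairwise cosine similarity between task vectors.
    cos_vals = [
        F.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(k) for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(cos_vals).mean()

    # Closed-form interpolation ratio from the paper.
    t = k * cos_theta / (1 + (k - 1) * cos_theta)

    # Pull the average of the fine-tuned weights back toward the base.
    w_avg = torch.stack(tuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```

The intuition: the more the fine-tuned models disagree (small cosine similarity), the more of their delta is treated as noise, and the merged weights stay closer to the pretrained base.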
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-8B-Lunaris-v1](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1)
* [mergekit-community/L3-Boshima-a](https://huggingface.co/mergekit-community/L3-Boshima-a)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float32
out_dtype: bfloat16
merge_method: model_stock
base_model: DavidAU/L3.1-RP-Hero-BigTalker-8B
models:
  - model: DavidAU/L3.1-RP-Hero-BigTalker-8B
    parameters:
      weight: 0.8
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
    parameters:
      weight: 0.8
  - model: mergekit-community/L3-Boshima-a
    parameters:
      weight: 0.8
  - model: Sao10K/L3-8B-Lunaris-v1
    parameters:
      weight: 1
parameters:
  normalize: true
```
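To reproduce the merge, save the configuration above to a file and run mergekit's CLI, e.g. `mergekit-yaml config.yaml ./L3.1-Artemis-g2-8B`. The result is a standard Llama-3.1-architecture checkpoint, so it loads like any other causal LM in `transformers`. The snippet below is a generic usage sketch; the repo id is a placeholder, since this card does not state the final upload path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; replace with the actual upload location of this merge.
model_id = "your-namespace/L3.1-Artemis-g2-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's out_dtype
    device_map="auto",
)

prompt = "Write a short scene introducing a stoic starship captain."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```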