# Tcomanr-V2_6
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with Qwen/Qwen3-4B-Thinking-2507 as the base model.
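For intuition, TIES merges fine-tuned models by building task vectors (fine-tuned weights minus base weights), trimming each to its largest-magnitude entries, electing a per-parameter sign by weighted majority, and averaging only the values that agree with that sign before adding the result back to the base. The snippet below is a minimal, illustrative sketch of that idea for a single tensor; it is not the mergekit implementation, and the `density` default is an assumption chosen for demonstration.

```python
# Minimal, illustrative sketch of TIES-style merging for a single tensor.
# NOT the mergekit implementation; function name and density value are
# assumptions for demonstration only.
import torch

def ties_merge(base: torch.Tensor, finetuned: list[torch.Tensor],
               weights: list[float], density: float = 0.2,
               lam: float = 1.0) -> torch.Tensor:
    deltas = []
    for ft, w in zip(finetuned, weights):
        delta = (ft - base) * w                    # weighted task vector
        k = max(1, int(density * delta.numel()))   # "trim": keep top-k magnitudes
        threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        deltas.append(torch.where(delta.abs() >= threshold, delta,
                                  torch.zeros_like(delta)))
    stacked = torch.stack(deltas)
    elected_sign = torch.sign(stacked.sum(dim=0))  # "elect sign" by weighted majority
    agree = torch.sign(stacked) == elected_sign    # keep entries matching the elected sign
    merged = torch.where(agree, stacked, torch.zeros_like(stacked)).sum(dim=0)
    counts = agree.sum(dim=0).clamp(min=1)         # average over agreeing contributors
    return base + lam * merged / counts
```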
### Models Merged
The following models were included in the merge:
- ValiantLabs/Qwen3-4B-Esper3
- ValiantLabs/Qwen3-4B-ShiningValiant3
- ertghiu256/Qwen3-Hermes-4b
- ertghiu256/qwen-3-4b-mixture-of-thought
- ertghiu256/qwen3-4b-code-reasoning
- janhq/Jan-v1-4B
- ertghiu256/Qwen3-4b-2507-Thinking-math-and-code
- quelmap/Lightning-4b
- GetSoloTech/Qwen3-Code-Reasoning-4B
- Qwen/Qwen3-4b-Instruct-2507
- ertghiu256/qwen3-multi-reasoner
- Tesslate/WEBGEN-4B-Preview
- huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated
- ertghiu256/qwen3-math-reasoner
- ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3
- Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2
- Tesslate/UIGEN-FX-4B-Preview
- POLARIS-Project/Polaris-4B-Preview
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: ertghiu256/qwen3-math-reasoner
    parameters:
      weight: 0.85
  - model: ertghiu256/qwen3-4b-code-reasoning
    parameters:
      weight: 0.9
  - model: ertghiu256/qwen-3-4b-mixture-of-thought
    parameters:
      weight: 1.0
  - model: POLARIS-Project/Polaris-4B-Preview
    parameters:
      weight: 1.0
  - model: ertghiu256/qwen3-multi-reasoner
    parameters:
      weight: 0.85
  - model: ertghiu256/Qwen3-Hermes-4b
    parameters:
      weight: 0.7
  - model: ValiantLabs/Qwen3-4B-Esper3
    parameters:
      weight: 0.8
  - model: Tesslate/WEBGEN-4B-Preview
    parameters:
      weight: 1.0
  - model: Tesslate/UIGEN-FX-4B-Preview
    parameters:
      weight: 0.95
  - model: ValiantLabs/Qwen3-4B-ShiningValiant3
    parameters:
      weight: 0.8
  - model: huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated
    parameters:
      weight: 0.85
  - model: Qwen/Qwen3-4B-Thinking-2507
    parameters:
      weight: 1.0
  - model: Qwen/Qwen3-4b-Instruct-2507
    parameters:
      weight: 1.0
  - model: GetSoloTech/Qwen3-Code-Reasoning-4B
    parameters:
      weight: 0.95
  - model: ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3
    parameters:
      weight: 1.0
  - model: janhq/Jan-v1-4B
    parameters:
      weight: 0.25
  - model: Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2
    parameters:
      weight: 0.85
  - model: quelmap/Lightning-4b
    parameters:
      weight: 0.75
  - model: ertghiu256/Qwen3-4b-2507-Thinking-math-and-code
    parameters:
      weight: 1.0
merge_method: ties
base_model: Qwen/Qwen3-4B-Thinking-2507
parameters:
  normalize: true
  int8_mask: true
  lambda: 1.0
dtype: float16
```
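To reproduce the merge, the configuration above can be saved to a file and passed to mergekit's `mergekit-yaml` CLI (for example, `mergekit-yaml config.yaml ./merged-model`). The snippet below is a minimal usage sketch for loading the merged model with Hugging Face Transformers, assuming it is published under the repository id `ertghiu256/Qwen3-4b-tcomanr-merge-v2.6` referenced by this card; the prompt is only an example.

```python
# Minimal usage sketch: load the merged model with Hugging Face Transformers.
# The repository id is taken from this model card; adjust if it differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ertghiu256/Qwen3-4b-tcomanr-merge-v2.6"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a short Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```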