VibeThinker-1.5B-mlx-4bit

This is a 4-bit quantized version of the VibeThinker-1.5B model, optimized for Apple Silicon using the MLX framework.

Model Details

This repository contains a 4-bit MLX quantization of VibeThinker-1.5B, which is built on the Qwen/Qwen2.5-1.5B base model.

Installation

To use this model, you need to install the mlx and mlx-lm packages:

pip install mlx mlx-lm
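
To quickly confirm the install, importing both packages and printing the MLX version should work (current mlx releases expose mlx.core.__version__):

python -c "import mlx.core as mx, mlx_lm; print('MLX', mx.__version__)"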

Usage

You can generate text using the mlx_lm.generate command:

mlx_lm.generate --model mlx-community/VibeThinker-1.5B-mlx-4bit --prompt "A café sells 6 types of drinks. A customer orders exactly 8 drinks (types may repeat), but no drink type may be chosen more than 3 times. How many distinct combinations of 8 drinks can the customer order? Give your answer as an integer." --temp 0.6 --top-p 0.95 --top-k -1 --max-tokens 40960
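
For reference, the counting problem in this prompt is small enough to brute-force in a few lines, which makes it easy to check the model's final answer (546). A quick sketch, independent of the model:

# Count the multisets of 8 drinks drawn from 6 types, each type used at most 3 times.
from itertools import product

combos = sum(
    1
    for counts in product(range(4), repeat=6)  # 0-3 servings of each of the 6 types
    if sum(counts) == 8
)
print(combos)  # 546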

To run the same generation from Python:

from mlx_lm import load, generate

model, tokenizer = load("mlx-community/VibeThinker-1.5B-mlx-4bit")
# Allow long outputs: the model can emit lengthy reasoning traces (the CLI example uses 40960).
response = generate(model, tokenizer, prompt="A café sells 6 types of drinks. A customer orders exactly 8 drinks (types may repeat), but no drink type may be chosen more than 3 times. How many distinct combinations of 8 drinks can the customer order? Give your answer as an integer.", max_tokens=40960, verbose=True)
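
The CLI applies the tokenizer's chat template and the sampling flags automatically; from Python these are set explicitly. A sketch assuming a recent mlx-lm release (where make_sampler and stream_generate are available and stream_generate yields chunks with a .text field), with an illustrative prompt:

from mlx_lm import load, stream_generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("mlx-community/VibeThinker-1.5B-mlx-4bit")

# Wrap the question in the model's chat template (the CLI does this automatically).
messages = [{"role": "user", "content": "How many positive divisors does 360 have?"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Reproduce the sampling settings from the CLI example above.
sampler = make_sampler(temp=0.6, top_p=0.95)

# Stream tokens as they are generated; useful for long reasoning traces.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=40960, sampler=sampler):
    print(chunk.text, end="", flush=True)
print()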

Original Model: VibeThinker-1.5B

🚨 We recommend using this model for competitive-style math and algorithm coding problems. It performs best when questions are asked in English. We do not advise using it for other tasks, as this is an experimental release aimed at exploring the reasoning capabilities of small models.

๐Ÿ“ Github   |   ๐Ÿค– Model Scope |   ๐Ÿ“„ Techical Report

Introduction

VibeThinker-1.5B is a 1.5-billion-parameter dense language model. With a total training cost of only $7,800, it achieves reasoning performance comparable to much larger models such as GPT OSS-20B Medium.


Key Performance Data

💡 Mathematical Reasoning: On the three major math benchmarks AIME24, AIME25, and HMMT25, its scores (80.3, 74.4, and 50.4, respectively) all surpass those of the initial DeepSeek R1 model, which has over 400 times the parameters (scores of 79.8, 70.0, and 41.7, respectively).

🌱 Code Generation: It achieved scores of 55.9 on LiveCodeBench v5 and 51.1 on v6. Its v6 score slightly leads Magistral Medium (50.3), underscoring its strong reasoning performance.


๐Ÿ” On the AIME 25 benchmark, VibeThinker-1.5B significantly extends the Pareto frontier of reasoning accuracy versus model scale, demonstrating that exceptional performance can be achieved with extreme parameter efficiency.


Training Pipeline


VibeThinker-1.5B's core innovation lies in the "Spectrum-to-Signal Principle" (SSP) training framework: it first explores solution diversity during the Supervised Fine-Tuning (SFT) stage, and then optimizes its policy to reinforce correct signals in the Reinforcement Learning (RL) stage. By systematically integrating these two phases, our approach establishes diversity as the central technical design principle, enabling VibeThinker-1.5B to achieve robust performance that surpasses conventional training paradigms.
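
As a loose illustration of the "spectrum" side of SSP, and explicitly not the authors' training code: choosing an SFT checkpoint for solution diversity can be framed as maximizing pass@k over sampled solutions instead of single-shot accuracy. A toy sketch with the standard unbiased pass@k estimator, where the checkpoints and sample counts are hypothetical:

# Toy illustration (not the VibeThinker training code): rank two SFT
# checkpoints by mean pass@k, rewarding the one whose sampled solutions
# cover more problems rather than the one with the most total corrects.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples drawn, c of them correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical (n samples, c correct) per problem for two checkpoints.
ckpt_a = [(16, 2), (16, 0), (16, 5)]  # 7 correct in total, one problem never solved
ckpt_b = [(16, 1), (16, 1), (16, 3)]  # only 5 correct, but every problem is covered

for name, results in [("A", ckpt_a), ("B", ckpt_b)]:
    score = sum(pass_at_k(n, c, k=8) for n, c in results) / len(results)
    print(f"checkpoint {name}: mean pass@8 = {score:.3f}")  # B ranks higher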
