1. Model Introduction
LimiX is a new class of tabular AI model designed to overcome one of modern machine learning's longest-standing bottlenecks: structured data. With only 2M parameters, LimiX-2M sets a new state of the art across classification, regression, and missing-value imputation, surpassing XGBoost, CatBoost, AutoGluon, and TabPFN while approaching the performance of the larger LimiX-16M. Its lightweight, training-free design makes advanced tabular modeling accessible on ordinary hardware while preserving full transparency and offline deployability.
Key Features
Unified Tabular Reasoning:
Designed end-to-end for multi-task tabular intelligence, enabling a single model to handle classification, regression, and imputation without additional tuning, preprocessing, or task-specific fine-tuning.
Training-Free, Context-Driven Inference:
Operates directly through in-context learning: no training, no hyperparameter tuning, no preprocessing pipelines. LimiX automatically interprets and processes raw tabular inputs for immediate use.
Lightweight & Efficient Deployment:
A compact 2M-parameter architecture enables fast inference and smooth operation on standard CPUs and laptops, dramatically reducing compute requirements for advanced tabular modeling.
2. Model Architecture & Pretraining Procedures
LimiX adopts a 12-block transformer architecture with axis-wise attention over features and samples, supported by pre-norm LayerNorm for stable scaling. The LimiX-16M variant uses an asymmetric design (two feature-axis passes and one sample-axis pass per block) to strengthen feature-interaction modeling in heterogeneous schemas with minimal overhead.
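For intuition, a minimal PyTorch sketch of an axis-wise attention block might look as follows; dimensions, ordering, and module names here are illustrative assumptions, not the actual LimiX implementation:

```python
import torch
import torch.nn as nn

class AxisWiseBlock(nn.Module):
    """Illustrative axis-wise attention block (not the exact LimiX code).

    Input: x of shape (batch, n_samples, n_features, d_model).
    Attention runs first across the feature axis (within each sample row),
    then across the sample axis (within each feature column), each with a
    pre-applied LayerNorm and a residual connection.
    """
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.norm_f = nn.LayerNorm(d_model)
        self.norm_s = nn.LayerNorm(d_model)
        self.attn_f = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.attn_s = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, s, f, d = x.shape
        # Feature-axis attention: fold (batch, samples) into one batch dim.
        h = self.norm_f(x).reshape(b * s, f, d)
        x = x + self.attn_f(h, h, h, need_weights=False)[0].reshape(b, s, f, d)
        # Sample-axis attention: fold (batch, features) into one batch dim.
        h = self.norm_s(x).transpose(1, 2).reshape(b * f, s, d)
        h = self.attn_s(h, h, h, need_weights=False)[0]
        return x + h.reshape(b, f, s, d).transpose(1, 2)
```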
To learn the joint distribution of tabular variables, LimiX is pretrained through Context-Conditional Masked Modeling (CCMM). By masking table cells and conditioning predictions on a small set of context rows, the model internalizes a wide range of conditional dependencies while adapting to new datasets without training or labels.
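As a loose sketch of what a CCMM training example might look like (the actual pretraining pipeline, masking distribution, and naming are assumptions here), consider:

```python
import numpy as np

def make_ccmm_example(table: np.ndarray, n_context: int, mask_rate: float = 0.15,
                      rng: np.random.Generator | None = None):
    """Build one illustrative CCMM training example (assumes a float-typed table)."""
    rng = rng or np.random.default_rng()
    perm = rng.permutation(len(table))
    context = table[perm[:n_context]]           # conditioning rows, fully observed
    targets = table[perm[n_context:]].copy()    # rows whose cells may be masked
    mask = rng.random(targets.shape) < mask_rate
    masked = targets.copy()
    masked[mask] = np.nan                       # NaN marks a cell to be predicted
    # The model would be trained to reconstruct targets[mask]
    # given the context rows and the partially masked rows.
    return context, masked, targets[mask], mask
```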
3. Evaluation Results
Classification
On the BCCO-CLS benchmark, LimiX-16M leads decisively, outperforming AutoGluon and all PFN variants in mean AUC, accuracy, and F1, with substantially better ranks. LimiX-2M also clearly leads these models on most metrics, with AUC rank as the only exception.
Regression
LimiX-16M achieves the best overall scores and rankings on TALENT-REG, with the PFN models and LimiX-2M emerging as close runners-up in both R² and RMSE.
Missing Value Imputation
LimiX introduces the first training-free, in-context approach for missing-value imputation on entirely new datasets. Across a wide set of real-world benchmarks, LimiX-16M delivers the best performance, achieving lower RMSE and error rates than classical and learned imputers including KNN, MICE, MissForest, GAIN, and MIWAE. Unlike all prior methods, which depend on additional fitting, LimiX performs imputation directly from context with consistently superior accuracy.
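For context, the snippet below shows a typical evaluation protocol for this task using only scikit-learn: hide a random subset of cells, impute, and score RMSE on the hidden cells. The LimiX imputation entry point itself is not part of this snippet; see the repository and technical report for it.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.impute import KNNImputer

# Hide 20% of cells at random, then measure reconstruction RMSE.
X, _ = load_diabetes(return_X_y=True)
rng = np.random.default_rng(0)
mask = rng.random(X.shape) < 0.2
X_missing = X.copy()
X_missing[mask] = np.nan

X_knn = KNNImputer(n_neighbors=5).fit_transform(X_missing)
rmse = np.sqrt(np.mean((X_knn[mask] - X[mask]) ** 2))
print(f"KNN imputation RMSE: {rmse:.4f}")

# LimiX would fill the same NaNs directly from context, without fitting.
```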
Finetune
Using an attention-based retrieval–guided downsampling strategy, LimiX-16M fine-tunes on compact, highly relevant in-context episodes rather than full long contexts, substantially improving sample efficiency and reducing training cost. This approach enables LimiX-16M to significantly outperform strong baselines such as TabDPT and TabPFN-v2, with notable AUC gains across BCCO-CLS datasets.
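A minimal sketch of the downsampling idea, with plain k-nearest-neighbour retrieval standing in for LimiX's attention-based retrieval (the function name and defaults are illustrative):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def retrieval_downsample(X_train, y_train, X_query, k=256):
    """Select a compact, query-relevant context episode (illustrative sketch).

    LimiX uses attention-based retrieval; k-nearest neighbours in raw
    feature space merely stands in for it here.
    """
    k = min(k, len(X_train))
    knn = NearestNeighbors(n_neighbors=k).fit(X_train)
    idx = np.unique(knn.kneighbors(X_query, return_distance=False).ravel())
    return X_train[idx], y_train[idx]
```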
4. Deployment
Environment Preparation
We recommend deploying with Docker. Download the Dockerfile from the repository and build the image with:
```bash
docker build --network=host -t limix/infe:v1 --build-arg FROM_IMAGES=nvidia/cuda:12.2.0-base-ubuntu22.04 -f Dockerfile .
```

For manual deployment, install the dependencies:
```bash
# Download the precompiled flash_attn wheel
wget -O flash_attn-2.8.0.post2+cu12torch2.7cxx11abiTRUE-cp312-cp312-linux_x86_64.whl https://github.com/Dao-AILab/flash-attention/releases/download/v2.8.0.post2/flash_attn-2.8.0.post2+cu12torch2.7cxx11abiTRUE-cp312-cp312-linux_x86_64.whl

# Install basic dependencies (requires Python 3.12; the wheel above is built for cp312)
pip install torch==2.7.1 torchvision==0.22.1 torchaudio==2.7.1
pip install flash_attn-2.8.0.post2+cu12torch2.7cxx11abiTRUE-cp312-cp312-linux_x86_64.whl
pip install scikit-learn einops huggingface-hub matplotlib networkx numpy pandas scipy tqdm typing_extensions xgboost kditransform hyperopt
```
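After installation, a quick sanity check confirms that PyTorch sees a CUDA device and that flash-attention imports cleanly:

```python
import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
try:
    import flash_attn
    print("flash_attn:", flash_attn.__version__)
except ImportError as exc:
    print("flash_attn not installed correctly:", exc)
```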
Model Download
Download model weights via Hugging Face Hub:
```python
from huggingface_hub import hf_hub_download

model_file = hf_hub_download(repo_id="stableai-org/LimiX-16M", filename="LimiX-16M.ckpt", local_dir="./cache")
```
5. Model Usage
Classification Task Example
```python
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from huggingface_hub import hf_hub_download
import numpy as np
import os, sys

os.environ["RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "29500"

ROOT_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
if ROOT_DIR not in sys.path:
    sys.path.insert(0, ROOT_DIR)

from inference.predictor import LimiXPredictor

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)

model_file = hf_hub_download(repo_id="stableai-org/LimiX-16M", filename="LimiX-16M.ckpt", local_dir="./cache")
clf = LimiXPredictor(device='cuda', model_path=model_file, inference_config='config/cls_default_retrieval.json')
prediction = clf.predict(X_train, y_train, X_test)

print("roc_auc_score:", roc_auc_score(y_test, prediction[:, 1]))
print("accuracy_score:", accuracy_score(y_test, np.argmax(prediction, axis=1)))
```

Regression Task Example
```python
from functools import partial
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from huggingface_hub import hf_hub_download

try:
    from sklearn.metrics import root_mean_squared_error as mean_squared_error
except ImportError:
    from sklearn.metrics import mean_squared_error
    mean_squared_error = partial(mean_squared_error, squared=False)

import os, sys

os.environ["RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "29500"

ROOT_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
if ROOT_DIR not in sys.path:
    sys.path.insert(0, ROOT_DIR)

from inference.predictor import LimiXPredictor

house_data = fetch_california_housing()
X, y = house_data.data, house_data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

# Normalize targets with training-set statistics
y_mean = y_train.mean()
y_std = y_train.std()
y_train_normalized = (y_train - y_mean) / y_std
y_test_normalized = (y_test - y_mean) / y_std

model_path = hf_hub_download(repo_id="stableai-org/LimiX-16M", filename="LimiX-16M.ckpt", local_dir="./cache")
model = LimiXPredictor(device='cuda', model_path=model_path, inference_config='config/reg_default_retrieval.json')
y_pred = model.predict(X_train, y_train_normalized, X_test)

# Compute RMSE and R²
y_pred = y_pred.to('cpu').numpy()
rmse = mean_squared_error(y_test_normalized, y_pred)
r2 = r2_score(y_test_normalized, y_pred)
print(f'RMSE: {rmse}')
print(f'R2: {r2}')
```
Ensemble Inference Based on Sample Retrieval
For a detailed technical introduction to Ensemble Inference Based on Sample Retrieval, please refer to the technical report.
Given its inference speed and memory requirements, ensemble inference based on sample retrieval is currently supported only on hardware with specifications at or above the NVIDIA RTX 4090 GPU.
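Conceptually, the ensemble runs the predictor under several retrieved contexts and averages the resulting class probabilities. The sketch below is illustrative only, reusing `clf`, `X_train`, `y_train`, and `X_test` from the classification example above and the hypothetical `retrieval_downsample` helper sketched under Finetune; the provided scripts implement the actual ensemble:

```python
import numpy as np

# Average class probabilities over several retrieved context sizes
# (illustrative only; inference_classifier.py implements the real ensemble).
probs = []
for k in (128, 256, 512):
    Xc, yc = retrieval_downsample(X_train, y_train, X_test, k=k)
    probs.append(clf.predict(Xc, yc, X_test))
prediction = np.mean(probs, axis=0)
```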
Classification Task
```bash
python inference_classifier.py --save_name your_save_name --inference_config_path path_to_retrieval_config --data_dir path_to_data
```
Regression Task
```bash
python inference_regression.py --save_name your_save_name --inference_config_path path_to_retrieval_config --data_dir path_to_data
```
Customizing Data Preprocessing for Inference Tasks
First, generate the inference configuration file:

```python
# generate_inference_config() is provided by the repository; consult the
# repository documentation for the available preprocessing options.
generate_inference_config()
```
Classification Task
Single GPU or CPU
```bash
python inference_classifier.py --save_name your_save_name --inference_config_path path_to_retrieval_config --data_dir path_to_data
```
Multi-GPU Distributed Inference
```bash
torchrun --nproc_per_node=8 inference_classifier.py --save_name your_save_name --inference_config_path path_to_retrieval_config --data_dir path_to_data --inference_with_DDP
```
Regression Task
Single GPU or CPU
```bash
python inference_regression.py --save_name your_save_name --inference_config_path path_to_retrieval_config --data_dir path_to_data
```
Multi-GPU Distributed Inference
```bash
torchrun --nproc_per_node=8 inference_regression.py --save_name your_save_name --inference_config_path path_to_retrieval_config --data_dir path_to_data --inference_with_DDP
```
Retrieval Optimization Project
This project implements an optimized retrieval system. To achieve the best performance, we utilize Optuna for hyperparameter tuning of retrieval parameters.
Installation
Ensure you have the required dependencies installed:
```bash
pip install optuna
```
Usage
To tune the retrieval parameters for your own dataset, run a search as shown below:

```python
searchInference = RetrievalSearchHyperparameters(
    dict(device_id=0, model_path=model_path),
    X_train, y_train, X_test, y_test,
)
config, result = searchInference.search(
    n_trials=10,
    metric="AUC",
    inference_config='config/cls_default_retrieval.json',
    task_type="cls",
)
```
This will launch an Optuna study to find the best combination of retrieval parameters for your specific dataset and use case.
6. Tool Invocation
The LimiX model can integrate with various toolchains for extended functionality:
- Data Processing Tools: Integrates with `pandas` and `scikit-learn` for data cleaning, feature engineering, and result evaluation (e.g., `r2_score`, `mean_squared_error`).
- Hyperparameter Optimization Tools: Optimize retrieval parameters via the `hyperopt` library, for example:

```python
# Hyperparameter search example (refer to inference_regression.py);
# rng and model are assumed to come from the surrounding script.
from utils.inference_utils import sample_inferece_params

hyperopt_config, base_config = sample_inferece_params(rng, 2, 4)
model.set_inference_config(inference_config=hyperopt_config, **base_config)
```

- Distributed Inference: Supports DDP (Distributed Data Parallel) mode for multi-GPU acceleration via `torch.distributed`.
7. License
Code License: The repository code is licensed under the [Apache-2.0 License](LICENSE.txt), allowing commercial use and secondary development with retention of the original copyright notice.
Model Weight License: The use of LimiX model weights is subject to a separate Model License:
- Academic research: fully open, no authorization required.
- Commercial use: requires official authorization (see the license application process on the StableAI official website).
8. Third-Party Notices
This project uses the following third-party components, whose usage is governed by their respective licenses:
- PyTorch: BSD-style license
- scikit-learn: BSD license
- flash-attention: BSD 3-Clause License
- Hugging Face Hub: Apache-2.0 License
For the complete list of dependencies and license information, refer to `requirements.txt` and the official documentation of each component.
9. Contact Us
- Official Documentation: https://www.limix.ai/doc/
- GitHub Repository: https://github.com/limix-ldm/LimiX (submit issues for questions)
- Official Website: https://www.stable-ai.ai/ (for commercial cooperation and license inquiries)
- Technical Report: LimiX: Unleashing Structured-Data Modeling Capability for Generalist Intelligence