
When will transformers support Minimax-M2?

#11 opened by zx-modelcloud

Currently it seems that local deployment is only possible through SGLang or vLLM.
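For reference, a minimal sketch of both deployment routes, assuming the model ID is `MiniMaxAI/MiniMax-M2` (check the actual repo name) and that your vLLM/SGLang versions already ship support for the `minimax_m2` architecture; since the repo uses `custom_code`, `--trust-remote-code` is likely required:

```shell
# Option 1: serve with vLLM's OpenAI-compatible server
vllm serve MiniMaxAI/MiniMax-M2 \
    --trust-remote-code \
    --tensor-parallel-size 8

# Option 2: serve with SGLang's launch script
python -m sglang.launch_server \
    --model-path MiniMaxAI/MiniMax-M2 \
    --trust-remote-code \
    --tp 8 \
    --port 30000
```

The tensor-parallel degree (8 here) is an assumption; adjust it to the number of GPUs available, and note the fp8 checkpoint still requires hardware with fp8 support.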

MiniMax org

Support for transformers is in progress.
