Whisper MLX Model

This repository contains a CoreML- and MLX-optimized Whisper model for efficient speech recognition on Apple devices.

Model Components

  • CoreML encoder for efficient inference on the Apple Neural Engine
  • MLX decoder for fast processing with Apple Silicon optimizations
  • HuggingFace Whisper processor and model template (see the example below)
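
If you only need the bundled Whisper processor (feature extractor and tokenizer), it can be loaded with the standard Hugging Face transformers API. The snippet below is a minimal sketch and assumes the repository ships a standard Whisper processor configuration; only the repo ID comes from this card.

from transformers import WhisperProcessor

# Load the feature extractor + tokenizer bundled with this repository.
# Assumes a standard Whisper processor config is present in the repo.
processor = WhisperProcessor.from_pretrained("TheStageAI/whisper-medium")

# Whisper expects 16 kHz mono audio; the processor turns it into
# log-mel features for the encoder.
# features = processor(audio, sampling_rate=16000, return_tensors="np")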

Usage

from asr_streaming import StreamingConfig, StreamingBackend

# Create config with HuggingFace repository
config = StreamingConfig(
    use_huggingface=True,
    huggingface_repo="TheStageAI/whisper-medium"
)

# Initialize the backend
backend = StreamingBackend(config)

# Now use the backend as normal
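
If you want to pre-fetch or inspect the model files (CoreML encoder, MLX decoder weights, processor config), the standard huggingface_hub client can download the whole repository into the local cache. This is a generic sketch using huggingface_hub, not part of the asr_streaming API.

from huggingface_hub import snapshot_download

# Download every file in the repository (CoreML encoder, MLX decoder
# weights, processor files) into the local Hugging Face cache and
# return the path to the snapshot.
local_dir = snapshot_download(repo_id="TheStageAI/whisper-medium")
print(local_dir)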