Whisper-base Voice Activity Detection (VAD) for Japanese ASMR - ONNX
Model Description
This is a refined Whisper-based Voice Activity Detection (VAD) model that leverages the pre-trained Whisper encoder with a lightweight non-autoregressive decoder for high-precision speech activity detection. While fine-tuned on Japanese ASMR content for optimal performance on soft speech and whispers, the model retains Whisper's robust multilingual foundation, enabling effective speech detection across diverse languages and acoustic conditions. It has been optimized and exported to ONNX format for efficient inference across different platforms.
This work builds upon recent research demonstrating the positive transfer of Whisper's speech representations to VAD tasks, as shown in WhisperSeg and related work.
Key Features
- Architecture: Encoder-Decoder model based on whisper-base
- Frame Resolution: 20ms per frame for precise temporal detection
- Input Duration: Processes 30-second audio chunks
- Output: Frame-level speech/non-speech predictions
- Optimized: ONNX format for cross-platform deployment
- Real-time capable: Fast non-autoregressive inference
Model Architecture Details
- Base Model: OpenAI whisper-base encoder (frozen during training)
- Decoder: 2-layer transformer decoder with 8 attention heads (see the sketch after this list)
- Processing:
- Input: 30-second audio chunks (480,000 samples @ 16kHz)
- Features: 80-channel log-mel spectrogram
- Output: 1500 frame predictions (20ms per frame)
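For orientation, here is a minimal PyTorch sketch of what such a non-autoregressive decoder head could look like on top of the frozen whisper-base encoder (d_model=512, 1500 encoder frames per 30-second chunk). The learned per-frame query embedding and exact layer wiring shown here are assumptions; the repository's actual implementation may differ.
import torch
import torch.nn as nn

class VADDecoderHead(nn.Module):
    """Hypothetical non-autoregressive decoder head on top of a frozen
    Whisper encoder (whisper-base: d_model=512, 1500 encoder frames / 30 s)."""

    def __init__(self, d_model: int = 512, n_heads: int = 8,
                 n_layers: int = 2, n_frames: int = 1500):
        super().__init__()
        # One learned query per output frame (assumption, not confirmed by the repo)
        self.queries = nn.Parameter(torch.randn(n_frames, d_model) * 0.02)
        layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, 1)

    def forward(self, encoder_hidden: torch.Tensor) -> torch.Tensor:
        # encoder_hidden: [batch, 1500, 512] from the frozen Whisper encoder
        batch = encoder_hidden.size(0)
        tgt = self.queries.unsqueeze(0).expand(batch, -1, -1)
        decoded = self.decoder(tgt=tgt, memory=encoder_hidden)
        # [batch, 1500] speech probabilities, one per 20 ms frame
        return torch.sigmoid(self.classifier(decoded)).squeeze(-1)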
Performance
- Frame Duration: 20ms per frame for precise temporal detection
- Processing Speed: ~100x real-time on CPU for single-sample inference (see the timing sketch after this list)
- Batch Processing: Currently limited to batch size of 1 due to ONNX export constraints, but single-sample inference is extremely fast
- Specialized Training: Japanese ASMR and whispered speech
- Generalization: Despite being fine-tuned on Japanese ASMR, the model inherits Whisper's strong multilingual capabilities and can effectively detect speech in various languages and acoustic environments
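The throughput figure can be reproduced with a simple timing loop; a sketch assuming onnxruntime and a dummy input of the documented shape [1, 80, 3000]:
import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Dummy 30-second log-mel input with the documented shape [1, 80, 3000]
features = np.random.randn(1, 80, 3000).astype(np.float32)

# Warm-up run, then average a few timed runs
session.run(None, {input_name: features})
n_runs = 10
start = time.perf_counter()
for _ in range(n_runs):
    session.run(None, {input_name: features})
elapsed = (time.perf_counter() - start) / n_runs

# Each chunk covers 30 s of audio; real-time factor = audio length / wall time
print(f"avg inference: {elapsed * 1000:.1f} ms/chunk, ~{30.0 / elapsed:.0f}x real-time")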
Advantages over Native Whisper VAD
- No hallucinations: Discriminative model cannot generate spurious text
- Much faster: Single forward pass, non-autoregressive inference
- Higher precision: 20ms frame-level temporal resolution vs Whisper's 30s chunks
- Robust: Focal loss training handles speech/silence imbalance effectively
- Lightweight: Decoder adds minimal parameters to base Whisper encoder
Usage
Quick Start with ONNX Runtime
import numpy as np
import onnxruntime as ort
from transformers import WhisperFeatureExtractor
import librosa
# Load model
session = ort.InferenceSession("model.onnx")
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-base")
# Load and preprocess audio
audio, sr = librosa.load("audio.wav", sr=16000)
audio_chunk = audio[:480000] # 30 seconds
# Extract features
inputs = feature_extractor(
    audio_chunk,
    sampling_rate=16000,
    return_tensors="np"
)
# Run inference
outputs = session.run(None, {session.get_inputs()[0].name: inputs.input_features})
predictions = outputs[0] # Shape: [1, 1500] - 1500 frames of 20ms each
# Apply threshold
speech_frames = predictions[0] > 0.5
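Continuing from the snippet above, the boolean frame mask can be grouped into runs and scaled by the 20ms frame duration to obtain time-stamped segments; a minimal sketch (the provided inference.py applies additional smoothing such as minimum speech/silence durations):
# Convert the boolean frame mask into (start, end) segments in seconds.
# Each frame covers 20 ms of audio.
FRAME_DURATION = 0.02

segments = []
start_frame = None
for i, is_speech in enumerate(speech_frames):
    if is_speech and start_frame is None:
        start_frame = i
    elif not is_speech and start_frame is not None:
        segments.append((start_frame * FRAME_DURATION, i * FRAME_DURATION))
        start_frame = None
if start_frame is not None:
    segments.append((start_frame * FRAME_DURATION, len(speech_frames) * FRAME_DURATION))

for start, end in segments:
    print(f"Speech: {start:.2f}s - {end:.2f}s")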
Using the Provided Inference Script
The model repository includes an inference.py script with built-in post-processing (thresholding, minimum speech/silence durations) and streaming support:
from inference import WhisperVADInference
# Initialize model
vad = WhisperVADInference(
    model_path="model.onnx",
    threshold=0.5,              # Speech detection threshold
    min_speech_duration=0.25,   # Minimum speech segment duration
    min_silence_duration=0.1    # Minimum silence between segments
)
# Process audio file
segments = vad.process_audio("audio.wav")
# Segments format: List of (start_time, end_time) tuples
for start, end in segments:
    print(f"Speech detected: {start:.2f}s - {end:.2f}s")
Streaming/Real-time Processing
# Process audio stream in chunks
vad = WhisperVADInference("model.onnx", streaming=True)
for audio_chunk in audio_stream:
    speech_active = vad.process_chunk(audio_chunk)
    if speech_active:
        # Handle speech detection
        pass
Input/Output Specifications
Input
- Audio Format: 16kHz mono audio
- Chunk Size: 30 seconds (480,000 samples)
- Feature Type: 80-channel log-mel spectrogram
- Shape: [1, 80, 3000] (batch size fixed to 1; see note below)
Output
- Type: Frame-level probabilities
- Shape: [1, 1500] (batch size fixed to 1)
- Frame Duration: 20ms per frame
- Range: [0, 1] probability of speech presence
Note on Batch Processing: Currently, the ONNX model only supports batch size of 1 due to export limitations between PyTorch transformers and ONNX. However, single-sample inference is highly optimized and runs extremely fast (~100x real-time on CPU), making sequential processing still very efficient for most use cases.
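The fixed shapes can be confirmed directly from the ONNX graph metadata:
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
for inp in session.get_inputs():
    print("input :", inp.name, inp.shape, inp.type)   # expected shape [1, 80, 3000]
for out in session.get_outputs():
    print("output:", out.name, out.shape, out.type)   # expected shape [1, 1500]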
Training Details
Training Configuration
- Dataset: ~500 hours of Japanese ASMR audio recordings with accurate speech timestamps
- Loss Function: Focal Loss (α=0.25, γ=2.0) for class imbalance (see the sketch after this list)
- Optimizer: AdamW with learning rate 1.5e-3
- Batch Size: 128
- Training Duration: 5 epochs
- Hardware: Single GPU training with mixed precision (bf16)
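For reference, a minimal PyTorch sketch of a binary focal loss with the parameters listed above (α=0.25, γ=2.0); the actual training code may differ in details:
import torch
import torch.nn.functional as F

def binary_focal_loss(logits: torch.Tensor, targets: torch.Tensor,
                      alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """Focal loss for frame-level speech/non-speech labels (illustrative sketch).

    logits:  [batch, 1500] raw frame scores
    targets: [batch, 1500] binary labels as floats (1 = speech)
    """
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                                    # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # class-balancing weight
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()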
Data Processing
- Audio segmented into 30-second chunks
- Frame-level labels generated from word-level timestamps (see the sketch after this list)
- Augmentation: None (relying on Whisper's pre-training robustness)
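A sketch of how word-level timestamps might be rasterized into 1500 frame labels at 20ms resolution; the exact labeling code used for training is not published here, so this is illustrative:
import numpy as np

def timestamps_to_frame_labels(words, n_frames=1500, frame_duration=0.02):
    """Rasterize word-level (start, end) timestamps in seconds into
    frame-level binary labels for one 30-second chunk (illustrative sketch)."""
    labels = np.zeros(n_frames, dtype=np.float32)
    for start, end in words:
        start_frame = max(0, int(start / frame_duration))
        end_frame = min(n_frames, int(np.ceil(end / frame_duration)))
        labels[start_frame:end_frame] = 1.0
    return labels

# Example: two words inside a 30-second chunk
labels = timestamps_to_frame_labels([(0.50, 0.92), (1.10, 1.65)])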
Limitations and Considerations
- Fixed Duration: Model expects 30-second chunks; shorter audio needs padding, and longer audio should be split into 30-second windows (see the sketch after this list)
- Training Specialization: While the model performs well across languages and environments due to Whisper's strong multilingual foundation, it excels particularly at:
- Japanese ASMR content (primary training data)
- Whispered and soft speech detection
- Quiet, intimate audio environments
- Generalization: The model can effectively handle various languages and normal speech volumes, though performance may be slightly better on content similar to the training data
- Background Noise: Performance may degrade in very noisy conditions
- Music/Singing: Primarily trained on speech; may have variable performance on singing
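For audio longer than 30 seconds, the file can be split into consecutive 30-second windows and the per-chunk predictions concatenated; a sketch reusing the session and feature_extractor from the Quick Start example (for whole files, the provided inference.py is the simpler route):
import numpy as np

CHUNK_SAMPLES = 480_000          # 30 s at 16 kHz
FRAME_DURATION = 0.02            # 20 ms per output frame

all_probs = []
for offset in range(0, len(audio), CHUNK_SAMPLES):
    chunk = audio[offset:offset + CHUNK_SAMPLES]
    # The Whisper feature extractor pads shorter (final) chunks up to 30 s
    feats = feature_extractor(chunk, sampling_rate=16000, return_tensors="np")
    probs = session.run(None, {session.get_inputs()[0].name: feats.input_features})[0][0]
    # Keep only the frames that correspond to real (unpadded) audio
    valid_frames = int(np.ceil(len(chunk) / 16000 / FRAME_DURATION))
    all_probs.append(probs[:valid_frames])

probs_full = np.concatenate(all_probs)   # one 20 ms-resolution track for the whole file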
Model Files
- model.onnx: ONNX model file
- model_metadata.json: Model configuration and parameters
- inference.py: Ready-to-use inference script with post-processing
- requirements.txt: Python dependencies
Installation
pip install onnxruntime # or onnxruntime-gpu for GPU support
pip install librosa transformers numpy
Applications
- ASMR Content Processing: Detect whispered speech and subtle vocalizations in ASMR recordings
- Japanese Audio Processing: Optimized for Japanese language content, especially soft speech
- Transcription Pre-processing: Filter out silence before ASR, particularly effective for whispered content
- Audio Indexing: Identify speech segments in long recordings
- Real-time Communication: Detect active speech in calls/meetings
- Audio Analytics: Speech/silence ratio analysis for ASMR and meditation content
- Subtitle Alignment: Accurate timing for subtitles, including whispered dialogue
Citation
If you use this model, please cite:
@misc{whisper-vad,
  title={Whisper-VAD: Whisper-based Voice Activity Detection},
  author={Grider},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/TransWithAI/Whisper-Vad-EncDec-ASMR-onnx}}
}
References
- Original Whisper Paper: Robust Speech Recognition via Large-Scale Weak Supervision
- WhisperSeg: Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection
License
MIT License
Acknowledgments
This model builds upon OpenAI's Whisper model and implements architectural refinements for efficient voice activity detection.