---
language:
- ru
license: cc-by-nc-nd-4.0
task_categories:
- text-to-speech
pretty_name: Balalaika
tags:
- russian
---
# A Data-Centric Framework for Addressing Phonetic and Prosodic Challenges in Russian Speech Generative Models
[Paper](https://huggingface.co/papers/2507.13563) | [Code](https://github.com/mtuciru/balalaika)
Russian speech synthesis presents distinctive challenges, including vowel reduction, consonant devoicing, variable stress patterns, homograph ambiguity, and unnatural intonation. This paper introduces Balalaika, a novel dataset comprising more than 2,000 hours of studio-quality Russian speech with comprehensive textual annotations, including punctuation and stress markings. Experimental results show that models trained on Balalaika significantly outperform those trained on existing datasets in both speech synthesis and enhancement tasks.
---
## Quick Start
```bash
git clone https://github.com/mtuciru/balalaika && cd balalaika
bash create_user_env.sh # sets up venv + pip deps
bash use_meta_500h.sh # pick 100h / 500h / 1000h / 2000h as needed
```
## Table of Contents
1. [Prerequisites](#prerequisites)
2. [Installation](#installation)
3. [Data Preparation](#data-preparation)
- [Quick Setup (Default Parameters)](#quick-setup)
- [Custom Metadata Download](#custom-metadata-download)
4. [Running the Pipeline](#running-the-pipeline)
- [Basic Scenario (Local Processing)](#basic-scenario-local-processing)
5. [Configuration](#configuration)
6. [Environment Variables](#environment-variables)
7. [Important Notes](#important-notes)
8. [Models](#models)
9. [Citation](#citation)
10. [License](#license)
---
## Prerequisites
Ensure you have the following tools installed on your system:
```bash
# ffmpeg: audio/video toolkit; python3-venv: std-lib virtual environments;
# python3-dev: headers for compiling native wheels
sudo apt update && sudo apt install -y \
    ffmpeg \
    python3 \
    python3-pip \
    python3-venv \
    python3-dev \
    python-is-python3

# Install the uv package manager
wget -qO- https://astral.sh/uv/install.sh | sh
```
---
## Installation
Clone the repository and set up the environment:
```bash
git clone https://github.com/mtuciru/balalaika
cd balalaika
# Use this if you want to annotate/modify the dataset
bash create_dev_env.sh
# Use this if you only want to use the pre-annotated dataset
bash create_user_env.sh
```
---
## Data Preparation
### Quick Setup (Default Parameters)
To download and prepare the dataset with default settings, choose one of the preconfigured dataset sizes:
* **100-hour dataset**
```bash
bash use_meta_100h.sh
```
* **500-hour dataset**
```bash
bash use_meta_500h.sh
```
* **1000-hour dataset**
```bash
bash use_meta_1000h.sh
```
* **2000-hour dataset**
```bash
bash use_meta_2000h.sh
```
All metadata can also be downloaded from the [MTUCI organization on Hugging Face](https://huggingface.co/MTUCI).
### Custom Metadata Download
If you already have generated metadata files (`balalaika.parquet` and `balalaika.pkl`), place them in the project root and run:
```bash
bash use_meta.sh
```
---
## Running the Pipeline
### Basic Scenario (Local Processing)
This scenario will:
1. Download datasets
2. Split audio into semantic chunks
3. Transcribe all segments
4. Perform speaker segmentation
5. Apply phonemization
To execute locally, run:
```bash
bash base.sh configs/config.yaml
```
All output metadata will be saved in `podcasts/result.csv`.
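A quick way to confirm the run produced output (the exact column layout of `result.csv` is not described here, so this just prints the first few rows):
```bash
head -n 5 podcasts/result.csv
```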
---
## Configuration
The main configuration file is located at `configs/config.yaml`. This file is organized into several sections, each corresponding to a specific stage of the podcast processing pipeline. Below is a detailed explanation of the key parameters within each section.
---
### Global Parameters
* `podcasts_path`: Specifies the **absolute path** to the directory where all downloaded podcast files are stored and where every subsequent stage (preprocessing, separation, transcription, etc.) reads its input and writes its output.
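For orientation, the entry might look like this; the value is illustrative, and whether the key appears once at the top level or inside each stage block depends on the actual file:
```yaml
podcasts_path: /data/balalaika/podcasts   # illustrative absolute path shared by every stage
```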
---
### `download` Section
This section controls how podcast episodes are downloaded.
* `podcasts_path`: (As explained above) The directory where downloaded podcasts will be saved.
* `episodes_limit`: This sets a **limit on the number of episodes** to download from a single podcast playlist.
* `num_workers`: Specifies the **number of parallel processes** to use for downloading. A higher number can speed up downloads but will consume more system resources.
* `podcasts_urls_file`: This parameter points to the **path of a `.pkl` file** that contains a list of podcast URLs to be downloaded.
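A minimal sketch of the `download` block, assuming each stage lives under its own top-level YAML key; all values and file names are illustrative:
```yaml
download:
  podcasts_path: /data/balalaika/podcasts
  episodes_limit: 50                      # cap per playlist (illustrative)
  num_workers: 4                          # parallel download processes (illustrative)
  podcasts_urls_file: podcasts_urls.pkl   # .pkl file listing podcast URLs (illustrative path)
```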
---
### `preprocess` Section
This section handles the initial processing of downloaded audio files, such as chopping them into smaller segments.
* `podcasts_path`: (As explained above) The directory containing the raw downloaded podcasts that need to be preprocessed.
* `duration`: Defines the **maximum length in seconds** for each audio sample (segment).
* `num_workers`: Specifies the **number of parallel processes** to use during preprocessing.
* `whisper_model`: Specifies the **name or path of the Faster-Whisper compatible model** to be used for initial audio processing.
* `compute_type`: Determines the **computation type** for the Whisper model, affecting performance and memory usage.
* `beam_size`: Sets the **beam size** used during the Whisper model's beam-search decoding; larger values can improve accuracy at the cost of speed.
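An illustrative `preprocess` block; the values are examples, not the defaults shipped with the repository:
```yaml
preprocess:
  podcasts_path: /data/balalaika/podcasts
  duration: 20              # max segment length in seconds (illustrative)
  num_workers: 4
  whisper_model: large-v3   # any Faster-Whisper compatible model name or path (illustrative)
  compute_type: float16     # illustrative
  beam_size: 5              # illustrative
```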
---
### `separation` Section
This section computes quality metrics for each audio segment.
* `podcasts_path`: (As explained above) The directory where the chopped podcasts (from the `preprocess` stage) are located.
* `num_workers`: The **number of parallel processes** to use for audio separation.
* `nisqa_config`: Specifies the **path to the NISQA configuration file**.
* `one_speaker`: A **boolean flag** (`True`/`False`) that, when enabled (`True`), instructs the system to download and process only those audio recordings that should contain a single speaker.
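An illustrative `separation` block; paths and values are examples:
```yaml
separation:
  podcasts_path: /data/balalaika/podcasts
  num_workers: 4
  nisqa_config: /abs/path/to/nisqa_config.yaml   # illustrative location of the NISQA config
  one_speaker: True
```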
---
### `transcription` Section
This section is responsible for converting audio into text.
* `podcasts_path`: (As explained above) The directory containing the processed audio files ready for transcription.
* `model_name`: Specifies the **type of automatic speech recognition (ASR) model** to use. Options include `"ctc"` and `"rnnt"`.
* `num_workers`: The **number of parallel processes per GPU** to use for transcription.
* `with_timestamps`: A **boolean flag** (`True`/`False`) that, when enabled, makes the transcription process generate timestamps for each word or segment. **Timestamps are only supported with the `ctc` model.**
* `lm_path`: Specifies the **path to a language model file (`.bin`)**. A language model can improve transcription accuracy by providing contextual information.
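An illustrative `transcription` block; the values are examples:
```yaml
transcription:
  podcasts_path: /data/balalaika/podcasts
  model_name: ctc                # "ctc" or "rnnt"
  num_workers: 1                 # processes per GPU (illustrative)
  with_timestamps: True          # only supported with the ctc model
  lm_path: /abs/path/to/lm.bin   # illustrative path to the .bin language model
```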
---
### `punctuation` Section
This section focuses on adding proper punctuation to the transcribed text.
* `podcasts_path`: (As explained above) The directory where the transcribed text files are located.
* `model_name`: Specifies the **name of the RUPunct model** to be used for punctuation restoration.
* `num_workers`: The **number of parallel processes per GPU** to use for punctuation.
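An illustrative `punctuation` block; the model name is an example, not necessarily the repository default:
```yaml
punctuation:
  podcasts_path: /data/balalaika/podcasts
  model_name: RUPunct/RUPunct_big   # example RUPunct model
  num_workers: 1
```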
---
### `accent` Section
This section restores stress (accent) marks in the transcribed text.
* `podcasts_path`: (As explained above) The directory containing the relevant podcast files.
* `num_workers`: The **number of parallel processes per GPU** to use for accent processing.
* `model_name`: Specifies the **name of the ruAccent model** to be used.
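An illustrative `accent` block; the model name is an example:
```yaml
accent:
  podcasts_path: /data/balalaika/podcasts
  num_workers: 1
  model_name: turbo   # example ruAccent model name
```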
---
### `phonemizer` Section
This section is responsible for converting text into phonetic representations (phonemes).
* `podcasts_path`: (As explained above) The directory where the text files (from transcription and punctuation stages) are located.
* `num_workers`: The **number of parallel processes per GPU** to use for phonemization.
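The `phonemizer` block only carries the shared keys (values illustrative):
```yaml
phonemizer:
  podcasts_path: /data/balalaika/podcasts
  num_workers: 1
```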
---
### `classification` Section
This section relates to global speaker clustering.
* `podcasts_path`: (As explained above) The directory containing the podcast files relevant for classification.
* `num_workers`: The **number of parallel processes per GPU** to use for classification.
* `threshold`: This is the **speaker classification confidence threshold**. Values typically range from `0.6` to `0.9`. A higher threshold means the model needs to be more confident in its classification to assign a label.
* `model_path`: Specifies the **path to the pretrained speaker classification model** in `.pt` format.
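An illustrative `classification` block; the threshold and paths are examples:
```yaml
classification:
  podcasts_path: /data/balalaika/podcasts
  num_workers: 1
  threshold: 0.75                               # typical range is 0.6-0.9
  model_path: models/voxblink_resnet/model.pt   # illustrative path to the .pt checkpoint
```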
---
### Execution Scripts
Each processing script (`*_yaml.sh` and `*_args.sh`) offers flexibility in how parameters are provided:
* `*_yaml.sh`: These scripts read all necessary parameters directly from the main `config.yaml` file, ensuring consistency across different stages.
* `*_args.sh`: These scripts allow for hardcoded arguments directly within the shell script itself, which can be useful for quick tests or specific overrides without modifying the main configuration file.
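As a hypothetical example (only the `*_yaml.sh` / `*_args.sh` naming pattern comes from the description above; check the repository for the actual script names):
```bash
bash transcription_yaml.sh configs/config.yaml   # parameters read from the config file
bash transcription_args.sh                       # parameters hardcoded inside the script
```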
## Environment Variables
Create a `.env` file in the project root with the following:
```ini
HF_TOKEN=<your_huggingface_token>
YANDEX_KEY=<your_yandex_music_token>
```
* `HF_TOKEN`: Required for speaker count estimation.
* `YANDEX_KEY`: Required for dataset downloads.
---
## Important Notes
- All scripts must be executed from the **project root directory**.
- Paths in the config file must be **absolute**.
- The processing scripts (punctuation, accents) should be run **sequentially**.
- You'll need:
- Yandex Music API key ([How to get one](https://yandex-music.readthedocs.io/en/main/token.html))
- Hugging Face token
## Models
Place all required models under the `models/` directory with the following structure:
```
models/
├── voxblink_resnet/   # Speaker classification model
│   └── ...
└── nisqa_s.tar        # Audio quality assessment model
```
Supported models:
- [NISQA](https://github.com/deepvk/NISQA-s) – Audio quality assessment.
- [GigaAM](https://github.com/salute-developers/GigaAM) – ASR.
- [ruAccent](https://github.com/Den4ikAI/ruaccent) – Accent (stress) restoration.
- [RUPunct](https://huggingface.co/RUPunct/RUPunct_big) – Punctuation restoration.
- [VoxBlink ResNet](https://github.com/wenet-e2e/wespeaker) – Speaker classification.
- [TryIPaG2P](https://github.com/NikiPshg/TryIPaG2P) – Phonemization.
- [Speaker Diarization](https://github.com/pyannote/pyannote-audio) – Speaker diarization.
- [Whisper](https://github.com/SYSTRAN/faster-whisper) – ASR and segmentation.
---
## Citation
If you use this pipeline in your research or production, please cite:
```
@misc{borodin2025datacentricframeworkaddressingphonetic,
title={A Data-Centric Framework for Addressing Phonetic and Prosodic Challenges in Russian Speech Generative Models},
author={Kirill Borodin and Nikita Vasiliev and Vasiliy Kudryavtsev and Maxim Maslov and Mikhail Gorodnichev and Oleg Rogov and Grach Mkrtchian},
year={2025},
eprint={2507.13563},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2507.13563},
}
```
---
## License
### Dataset: Balalaika
- **CC BY-NC-ND 4.0** – non-commercial, no derivatives, research use only.
- Cite the corpus and do **not** redistribute files without written permission.
### Code
- **CC BY-NC-SA 4.0** – You may use, modify, and share the material for academic, non-commercial purposes only.
- You must retain the copyright and license notices; contact the authors for commercial use.
### Third-Party Models & Libraries
Comply with each component's original license in addition to the above:
| Component | License |
|-----------|---------|
| NISQA-s | Apache 2.0 |
| GigaAM | MIT |
| ruAccent | CC BY-NC-ND 4.0 |
| RUPunct | CC BY-NC-ND 4.0 |
| VoxBlink ResNet | Apache 2.0 |
| TryIPaG2P | MIT |
| pyannote-audio | MIT |
| Faster-Whisper | MIT |