---
license: cc-by-nc-sa-4.0
language:
- ru
task_categories:
- text-to-speech
pretty_name: Balalaika
---

# A Data-Centric Framework for Addressing Phonetic and Prosodic Challenges in Russian Speech Generative Models

Russian speech synthesis presents distinctive challenges, including vowel reduction, consonant devoicing, variable stress patterns, homograph ambiguity, and unnatural intonation. This paper introduces Balalaika, a novel dataset comprising more than 2,000 hours of studio-quality Russian speech with comprehensive textual annotations, including punctuation and stress markings. Experimental results show that models trained on Balalaika significantly outperform those trained on existing datasets in both speech synthesis and enhancement tasks.

---
## Quick Start 👟

```bash
git clone https://github.com/mtuciru/balalaika && cd balalaika
bash create_user_env.sh   # sets up venv + pip deps
bash use_meta_500h.sh     # pick 100h / 500h / 1000h / 2000h as needed
```
## Table of Contents

1. [Prerequisites](#prerequisites)
2. [Installation](#installation)
3. [Data Preparation](#data-preparation)
   - [Quick Setup (Default Parameters)](#quick-setup-default-parameters)
   - [Custom Metadata Download](#custom-metadata-download)
4. [Running the Pipeline](#running-the-pipeline)
   - [Basic Scenario (Local Processing)](#basic-scenario-local-processing)
5. [Configuration](#configuration)
6. [Environment Variables](#environment-variables)
7. [Important Notes](#important-notes)
8. [Models](#models)
9. [Citation](#citation)
<!-- [Acknowledgments](#acknowledgments) -->
10. [License](#license)

---
## Prerequisites

Ensure you have the following tools installed on your system:

```bash
# ffmpeg (audio/video toolkit), Python 3 with pip, venv support, and build headers
sudo apt update && sudo apt install -y \
    ffmpeg \
    python3 \
    python3-pip \
    python3-venv \
    python3-dev \
    python-is-python3
# uv (Python package installer)
wget -qO- https://astral.sh/uv/install.sh | sh
```

---
## Installation

Clone the repository and set up the environment:

```bash
git clone https://github.com/mtuciru/balalaika
cd balalaika
# Use this if you want to annotate/modify the dataset
bash create_dev_env.sh
# Use this if you only want to use the pre-annotated dataset
bash create_user_env.sh
```

---
## Data Preparation

### Quick Setup (Default Parameters)

To download and prepare the dataset with default settings, choose one of the preconfigured dataset sizes:

* **100-hour dataset**
  ```bash
  bash use_meta_100h.sh
  ```

* **500-hour dataset**
  ```bash
  bash use_meta_500h.sh
  ```

* **1000-hour dataset**
  ```bash
  bash use_meta_1000h.sh
  ```

* **2000-hour dataset**
  ```bash
  bash use_meta_2000h.sh
  ```

All metadata can also be downloaded from [Hugging Face – MTUCI](https://huggingface.co/MTUCI).

### Custom Metadata Download

If you already have generated metadata files (`balalaika.parquet` and `balalaika.pkl`), place them in the project root and run:

```bash
bash use_meta.sh
```
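To sanity-check the metadata before launching the pipeline, the Parquet file can be inspected with pandas. This is a minimal sketch, assuming `pandas` and `pyarrow` are available in the environment; no particular column layout is assumed:

```python
import pandas as pd

# Inspect the dataset metadata placed in the project root.
meta = pd.read_parquet("balalaika.parquet")

# Print the number of records and the columns the metadata provides.
print(f"{len(meta)} records")
print(list(meta.columns))
print(meta.head())
```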

---

## Running the Pipeline

### Basic Scenario (Local Processing)

This scenario will:

1. Download datasets
2. Split audio into semantic chunks
3. Transcribe all segments
4. Perform speaker segmentation
5. Apply phonemization

To execute locally, run:

```bash
bash base.sh configs/config.yaml
```

All output metadata will be saved in `podcasts/result.csv`.

---
## Configuration

The main configuration file is located at `configs/config.yaml`. It is organized into sections, each corresponding to a stage of the podcast processing pipeline. The key parameters of each section are explained below.

---

### Global Parameters

* `podcasts_path`: Specifies the **absolute path** to the directory where all downloaded podcast files are stored and where subsequent stages (preprocessing, separation, transcription, etc.) read their input and write their output.

---
### `download` Section

This section controls how podcast episodes are downloaded.

* `podcasts_path`: (As explained above) The directory where downloaded podcasts will be saved.
* `episodes_limit`: Sets a **limit on the number of episodes** to download from a single podcast playlist.
* `num_workers`: The **number of parallel processes** to use for downloading. A higher number can speed up downloads but will consume more system resources.
* `podcasts_urls_file`: The **path to a `.pkl` file** containing the list of podcast URLs to download.

---
### `preprocess` Section

This section handles the initial processing of downloaded audio files, such as chopping them into smaller segments.

* `podcasts_path`: (As explained above) The directory containing the raw downloaded podcasts that need to be preprocessed.
* `duration`: The **maximum length in seconds** of each audio sample (segment).
* `num_workers`: The **number of parallel processes** to use during preprocessing.
* `whisper_model`: The **name or path of the Faster-Whisper compatible model** used for initial audio processing.
* `compute_type`: The **computation type** (precision) for the Whisper model, affecting performance and memory usage.
* `beam_size`: The **beam width** used by the Whisper model's beam-search decoding; larger values can improve accuracy at the cost of speed (see the sketch below).
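For orientation, the sketch below shows a standalone Faster-Whisper call wired through the same `whisper_model`, `compute_type`, and `beam_size` knobs. The model name, audio path, and values are illustrative placeholders; the pipeline's own preprocessing wrapper may differ:

```python
from faster_whisper import WhisperModel

# Illustrative values; in the pipeline they come from the `preprocess`
# section of configs/config.yaml (whisper_model, compute_type, beam_size).
model = WhisperModel("large-v3", device="auto", compute_type="int8")

segments, info = model.transcribe("example.wav", beam_size=5, language="ru")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```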

---

### `separation` Section

This section computes quality metrics for each audio segment.

* `podcasts_path`: (As explained above) The directory where the chopped podcasts (from the `preprocess` stage) are located.
* `num_workers`: The **number of parallel processes** to use for audio separation.
* `nisqa_config`: The **path to the NISQA configuration file**.
* `one_speaker`: A **boolean flag** (`True`/`False`) that, when enabled (`True`), instructs the system to download and process only those audio recordings that should contain a single speaker.

---
### `transcription` Section

This section is responsible for converting audio into text.

* `podcasts_path`: (As explained above) The directory containing the processed audio files ready for transcription.
* `model_name`: The **type of automatic speech recognition (ASR) model** to use. Options typically include `"ctc"` or `"rnnt"`.
* `num_workers`: The **number of parallel processes per GPU** to use for transcription.
* `with_timestamps`: A **boolean flag** (`True`/`False`) that, when enabled, makes the transcription process generate timestamps for each word or segment. **Timestamps are only supported with the `ctc` model.**
* `lm_path`: The **path to a language model file (`.bin`)**. A language model can improve transcription accuracy by providing contextual information.

---
### `punctuation` Section

This section focuses on adding proper punctuation to the transcribed text.

* `podcasts_path`: (As explained above) The directory where the transcribed text files are located.
* `model_name`: The **name of the RUPunct model** to be used for punctuation restoration.
* `num_workers`: The **number of parallel processes per GPU** to use for punctuation restoration.

---
### `accent` Section

This section restores stress (accent) marks in the transcribed text.

* `podcasts_path`: (As explained above) The directory containing the relevant podcast files.
* `num_workers`: The **number of parallel processes per GPU** to use for accent restoration.
* `model_name`: The **name of the ruAccent model** to be used.

---
### `phonemizer` Section

This section is responsible for converting text into phonetic representations (phonemes).

* `podcasts_path`: (As explained above) The directory where the text files (from the transcription and punctuation stages) are located.
* `num_workers`: The **number of parallel processes per GPU** to use for phonemization.

---
### `classification` Section

This section handles global speaker clustering.

* `podcasts_path`: (As explained above) The directory containing the podcast files relevant for classification.
* `num_workers`: The **number of parallel processes per GPU** to use for classification.
* `threshold`: The **speaker classification confidence threshold**. Values typically range from `0.6` to `0.9`; a higher threshold means the model must be more confident before assigning a speaker label.
* `model_path`: The **path to the pretrained speaker classification model** in `.pt` format.
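Putting the sections above together, here is a minimal sketch of reading this configuration from Python, assuming PyYAML is installed. The section and key names follow the descriptions in this README; verify them against your actual `configs/config.yaml`:

```python
import yaml

# Load the pipeline configuration described in the sections above.
with open("configs/config.yaml") as f:
    cfg = yaml.safe_load(f)

# Key names follow the README; treat them as assumptions and adjust
# if your config.yaml is structured differently.
print("Download workers:  ", cfg["download"]["num_workers"])
print("Whisper model:     ", cfg["preprocess"]["whisper_model"])
print("Speaker threshold: ", cfg["classification"]["threshold"])
```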

---

### Execution Scripts

Each processing script (`*_yaml.sh` and `*_args.sh`) offers flexibility in how parameters are provided:

* `*_yaml.sh`: These scripts read all necessary parameters directly from the main `config.yaml` file, ensuring consistency across different stages.
* `*_args.sh`: These scripts allow for hardcoded arguments directly within the shell script itself, which can be useful for quick tests or specific overrides without modifying the main configuration file.
## Environment Variables

Create a `.env` file in the project root with the following:

```ini
HF_TOKEN=<your_huggingface_token>
YANDEX_KEY=<your_yandex_music_token>
```

* `HF_TOKEN`: Required for speaker count estimation.
* `YANDEX_KEY`: Required for dataset downloads.
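To verify the `.env` file is picked up before launching the pipeline, a minimal sketch using `python-dotenv` (an extra dependency assumed here, not necessarily used by the pipeline itself):

```python
import os
from dotenv import load_dotenv

# Read HF_TOKEN and YANDEX_KEY from the .env file in the project root.
load_dotenv()

for name in ("HF_TOKEN", "YANDEX_KEY"):
    print(f"{name}: {'set' if os.getenv(name) else 'MISSING'}")
```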

---

## Important Notes

- All scripts must be executed from the **project root directory**.
- Paths in the config file must be **absolute**.
- The processing scripts (punctuation, accents) should be run **sequentially**.
- You'll need:
  - A Yandex Music API key ([how to get one](https://yandex-music.readthedocs.io/en/main/token.html))
  - A Hugging Face token
## Models

Place all required models under the `models/` directory with the following structure:

```
models/
├── voxblink_resnet/   # Speaker classification model
│   └── ...
└── nisqa_s.tar        # Audio quality assessment model
```
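A quick way to confirm this layout before running the pipeline is a small check script; the sketch below only assumes the two paths shown in the tree above:

```python
from pathlib import Path

# Expected model locations, matching the directory tree above.
required = [Path("models/voxblink_resnet"), Path("models/nisqa_s.tar")]

missing = [path for path in required if not path.exists()]
if missing:
    print("Missing:", ", ".join(str(path) for path in missing))
else:
    print("All required models are in place.")
```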

Supported models:

- [NISQA](https://github.com/deepvk/NISQA-s) – Audio quality assessment.
- [GigaAM](https://github.com/salute-developers/GigaAM) – ASR.
- [ruAccent](https://github.com/Den4ikAI/ruaccent) – Accent (stress) restoration.
- [RUPunct](https://huggingface.co/RUPunct/RUPunct_big) – Punctuation restoration.
- [VoxBlink ResNet](https://github.com/wenet-e2e/wespeaker) – Speaker classification.
- [TryIPaG2P](https://github.com/NikiPshg/TryIPaG2P) – Phonemization.
- [pyannote-audio](https://github.com/pyannote/pyannote-audio) – Speaker diarization.
- [Faster-Whisper](https://github.com/SYSTRAN/faster-whisper) – ASR and segmentation.

---
## Citation

If you use this pipeline in your research or production, please cite:

```bibtex
@misc{borodin2025datacentricframeworkaddressingphonetic,
  title={A Data-Centric Framework for Addressing Phonetic and Prosodic Challenges in Russian Speech Generative Models},
  author={Kirill Borodin and Nikita Vasiliev and Vasiliy Kudryavtsev and Maxim Maslov and Mikhail Gorodnichev and Oleg Rogov and Grach Mkrtchian},
  year={2025},
  eprint={2507.13563},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2507.13563},
}
```

---

<!-- ## References and Acknowledgements

Thanks to all the developers and contributors who made this project possible.

<a href="https://github.com/mtuciru/balalaika/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=yeongpin/balalaikap&preview=true&max=&columns=" />
</a> -->
## License

### Dataset: Balalaika
- **CC BY-NC-ND 4.0** – non-commercial, no derivatives, research use only.
- Cite the corpus and do **not** redistribute files without written permission.

### Code
- **CC BY-NC-SA 4.0** – you may use, modify, and share the material for academic, non-commercial purposes only.
- You must retain the copyright and license notices; contact the authors for commercial use.
### Third-Party Models & Libraries

Comply with each component’s original license in addition to the above:

| Component | License |
|-----------|---------|
| NISQA-s | Apache 2.0 |
| GigaAM | MIT |
| ruAccent | CC BY-NC-ND 4.0 |
| RUPunct | CC BY-NC-ND 4.0 |
| VoxBlink ResNet | Apache 2.0 |
| TryIPaG2P | MIT |
| pyannote-audio | MIT |
| Faster-Whisper | MIT |