Create README.md
README.md (ADDED)
---
annotations_creators: []
language: []
language_creators: []
license: []
multilinguality: []
pretty_name: Mel spectrograms of music
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- audio
- spectrograms
task_categories:
- image-to-image
task_ids: []
---

This dataset contains over 20,000 256x256 mel spectrograms of 5-second samples of music from my Spotify "liked" playlist. The code to convert from audio to spectrogram (and vice versa) can be found at https://github.com/teticio/audio-diffusion, along with scripts to train and run inference using Denoising Diffusion Probabilistic Models (DDPMs).
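A minimal sketch of loading the images with the Hugging Face `datasets` library; the dataset identifier below is a placeholder for this repository's ID, and the `image` column name is an assumption:

```python
# Minimal sketch: load the spectrogram images with the Hugging Face `datasets` library.
# "<user>/<this-dataset>" is a placeholder for the repository ID, and the "image"
# column name is an assumption about the dataset's features.
from datasets import load_dataset

ds = load_dataset("<user>/<this-dataset>", split="train")
example = ds[0]
example["image"].show()  # assumed: each example holds a 256x256 grayscale PIL image
```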
The spectrograms were generated with the following parameters:

```
x_res = 1024
y_res = 1024
sample_rate = 44100
n_fft = 2048
hop_length = 512
```
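For reference, here is a minimal sketch of the audio-to-spectrogram conversion using librosa and Pillow. It is not the repository's own conversion code (which also handles the inverse transform and exact image sizing); the file path, `n_mels=256`, and the 5-second slice length are assumptions.

```python
# Minimal sketch (under assumed parameters) of converting a 5-second audio slice
# into a mel spectrogram image with librosa and Pillow. The repository's own code
# controls the exact image width; this sketch does not.
import numpy as np
import librosa
from PIL import Image

sample_rate = 44100
n_fft = 2048
hop_length = 512
n_mels = 256  # assumed: one mel bin per pixel row of the output image

# Load a 5-second slice of audio (placeholder path).
y, _ = librosa.load("example.mp3", sr=sample_rate, duration=5.0)

# Power mel spectrogram, converted to decibels.
S = librosa.feature.melspectrogram(
    y=y, sr=sample_rate, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
)
S_db = librosa.power_to_db(S, ref=np.max)

# Rescale the dB range to 8-bit grayscale and save as an image.
img = (255 * (S_db - S_db.min()) / (S_db.max() - S_db.min())).astype(np.uint8)
Image.fromarray(img).save("spectrogram.png")
```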