Update README
README.md
---
license: other
language:
- en
arxiv: 2507.08128
tags:
- audio
- reasoning
- audio understanding
- ASR
datasets:
- nvidia/AudioSkills
- nvidia/AF-Chat
- nvidia/AF-Think
- nvidia/LongAudio
pipeline_tag: audio-text-to-text
---

# Model Overview

<div align="center" style="display: flex; justify-content: center; align-items: center; text-align: center;">
  <a href="https://github.com/NVIDIA/audio-flamingo" style="margin-right: 20px; text-decoration: none; display: flex; align-items: center;">
    <img src="static/logo-no-bg.png" alt="Audio Flamingo 3 🔥🚀🔥" width="120">
  </a>
</div>

<div align="center" style="display: flex; justify-content: center; align-items: center; text-align: center;">
  <h2>
    Audio Flamingo 3: Advancing Audio Intelligence with Fully Open Large Audio-Language Models
  </h2>
</div>

<div align="center" style="display: flex; justify-content: center; margin-top: 10px;">
  <a href="https://arxiv.org/abs/2507.08128"><img src="https://img.shields.io/badge/arXiv-2507.08128-AD1C18" style="margin-right: 5px;"></a>
  <a href="https://research.nvidia.com/labs/adlr/AF3/"><img src="https://img.shields.io/badge/Demo page-228B22" style="margin-right: 5px;"></a>
  <a href="https://github.com/NVIDIA/audio-flamingo"><img src='https://img.shields.io/badge/Github-Audio Flamingo 3-9C276A' style="margin-right: 5px;"></a>
  <a href="https://github.com/NVIDIA/audio-flamingo/stargazers"><img src="https://img.shields.io/github/stars/NVIDIA/audio-flamingo.svg?style=social"></a>
</div>

<div align="center" style="display: flex; justify-content: center; margin-top: 10px; flex-wrap: wrap; gap: 5px;">
  <a href="https://huggingface.co/nvidia/audio-flamingo-3">
    <img src="https://img.shields.io/badge/🤗-Checkpoints-ED5A22.svg">
  </a>
  <a href="https://huggingface.co/nvidia/audio-flamingo-3-chat">
    <img src="https://img.shields.io/badge/🤗-Checkpoints (Chat)-ED5A22.svg">
  </a>
  <a href="https://huggingface.co/datasets/nvidia/AudioSkills">
    <img src="https://img.shields.io/badge/🤗-Dataset: AudioSkills--XL-ED5A22.svg">
  </a>
  <a href="https://huggingface.co/datasets/nvidia/LongAudio">
    <img src="https://img.shields.io/badge/🤗-Dataset: LongAudio--XL-ED5A22.svg">
  </a>
  <a href="https://huggingface.co/datasets/nvidia/AF-Chat">
    <img src="https://img.shields.io/badge/🤗-Dataset: AF--Chat-ED5A22.svg">
  </a>
  <a href="https://huggingface.co/datasets/nvidia/AF-Think">
    <img src="https://img.shields.io/badge/🤗-Dataset: AF--Think-ED5A22.svg">
  </a>
</div>

<div align="center" style="display: flex; justify-content: center; margin-top: 10px;">
  <a href="https://huggingface.co/spaces/nvidia/audio-flamingo-3"><img src="https://img.shields.io/badge/🤗-Gradio Demo (7B)-5F9EA0.svg" style="margin-right: 5px;"></a>
</div>

## Description:
Audio Flamingo 3 (AF3) is a fully open, state-of-the-art Large Audio-Language Model (LALM) that advances reasoning and understanding across speech, sounds, and music. AF3 builds on previous work with innovations in:

- Unified audio representation learning (speech, sound, music)
- Flexible, on-demand chain-of-thought reasoning
- Long-context audio comprehension (up to 10 minutes)
- Multi-turn, multi-audio conversational dialogue (AF3-Chat)
- Voice-to-voice interaction (AF3-Chat)

Extensive evaluations confirm AF3's effectiveness, setting a new state of the art on over 20 public audio understanding and reasoning benchmarks.

**This model is for non-commercial research purposes only.**

## Results:
<center><img src="static/af3_radial-1.png" width="400"></center>

## Model Architecture:
Audio Flamingo 3 uses the AF-Whisper unified audio encoder, an MLP-based audio adaptor, a decoder-only LLM backbone (Qwen2.5-7B), and a streaming TTS module (AF3-Chat). It accepts audio inputs up to 10 minutes long.

<center><img src="static/af3_main_diagram-1.png" width="800"></center>

## License / Terms of Use
The model is released under the [NVIDIA OneWay Noncommercial License](static/NVIDIA_OneWay_Noncommercial_License.docx). Portions of the dataset generation are also subject to the [Qwen Research License](https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE) and OpenAI's [Terms of Use](https://openai.com/policies/terms-of-use).

## Deployment Geography
Global.

## Use Case
Intended for researchers and developers to explore:
- Audio question answering and reasoning
- Long-context audio comprehension
- Interactive sound/music design assistants
- Multi-turn (voice) chat

## Release Date
- GitHub (07/10/2025) via https://github.com/NVIDIA/audio-flamingo
- Hugging Face (07/10/2025) via https://huggingface.co/nvidia/audio-flamingo-3

## References:
* [Audio Flamingo 3: Advancing Audio Intelligence with Fully Open Large Audio-Language Models](https://arxiv.org/abs/2507.08128)
* [Project Page](https://github.com/NVIDIA/audio-flamingo)
* [Demo Website](https://research.nvidia.com/labs/adlr/AF3/)
* [Hugging Face](https://huggingface.co/nvidia/audio-flamingo-3)

## Model Architecture:
**Architecture Type:** Transformer
**Network Architecture:** Audio Flamingo 3

AF3 uses:
- AF-Whisper unified audio encoder
- MLP-based audio adaptor
- Decoder-only LLM backbone (Qwen2.5-7B)
- Streaming TTS module (AF3-Chat)

**This model was developed based on [NVILA](https://github.com/NVlabs/VILA/tree/main/scripts/NVILA-Lite) and [Qwen-2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).** <br>

## Input:
- Input Type: Audio, Text <br>
- Input Format: WAV/MP3/FLAC, UTF-8 text <br>
- Input Parameters: Audio is Two-Dimensional (2D) and Text is One-Dimensional (1D) <br>
- Other Properties Related to Input: <br>
  - Max Audio Length: 10 Minutes <br>
  - Max Text Length: 16000 tokens <br>

## Output:
- Output Type: Text (and optional speech) <br>
- Text Format: UTF-8 string <br>
- Output Parameters: One-Dimensional (1D) <br>
- Other Properties Related to Output: <br>
  - Max Text Length: 1024 tokens <br>
  - Speech Format: streaming TTS (text-to-speech) waveform <br>
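
As a practical note on the input constraints above, a typical preprocessing step loads the audio, downmixes to mono, resamples, and truncates to the 10-minute limit. The sketch below assumes a 16 kHz target rate (a Whisper-style convention, not confirmed by this card) and a hypothetical `example.wav`; the repository defines the exact pipeline used by the released checkpoints.

```python
import soundfile as sf
import librosa

# Input-prep sketch for the limits listed above. The 16 kHz mono target is an
# assumption borrowed from Whisper-style encoders; see the AF3 repo for the
# preprocessing actually used by the released checkpoints.
TARGET_SR = 16_000
MAX_SECONDS = 10 * 60  # 10-minute maximum audio length

wav, sr = sf.read("example.wav")      # 2-D (samples, channels) or 1-D mono
if wav.ndim > 1:
    wav = wav.mean(axis=1)            # downmix to mono
wav = librosa.resample(wav, orig_sr=sr, target_sr=TARGET_SR)
wav = wav[: MAX_SECONDS * TARGET_SR]  # truncate to the supported length
print(wav.shape)
```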

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems (A100/H100). By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>

## Software Integration:
**Runtime Engine:** PyTorch / HuggingFace Transformers

**Supported Hardware:**
* NVIDIA Ampere (A100)
* NVIDIA Hopper (H100)

**Supported OS:**
* Linux

## Model Version:
* v3.0

---

## Training and Testing Datasets:

### Training Dataset:
AF3 is trained entirely on open-source audio data, organized into four novel, large-scale collections. For each dataset, we note whether its annotations were collected by humans (Human) or generated automatically with AI models (Automated).

The data collection method noted below applies to all datasets used for training and testing:
Data Collection Method: Human
Labeling Method: See the per-dataset notes below.

#### General Sound:
* [WavCaps](https://github.com/XinhaoMei/WavCaps) (Automated)
* [MACS](https://zenodo.org/records/5114771) (Human)
* [SoundDescs](https://github.com/akoepke/audio-retrieval-benchmark) (Human)
* [Clotho-v2](https://github.com/audio-captioning/clotho-dataset/tree/master) (Human)
* [WavText5K](https://github.com/microsoft/WavText5K) (Human)
* [Clotho-AQA](https://zenodo.org/records/6473207) (Human)
* [Open-AQA](https://github.com/YuanGongND/ltu?tab=readme-ov-file) (Automated)
* [CompA-R](https://github.com/Sreyan88/GAMA) (Automated)
* [Salmonn AQA](https://github.com/bytedance/SALMONN/tree/main) (Automated)
* [Audio Entailment](https://github.com/microsoft/AudioEntailment) (Automated)
* [CompA](https://github.com/Sreyan88/CompA) (Automated)
* [AudioSet](https://research.google.com/audioset/download.html) (Human)
* [YouTube-8M](https://research.google.com/youtube8m/) (Human)
* [FSD50k](https://zenodo.org/records/4060432) (Human)
* [CochlScene](https://github.com/cochlearai/cochlscene) (Human)
* [NonSpeech7K](https://zenodo.org/records/6967442) (Human)
* [Chime-Home](https://code.soundsoftware.ac.uk/projects/chime-home-dataset-annotation-and-baseline-evaluation-code) (Human)
* [Sonyc-UST](https://zenodo.org/records/3966543) (Human)

#### Music:
* [LP-MusicCaps](https://github.com/seungheondoh/lp-music-caps) (Automated)
* [MusicQA](https://github.com/shansongliu/MU-LLaMA?tab=readme-ov-file) (Automated)
* [MusicAVQA](https://gewu-lab.github.io/MUSIC-AVQA/) (Human)
* [MusicBench](https://huggingface.co/datasets/amaai-lab/MusicBench) (Automated)
* [Mu-LLAMA](https://github.com/shansongliu/MU-LLaMA) (Automated)
* [NSynth](https://magenta.tensorflow.org/datasets/nsynth) (Human)
* [FMA](https://github.com/mdeff/fma) (Human)
* [MusDB-HQ](https://zenodo.org/records/3338373) (Human)
* [Music4All](https://sites.google.com/view/contact4music4all) (Human)
* [Million Song Dataset](http://millionsongdataset.com/) (Human)

#### Speech:
* [MSP-Podcast](https://ecs.utdallas.edu/research/researchlabs/msp-lab/MSP-Podcast.html) (Human)
* [JL-Corpus](https://github.com/tli725/JL-Corpus) (Human)
* [MELD](https://github.com/declare-lab/MELD) (Human)
* [Tess](https://www.kaggle.com/datasets/ejlok1/toronto-emotional-speech-set-tess) (Human)
* [OMGEmotion](https://github.com/knowledgetechnologyuhh/OMGEmotionChallenge) (Human)
* [Emov-DB](https://github.com/numediart/EmoV-DB) (Human)
* [LibriSpeech](https://www.openslr.org/12) (Human)
* [SPGISpeech](https://datasets.kensho.com/datasets/spgispeech) (Human)
* [TEDLIUM](https://www.openslr.org/51/) (Human)
* [GigaSpeech](https://github.com/SpeechColab/GigaSpeech) (Human)
* [Common Voice 15](https://huggingface.co/datasets/mozilla-foundation/common_voice_12_0) (Human)
* [VoxPopuli](https://github.com/facebookresearch/voxpopuli) (Human)
* [VoxCeleb2](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox2.html) (Human)
* [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62) (Human)
* [AMI](https://groups.inf.ed.ac.uk/ami/corpus/) (Human)

#### Voice:
* [VoiceAssistant-400K](https://huggingface.co/datasets/gpt-omni/VoiceAssistant-400K) (Automated)

#### Mixed:
* [AudioSkills-XL (ours)](https://huggingface.co/datasets/nvidia/AudioSkills) (Automated)
* [LongAudio-XL (ours)](https://huggingface.co/datasets/nvidia/LongAudio) (Automated)
* [AF-Think (ours)](https://huggingface.co/datasets/nvidia/AF-Think) (Automated)
* [AF-Chat (ours)](https://huggingface.co/datasets/nvidia/AF-Chat) (Automated)

---

### Testing Dataset:
Audio Flamingo 3 is evaluated on the test splits of the following datasets.

Data Collection Method: Human (for all datasets noted below)
Labeling Method: See below

* [ClothoAQA](https://zenodo.org/records/6473207) (Human)
* [MusicAVQA](https://gewu-lab.github.io/MUSIC-AVQA/) (Human)
* [Clotho-v2](https://github.com/audio-captioning/clotho-dataset/tree/master) (Human)
* [CochlScene](https://github.com/cochlearai/cochlscene) (Human)
* [NonSpeech7K](https://zenodo.org/records/6967442) (Human)
* [NSynth](https://magenta.tensorflow.org/datasets/nsynth) (Human)
* [AudioCaps](https://github.com/cdjkim/audiocaps) (Human)
* [US8K](https://urbansounddataset.weebly.com/urbansound8k.html) (Human)
* [GTZAN](https://www.tensorflow.org/datasets/catalog/gtzan) (Human)
* [MMAU](https://github.com/Sakshi113/mmau/tree/main) (Human)
* [MMAR](https://arxiv.org/abs/2505.13032) (Human)
* [Audio Entailment](https://github.com/microsoft/AudioEntailment) (Automated)
* [CompA-R-test](https://github.com/Sreyan88/GAMA) (Automated)
* [MuchoMusic](https://huggingface.co/datasets/yongyizang/RUListening) (Automated)
* [Open-AQA](https://github.com/YuanGongND/ltu?tab=readme-ov-file) (Automated)
* [MusicInstruct](https://huggingface.co/datasets/m-a-p/Music-Instruct) (Automated)
* [MusicQA](https://huggingface.co/datasets/mu-llama/MusicQA) (Automated)
* [CMM Hallucination](https://huggingface.co/datasets/DAMO-NLP-SG/CMM) (Human)
* [IEMOCAP](https://sail.usc.edu/iemocap/) (Human)
* [VoiceBench](https://github.com/MatthewCYM/VoiceBench) (Human)
* [OpenAudioBench](https://huggingface.co/datasets/baichuan-inc/OpenAudioBench) (Human)
* [SEED](https://github.com/BytedanceSpeech/seed-tts-eval) (Human)
* [LibriSpeech](https://www.openslr.org/12) (Human)
* [SPGISpeech](https://datasets.kensho.com/datasets/spgispeech) (Human)
* [TEDLIUM](https://www.openslr.org/51/) (Human)
* [GigaSpeech](https://github.com/SpeechColab/GigaSpeech) (Human)
* [Common Voice 15](https://huggingface.co/datasets/mozilla-foundation/common_voice_12_0) (Human)
* [VoxPopuli](https://github.com/facebookresearch/voxpopuli) (Human)
* [LongAudioBench (ours)](https://huggingface.co/datasets/nvidia/LongAudio) (Automated)
* [AF-Chat-test (ours)](https://huggingface.co/datasets/nvidia/AF-Chat) (Human)

---

## Inference:
**Engine:** HuggingFace Transformers
**Test Hardware:** NVIDIA A100 80 GB
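
A hedged loading sketch for the engine and hardware listed above follows. AF3 ships custom modeling code, so the `trust_remote_code=True` path and the `generate_content` call are illustrative assumptions rather than a documented API; the [GitHub repo](https://github.com/NVIDIA/audio-flamingo) describes the supported usage.

```python
import torch
from transformers import AutoModel

# Hedged inference sketch. Loading via trust_remote_code is an assumption,
# and generate_content below is a hypothetical call, not a documented API.
# See https://github.com/NVIDIA/audio-flamingo for the supported path.
model = AutoModel.from_pretrained(
    "nvidia/audio-flamingo-3",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # A100/H100-class GPU recommended (see above)
)

prompt = "Describe the events in this recording."
# Hypothetical audio+text query; "example.wav" is a placeholder file.
response = model.generate_content([prompt, {"audio": "example.wav"}])
print(response)
```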

---

## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

---

## Acknowledgements
Built with Qwen, NVILA, and the open audio-ML community.