Update README.md
README.md CHANGED
---
license: mit
---
# A fork of voicekit-team/T-one with ONNX and CUDA support

Solves the problem of extremely slow model inference on some devices and adds the ability to run inference directly on the GPU.

```
!pip install git+https://github.com/NikiPshg/T-one-cuda-onnx.git
```

## Usage example
```
from tone import StreamingCTCPipeline, read_audio, read_example_audio

audio = read_example_audio()  # or read_audio("your_audio.flac")

# device_id selects which GPU to use; if no graphics card is found, the CPU is used
pipeline = StreamingCTCPipeline.from_hugging_face(device_id=0)
print(pipeline.forward_offline(audio))  # offline recognition using ONNX with CUDA
```
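
To check whether GPU inference is actually possible on your machine, you can inspect the execution providers exposed by ONNX Runtime. This is a minimal sketch using the standard `onnxruntime` API; it assumes the pipeline runs its ONNX model through ONNX Runtime, and the provider names below are standard ONNX Runtime identifiers rather than part of this repository.

```
import onnxruntime as ort

# Execution providers available in the installed onnxruntime build.
providers = ort.get_available_providers()
print(providers)

# "CUDAExecutionProvider" must be listed for GPU inference; otherwise the
# pipeline falls back to the CPU, as noted in the usage example above.
if "CUDAExecutionProvider" in providers:
    print("CUDA is available: device_id selects which GPU to run on.")
else:
    print("No CUDA provider found: inference will run on the CPU.")
```

If only `CPUExecutionProvider` appears, make sure a GPU-enabled ONNX Runtime build (for example `onnxruntime-gpu`) and a compatible CUDA installation are present.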