Audio-Text-to-Text · Transformers · Safetensors
Commit 0a1127a (verified) by leduckhai and nielsr (HF Staff) · 1 parent: 0fba840

Improve model card: update pipeline tag, add library name, and link to Github repo (#2)


- Improve model card: update pipeline tag, add library name, and link to Github repo (bd726e618aabd45c5ef7165f900188869283cfad)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1):
1. README.md (+19, -15)
README.md CHANGED
@@ -1,5 +1,4 @@
 ---
-license: mit
 datasets:
 - leduckhai/Sentiment-Reasoning
 language:
@@ -8,7 +7,9 @@ language:
 - zh
 - de
 - fr
-pipeline_tag: text-generation
+license: mit
+pipeline_tag: audio-text-to-text
+library_name: transformers
 ---
 
 # Sentiment Reasoning for Healthcare
@@ -26,21 +27,21 @@ pipeline_tag: text-generation
 </p>
 <p align="center"><em>Sentiment Reasoning pipeline</em></p>
 
-* **Abstract:**
-Transparency in AI healthcare decision-making is crucial. By incorporating rationales to explain reason for each predicted label, users could understand Large Language Models (LLMs)’s reasoning to make better decision. In this work, we introduce a new task - **Sentiment Reasoning** - for both speech and text modalities, and our proposed multimodal multitask framework and **the world's largest multimodal sentiment analysis dataset**. Sentiment Reasoning is an auxiliary task in sentiment analysis where the model predicts both the sentiment label and generates the rationale behind it based on the input transcript. Our study conducted on both human transcripts and Automatic Speech Recognition (ASR) transcripts shows that Sentiment Reasoning helps improve model transparency by providing rationale for model prediction with quality semantically comparable to humans while also improving model's classification performance (**+2% increase in both accuracy and macro-F1**) via rationale-augmented fine-tuning. Also, no significant difference in the semantic quality of generated rationales between human and ASR transcripts. All code, data (**five languages - Vietnamese, English, Chinese, German, and French**) and models are published online.
+* **Abstract:**
+Transparency in AI healthcare decision-making is crucial. By incorporating rationales to explain reason for each predicted label, users could understand Large Language Models (LLMs)’s reasoning to make better decision. In this work, we introduce a new task - **Sentiment Reasoning** - for both speech and text modalities, and our proposed multimodal multitask framework and **the world's largest multimodal sentiment analysis dataset**. Sentiment Reasoning is an auxiliary task in sentiment analysis where the model predicts both the sentiment label and generates the rationale behind it based on the input transcript. Our study conducted on both human transcripts and Automatic Speech Recognition (ASR) transcripts shows that Sentiment Reasoning helps improve model transparency by providing rationale for model prediction with quality semantically comparable to humans while also improving model's classification performance (**+2% increase in both accuracy and macro-F1**) via rationale-augmented fine-tuning. Also, no significant difference in the semantic quality of generated rationales between human and ASR transcripts. All code, data (**five languages - Vietnamese, English, Chinese, German, and French**) and models are published online.
 
-* **Citation:**
-Please cite this paper: [https://arxiv.org/abs/2407.21054](https://arxiv.org/abs/2407.21054)
+* **Citation:**
+Please cite this paper: [https://arxiv.org/abs/2407.21054](https://arxiv.org/abs/2407.21054)
 
-``` bibtex
-@misc{Sentiment_Reasoning,
-    title={Sentiment Reasoning for Healthcare},
-    author={Khai-Nguyen Nguyen and Khai Le-Duc and Bach Phan Tat and Duy Le and Long Vo-Dang and Truong-Son Hy},
-    year={2024},
-    eprint={2407.21054},
-    url={https://arxiv.org/abs/2407.21054},
-}
-```
+```bibtex
+@misc{Sentiment_Reasoning,
+    title={Sentiment Reasoning for Healthcare},
+    author={Khai-Nguyen Nguyen and Khai Le-Duc and Bach Phan Tat and Duy Le and Long Vo-Dang and Truong-Son Hy},
+    year={2024},
+    eprint={2407.21054},
+    url={https://arxiv.org/abs/2407.21054},
+}
+```
 
 This repository contains scripts for automatic speech recognition (ASR) and sentiment reasoning using cascaded sequence-to-sequence (seq2seq) audio-language models. The provided scripts cover model preparation, training, inference, and evaluation processes, based on the dataset in the paper.
 
@@ -65,6 +66,9 @@ This repository contains scripts for automatic speech recognition (ASR) and sent
 </p>
 <p align="center"><em>Sample data format used in Sentiment Reasoning dataset</em></p>
 
+## Use
+
+We provide the simple generation process for using our model. For more details, you could refer to [Github](https://github.com/leduckhai/Sentiment-Reasoning)
 
 ## Contact
 
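
The "## Use" section added above defers to the Github repo for the full generation scripts. As a rough illustration of the cascaded setup the README describes (an ASR transcript fed to a language model that emits a sentiment label plus a rationale), a minimal sketch with the transformers library might look like the following; the ASR checkpoint, the placeholder reasoning-model ID, and the prompt format are assumptions for illustration, not the authors' actual configuration.

```python
# Minimal sketch of a cascaded ASR -> sentiment-reasoning generation step.
# Assumptions: "openai/whisper-small" stands in for the ASR stage, and
# "placeholder/sentiment-reasoning-model" is a hypothetical ID for the
# fine-tuned checkpoint this card ships; see the Github repo for the
# authors' own model preparation, training, and inference scripts.
from transformers import pipeline, AutoModelForSeq2SeqLM, AutoTokenizer

# Stage 1: transcribe the audio input with any Whisper-style ASR checkpoint.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
transcript = asr("consultation_clip.wav")["text"]

# Stage 2: generate the sentiment label and its rationale from the transcript.
model_id = "placeholder/sentiment-reasoning-model"  # hypothetical checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The prompt wording is an assumption; the repo defines the real input format.
prompt = f"Classify the sentiment of the following transcript and explain why:\n{transcript}"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Whether the published checkpoint is encoder-decoder or decoder-only is not stated here; if it is a causal LM, `AutoModelForCausalLM` would replace the seq2seq class in the sketch above.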