Update README.md

README.md CHANGED
@@ -55,7 +55,7 @@ The dataset features code-switched speech, combining Catalan (ca) and Spanish (es)
 ## Dataset Structure
 
 ### Data Instances
-```
+```
 {
 'audio':
 {
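As context for the hunk above, here is a minimal sketch of loading an instance of a corpus structured like this with the `datasets` library. It is not taken from the card: the repository id and split name are placeholders (the excerpt names only the source dataset, not this corpus's own id), and the field layout assumes the standard `Audio` feature suggested by the instance above.

```python
# Minimal sketch, not from the dataset card: the repository id and split
# are placeholders, and the exact field layout is an assumption.
from datasets import load_dataset

ds = load_dataset("org/cs-ca-es-corpus", split="train")  # hypothetical id

sample = ds[0]
audio = sample["audio"]  # standard Audio feature: {'path', 'array', 'sampling_rate'}
print(audio["sampling_rate"], len(audio["array"]))
```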
@@ -90,8 +90,11 @@ This corpus specifically focuses on Catalan code-switched with Spanish, a lingui
 This task is particularly low-resourced because, besides being a variety of the Catalan language, it further restricts the available data by incorporating code-switching, a complex and less-explored aspect of language use.
 
 ### Source Data
-### Initial Data Collection
 This corpus was extracted from the original [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) dataset, which includes 240 hours of Catalan speech from broadcast material.
+
+### Data Collection and Processing
+
+
 To extract the code-switched (CS) part, we used BERT-based language detection: [Google’s multilingual BERT](https://arxiv.org/pdf/1810.04805) was fine-tuned for token classification using a synthetic corpus of code-switched dialogues in Catalan and Spanish.
 During fine-tuning, each word was labeled with its corresponding language token.
 Once trained, the model was applied to the transcriptions of the original TV3 Parla dataset, where it performed token-level language classification.
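The detection step described in this hunk lends itself to a short illustration. The sketch below is not the authors' published pipeline: the fine-tuned checkpoint is not named in the card, so the model id, the `ca`/`es` label names, and the selection rule at the end are all assumptions.

```python
# Sketch of token-level language ID with a fine-tuned multilingual BERT.
# Assumptions (not stated in the card): the checkpoint id, per-word labels
# named 'ca' and 'es', and the keep-if-both-languages selection rule.
from transformers import pipeline

langid = pipeline(
    "token-classification",
    model="org/mbert-ca-es-langid",  # hypothetical fine-tuned mBERT checkpoint
    aggregation_strategy="simple",   # merge sub-word pieces into word-level labels
)

text = "Vam anar al mercat i después compramos fruta."
tokens = langid(text)
for tok in tokens:
    print(tok["word"], tok["entity_group"], round(tok["score"], 3))

# One plausible rule for building the CS split: keep a transcription when
# both languages are detected in it.
labels = {tok["entity_group"] for tok in tokens}
is_code_switched = {"ca", "es"} <= labels
```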