Update README.md
README.md CHANGED
@@ -15,7 +15,7 @@ Apart from the special token "<|endoftext|>" for text ending in the OpenAI GPT-2
The model and tokenizer were trained using the Hugging Face libraries with an Nvidia Tesla V100 GPU with 16GB memory on Google Colab servers.
## Authors
-The
+The model was trained by Alejandro Oñate Latorre (Spain) and Jorge Ortiz Fuentes (Chile), members of -Deep ESP-, an open-source community on Natural Language Processing in Spanish (https://t.me/joinchat/VoEp1bPrDYEexc6h).
Thanks to the members of the community who collaborated with funding for the initial tests.
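
For context, a minimal sketch of loading the model and tokenizer described in this README with the Hugging Face transformers library. The repository id `DeepESP/gpt2-spanish` and the generation settings are assumptions, not stated in this commit:

```python
# Sketch only: load the GPT-2 Spanish model and tokenizer via transformers.
# The repo id "DeepESP/gpt2-spanish" is an assumption, not part of this diff.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("DeepESP/gpt2-spanish")
model = GPT2LMHeadModel.from_pretrained("DeepESP/gpt2-spanish")

# Generate a short continuation for a Spanish prompt.
inputs = tokenizer("La inteligencia artificial", return_tensors="pt")
outputs = model.generate(**inputs, max_length=40, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```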