Update README.md #77
by mmendoza - opened

README.md CHANGED
@@ -51,7 +51,7 @@ Disclaimer: The team releasing BART did not write a model card for this model so
 
 ## Model description
 
-BART is a transformer encoder-
+BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
 
 BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). This particular checkpoint has been fine-tuned on CNN Daily Mail, a large collection of text-summary pairs.
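The pre-training objective the added paragraph describes (corrupt text with a noising function, then train the model to reconstruct the original) can be illustrated with a toy sketch. The span-infilling function below is an illustrative assumption, not BART's actual noising code; BART's real recipe combines several schemes (token masking, deletion, infilling, sentence permutation, rotation).

```python
import random

def noise(tokens, mask_ratio=0.3, mask_token="<mask>"):
    """Toy BART-style text infilling: replace one random contiguous
    span of tokens with a single mask token. Purely illustrative."""
    n = max(1, int(len(tokens) * mask_ratio))  # span length to corrupt
    start = random.randrange(len(tokens) - n + 1)
    return tokens[:start] + [mask_token] + tokens[start + n:]

tokens = "the quick brown fox jumps over the lazy dog".split()
corrupted = noise(tokens)
# During pre-training, the seq2seq model sees `corrupted` as encoder
# input and is trained to regenerate the original `tokens` sequence.
```

The reconstruction target is always the uncorrupted text, which is what makes the objective usable for generation tasks like the CNN Daily Mail summarization fine-tuning mentioned above.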