example_title: "Long-s piano ad"
---

# Swedish OCR correction

<!-- Provide a quick summary of what the model is/does. -->

This model corrects OCR errors in Swedish text.

## Try it!

- On short texts in the inference widget to the right ->
- On files or longer texts in the [demo](https://huggingface.co/spaces/viklofg/swedish-ocr-correction-demo)

## Model Description

This model is a fine-tuned version of [byt5-small](https://huggingface.co/google/byt5-small), a character-level multilingual transformer.
The fine-tuning data consists of OCR samples from Swedish newspapers and historical documents.
The model works on texts up to 128 UTF-8 bytes (see [Length limit](#length-limit)).

<!-- ### Model Description-->

- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]-->

## Training Data

The base model byt5 is pre-trained on [mc4](https://huggingface.co/datasets/mc4). This fine-tuned version is further trained on:

- Swedish newspapers from 1818 to 2018. Parts of the dataset are available from Språkbanken Text: [Swedish newspapers 1818-1870](https://spraakbanken.gu.se/en/resources/svenska-tidningar-1818-1870), [Swedish newspapers 1871-1906](https://spraakbanken.gu.se/resurser/svenska-tidningar-1871-1906).
- Swedish blackletter documents from 1626 to 1816, available from Språkbanken Text: [Swedish fraktur 1626-1816](https://spraakbanken.gu.se/resurser/svensk-fraktur-1626-1816)

This data includes characters no longer used in Swedish, such as the long s (ſ) and the eszett ligature (ß), so the model should be able to handle texts containing these characters.
See, for example, the example titled _Long-s piano ad_ in the inference widget to the right.

## Usage

Use the code below to get started with the model.

```python
from transformers import pipeline, T5ForConditionalGeneration, AutoTokenizer

# Load the fine-tuned model; the tokenizer is the base byt5-small tokenizer
model = T5ForConditionalGeneration.from_pretrained('viklofg/swedish-ocr-correction')
tokenizer = AutoTokenizer.from_pretrained('google/byt5-small')
pipe = pipeline('text2text-generation', model=model, tokenizer=tokenizer)

# A sample with typical OCR errors (e.g. 'D' for 'n', '&' for 'å', 'rn' for 'm')
ocr = 'Den i HandelstidniDgens g&rdagsnnmmer omtalade hvalfisken, sorn fångats i Frölnndaviken'
output = pipe(ocr)
print(output)
```

### Length limit

The model accepts input sequences of at most 128 UTF-8 bytes; longer sequences are truncated to this limit. 128 UTF-8 bytes corresponds to slightly fewer than 128 characters of Swedish text, since most characters are encoded as one byte, but non-ASCII characters such as Å, Ä, and Ö are encoded as two (or more) bytes.
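
To correct a text longer than the limit, it has to be split into pieces of at most 128 UTF-8 bytes first. A minimal sketch of one way to do this (the `chunk_by_bytes` helper and its greedy whitespace-splitting strategy are illustrative, not part of the model's API, and it assumes no single word exceeds the limit):

```python
def chunk_by_bytes(text, max_bytes=128):
    """Greedily pack whitespace-separated words into chunks whose
    UTF-8 encoding is at most max_bytes bytes long."""
    chunks, current = [], ''
    for word in text.split():
        candidate = f'{current} {word}'.strip()
        if len(candidate.encode('utf-8')) <= max_bytes:
            current = candidate
        else:
            chunks.append(current)
            current = word
    if current:
        chunks.append(current)
    return chunks

# Byte length, not character length, is what counts:
print(len('a'.encode('utf-8')))  # 1
print(len('Å'.encode('utf-8')))  # 2
```

Each chunk can then be passed to the pipeline individually and the corrected pieces joined back together.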