PaddleOCR-VL-For-Manga
Model Description
PaddleOCR-VL-For-Manga is an OCR model enhanced for Japanese manga text recognition. It is fine-tuned from PaddleOCR-VL and achieves much higher accuracy on manga speech bubbles and stylized fonts.
This model was fine-tuned on a combination of the Manga109-s dataset and 1.5 million synthetic data samples. It showcases the potential of Supervised Fine-Tuning (SFT) to create highly accurate, domain-specific VLMs for OCR tasks from a powerful, general-purpose base like PaddleOCR-VL, which supports 109 languages.
This project serves as a practical guide for developers looking to build their own custom OCR solutions. The training code is available in the GitHub repository; a tutorial is coming soon.
Performance
The model achieves 70% full-sentence accuracy on a held-out test set of Manga109-s crops (a 10% split of the dataset). For comparison, the original PaddleOCR-VL achieves 27% full-sentence accuracy on the same test set.
Common errors involve discrepancies between visually similar characters that are often used interchangeably, such as:
- ！？ vs. !? (full-width vs. half-width punctuation)
- ＯＫ vs. OK (full-width vs. half-width letters)
- １２０５ vs. 1205 (full-width vs. half-width numbers)
- “人” (U+4EBA) vs. “⼈” (U+2F08) (standard CJK Unified Ideograph vs. CJK Radical)
The prevalence of these character types highlights a limitation of standard metrics like Character Error Rate (CER). These metrics may not fully capture the model's practical accuracy, as they penalize semantically equivalent variations that are common in stylized text.
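One practical mitigation for evaluation (an option, not something this card prescribes) is to normalize both prediction and reference with Unicode NFKC before scoring: NFKC folds full-width punctuation, letters, and digits into their half-width equivalents, and also decomposes Kangxi radicals such as U+2F08 into the corresponding unified ideographs. A minimal sketch:

```python
import unicodedata


def normalize_for_scoring(text: str) -> str:
    """Fold compatibility variants (full-width forms, Kangxi radicals)
    into their canonical characters before computing CER/accuracy."""
    return unicodedata.normalize("NFKC", text)


print(normalize_for_scoring("！？"))                 # → "!?"
print(normalize_for_scoring("ＯＫ"))                 # → "OK"
print(normalize_for_scoring("１２０５"))             # → "1205"
print(normalize_for_scoring("\u2f08") == "\u4eba")  # radical ⼈ → ideograph 人, True
```

Applying this normalization before scoring makes the metric ignore exactly the semantically equivalent variants listed above.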
Examples
How to Use
You can use this model with the transformers library, PaddleOCR, or any other library that supports PaddleOCR-VL to perform OCR on manga images. The model architecture and weight layout are identical to the base model.
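A minimal transformers sketch, assuming the fine-tuned checkpoint keeps the base PaddleOCR-VL interface (`trust_remote_code`, a chat-style processor). The repo id, image path, and prompt below are placeholders, not values taken from this card:

```python
# Placeholder Hub id; replace with the actual repository path.
MODEL_ID = "PaddleOCR-VL-For-Manga"


def build_messages(image_path: str, prompt: str = "OCR:") -> list[dict]:
    """Build a chat-style message list in the common transformers VLM format."""
    return [{
        "role": "user",
        "content": [
            {"type": "image", "image": image_path},
            {"type": "text", "text": prompt},
        ],
    }]


if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoProcessor

    processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)

    messages = build_messages("speech_bubble.png")
    inputs = processor.apply_chat_template(
        messages, add_generation_prompt=True, tokenize=True,
        return_dict=True, return_tensors="pt",
    )
    output_ids = model.generate(**inputs, max_new_tokens=128)
    print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

Because the architecture is unchanged, any pipeline already running the base PaddleOCR-VL should work by swapping in this checkpoint's id.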
If your application involves documents with structured layouts, you can pair your fine-tuned OCR model with PP-DocLayoutV2 for layout analysis. Note, however, that manga reading order and page layout differ substantially from those of standard documents.
Training Details
- Base Model: PaddleOCR-VL
- Dataset:
- Manga109-s: roughly 100,000 randomly sampled text-region crops (not full pages) were used for training (a 90% split); the remaining 10% of crops were held out for testing.
- Synthetic Data: 1.5 million generated samples.
- Training Frameworks:
- transformers and trl
- Alternatives for SFT:
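The transformers + trl setup above can be sketched as follows. This is an illustrative outline under assumed names, not the exact training recipe: the record schema, base-model id, file paths, and hyperparameters are all placeholders.

```python
def to_sft_record(image_path: str, transcription: str) -> dict:
    """Format one crop/transcription pair as a chat-style SFT record
    (assumed schema: user turn with image + prompt, assistant turn with text)."""
    return {
        "messages": [
            {"role": "user", "content": [
                {"type": "image", "image": image_path},
                {"type": "text", "text": "OCR:"},
            ]},
            {"role": "assistant", "content": [
                {"type": "text", "text": transcription},
            ]},
        ]
    }


if __name__ == "__main__":
    from datasets import Dataset
    from trl import SFTConfig, SFTTrainer

    # Hypothetical pairs; in practice these would come from Manga109-s
    # crops and the synthetic data generator.
    pairs = [("crop_0001.png", "こんにちは"), ("crop_0002.png", "やめろ！")]
    dataset = Dataset.from_list([to_sft_record(p, t) for p, t in pairs])

    trainer = SFTTrainer(
        model="PaddleOCR-VL",  # placeholder base-model id
        train_dataset=dataset,
        args=SFTConfig(output_dir="paddleocr-vl-manga-sft", num_train_epochs=1),
    )
    trainer.train()
```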
Acknowledgements
- Manga109-s dataset, which provided the manga text-region crops used for training and evaluation.
- PaddleOCR-VL, the base Visual Language Model on which this model is fine-tuned.
- manga-ocr, used in this project for data processing and synthetic data generation; it also inspired practical workflows and evaluation considerations for manga OCR.
License
This model is licensed under the Apache 2.0 license.