---
license: apache-2.0
tags:
- executorch
- object-detection
- vision
- YOLO
- anchor-free
- pytorch
datasets:
- coco
metrics:
- mAP
---

# YOLOX models for executorch

These YOLOX models were trained on COCO object detection (118k annotated images) at an input resolution of 640x640. YOLOX was introduced in the paper [YOLOX: Exceeding YOLO Series in 2021](https://arxiv.org/abs/2107.08430) by Zheng Ge et al. and first released in [this repository](https://github.com/Megvii-BaseDetection/YOLOX).

The models in this repo have been exported for use with [executorch](https://github.com/pytorch/executorch).

Here is an example of detections created with YOLOX nano and the executorch runtime:

![Detections](./demo.png)

The models are exported from the following standard models trained on COCO:

#### Standard Models

| Model | size | mAP<sup>val</sup><br>0.5:0.95 | mAP<sup>test</sup><br>0.5:0.95 | Speed V100<br>(ms) | Params<br>(M) | FLOPs<br>(G) | weights |
| :----- | :---: | :---: | :---: | :---: | :---: | :---: | :----: |
| [YOLOX-s](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/yolox_s.py) | 640 | 40.5 | 40.5 | 9.8 | 9.0 | 26.8 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_s.pth) |
| [YOLOX-m](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/yolox_m.py) | 640 | 46.9 | 47.2 | 12.3 | 25.3 | 73.8 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_m.pth) |
| [YOLOX-l](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/yolox_l.py) | 640 | 49.7 | 50.1 | 14.5 | 54.2 | 155.6 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_l.pth) |
| [YOLOX-x](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/yolox_x.py) | 640 | 51.1 | **51.5** | 17.3 | 99.1 | 281.9 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_x.pth) |
| [YOLOX-Darknet53](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/yolov3.py) | 640 | 47.7 | 48.0 | 11.1 | 63.7 | 185.3 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_darknet.pth) |

# How to use

The models have been exported using the code from [this PR](https://github.com/Megvii-BaseDetection/YOLOX/pull/1860), which also includes instructions on how to export your own model so it can be executed with the executorch runtime.

Example code on how to run inference:

```python
import cv2
import numpy as np
import torch

from executorch.runtime import Runtime

input_shape = (640, 640)  # (416, 416) for tiny and nano

# Load the image and resize it to the model's input resolution
origin_img = cv2.imread("path/to/your/image.png")
img = cv2.resize(origin_img, input_shape)

# Convert HWC uint8 to the NCHW float32 layout the model expects
img = img.transpose((2, 0, 1)).astype(np.float32)

# Load the exported program and run the forward method
runtime = Runtime.get()
method = runtime.load_program("path/to/model/yolox_s.pte").load_method("forward")

output = method.execute([torch.from_numpy(img).unsqueeze(0)])
output = [o.numpy() for o in output]

# Add postprocessing like NMS to transform the raw output to bounding boxes
```
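
The raw output is a dense grid of per-anchor-point predictions, not final boxes. The YOLOX repository ships helpers for this step (`demo_postprocess` and `multiclass_nms` under `yolox.utils`); the sketch below, adapted from those utilities, illustrates the decoding for the standard 640x640 export with strides 8/16/32. Treat it as illustrative rather than as this repo's exact postprocessing code.

```python
import numpy as np

def decode_outputs(outputs, img_size=(640, 640), strides=(8, 16, 32)):
    """Decode raw YOLOX predictions of shape (1, N, 85) into
    [cx, cy, w, h, objectness, 80 class scores] in input-image pixels."""
    grids, expanded_strides = [], []
    for stride in strides:
        hsize, wsize = img_size[0] // stride, img_size[1] // stride
        xv, yv = np.meshgrid(np.arange(wsize), np.arange(hsize))
        grids.append(np.stack((xv, yv), 2).reshape(1, -1, 2))
        expanded_strides.append(np.full((1, hsize * wsize, 1), stride))
    grids = np.concatenate(grids, axis=1)
    expanded_strides = np.concatenate(expanded_strides, axis=1)

    # Box centers are grid-relative offsets; widths/heights are log-scale
    outputs[..., :2] = (outputs[..., :2] + grids) * expanded_strides
    outputs[..., 2:4] = np.exp(outputs[..., 2:4]) * expanded_strides
    return outputs
```

After decoding, multiply objectness by the per-class scores, filter by a score threshold, convert centers/sizes to corner coordinates, run NMS (e.g. `multiclass_nms` from `yolox.utils`), and rescale the boxes by the resize ratio of the original image.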

# How to export and use your own YOLOX model

Install the YOLOX project from [here](https://github.com/Megvii-BaseDetection/YOLOX) and follow these instructions:

### Step1: Install executorch

Run the following command to install executorch:
```shell
pip install executorch
```
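
To verify the install, you can check that the Python runtime bindings import correctly (a minimal smoke test, assuming the pip package exposes the `executorch.runtime` module used in the examples above):

```python
# Quick smoke test: the import and Runtime.get() used elsewhere in this
# card should work after a successful install
from executorch.runtime import Runtime

runtime = Runtime.get()
print("executorch runtime loaded:", runtime)
```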

### Step2: Convert Your Model to Executorch

First, move to `<YOLOX_HOME>`:
```shell
cd <YOLOX_HOME>
```
Then, you can:

1. Convert a standard YOLOX model by `-n`:
```shell
python3 tools/export_executorch.py --output-name yolox_s.pte -n yolox-s -c yolox_s.pth
```
Notes:
* `-n`: specify a model name. The model name must be one of [yolox-s, yolox-m, yolox-l, yolox-x, yolox-nano, yolox-tiny, yolov3]
* `-c`: the checkpoint of the model you have trained
* To customize the input shape of the exported model, modify the following code in `tools/export_executorch.py`:

```python
dummy_input = torch.randn(1, 3, exp.test_size[0], exp.test_size[1])
```
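
For example, to export at the 416x416 resolution used by the tiny and nano variants, the line would become:

```python
dummy_input = torch.randn(1, 3, 416, 416)  # batch, channels, height, width
```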

2. Convert a standard YOLOX model by `-f`. When using `-f`, the above command is equivalent to:

```shell
python3 tools/export_executorch.py --output-name yolox_s.pte -f exps/default/yolox_s.py -c yolox_s.pth
```

3. To convert your customized model, please use `-f` with your own experiment file (a sketch of such a file follows below):

```shell
python3 tools/export_executorch.py --output-name your_yolox.pte -f exps/your_dir/your_yolox.py -c your_yolox.pth
```
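
For reference, here is a minimal sketch of what such an experiment file can look like, modeled on the default exp files shipped with YOLOX; the depth/width values shown are those of yolox-s, so adjust them to your model:

```python
# exps/your_dir/your_yolox.py -- minimal custom experiment file (sketch,
# based on exps/default/yolox_s.py from the YOLOX repository)
import os

from yolox.exp import Exp as MyExp


class Exp(MyExp):
    def __init__(self):
        super().__init__()
        self.depth = 0.33  # depth multiplier (yolox-s value; change for your model)
        self.width = 0.50  # width multiplier (yolox-s value; change for your model)
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
```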

### Step3: Executorch Runtime Demo

Step1.
```shell
cd <YOLOX_HOME>/demo/executorch
```

Step2.
```shell
python3 executorch_inference.py -m <EXECUTORCH_MODEL_PATH> -i <IMAGE_PATH> -o <OUTPUT_DIR> -s 0.3 --input_shape 640,640
```
Notes:
* `-m`: your converted pte model
* `-i`: the input image
* `-s`: score threshold for visualization
* `--input_shape`: should be consistent with the shape you used for the executorch conversion
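
For example, assuming the `yolox_s.pte` exported above and an image `dog.jpg` in the current directory (both filenames are placeholders):

```shell
python3 executorch_inference.py -m yolox_s.pte -i dog.jpg -o outputs/ -s 0.3 --input_shape 640,640
```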

## Cite YOLOX

If you use YOLOX in your research, please cite the original work using the following BibTeX entry:

```latex
@article{yolox2021,
  title={YOLOX: Exceeding YOLO Series in 2021},
  author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
  journal={arXiv preprint arXiv:2107.08430},
  year={2021}
}
```