Upload folder using huggingface_hub

- .gitattributes +1 -0
- README.md +92 -0
- SV_kp.engine +3 -0
- config.yml +32 -0
- miner.py +359 -0
- objdetect.pt +3 -0
- pitch.py +679 -0
- player.pt +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+SV_kp.engine filter=lfs diff=lfs merge=lfs -text

README.md ADDED
@@ -0,0 +1,92 @@

# 🚀 Example Chute for Turbovision 🪂

This repository demonstrates how to deploy a **Chute** via the **Turbovision CLI**, hosted on **Hugging Face Hub**.
It serves as a minimal example showcasing the required structure and workflow for integrating machine learning models, preprocessing, and orchestration into a reproducible Chute environment.

## Repository Structure

The following two files **must be present** (in their current locations) for a successful deployment — their content can be modified as needed:

| File | Purpose |
|------|---------|
| `miner.py` | Defines the ML model type(s), orchestration, and all pre/postprocessing logic. |
| `config.yml` | Specifies machine configuration (e.g., GPU type, memory, environment variables). |

Other files — e.g., model weights, utility scripts, or dependencies — are **optional** and can be included as needed for your model. Note: any required assets must be defined or contained **within this repo**, which is fully open source, since all network-related operations (downloading challenge data, weights, etc.) are disabled **inside the Chute**.
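
For orientation, here is a minimal sketch of the `miner.py` interface, mirroring the class and method signatures used by the full implementation later in this commit (the bodies are placeholders, not the actual models):

```python
from pathlib import Path
from typing import List, Tuple

from numpy import ndarray
from pydantic import BaseModel


class BoundingBox(BaseModel):
    x1: int
    y1: int
    x2: int
    y2: int
    cls_id: int
    conf: float


class TVFrameResult(BaseModel):
    frame_id: int
    boxes: List[BoundingBox]
    keypoints: List[Tuple[int, int]]


class Miner:
    def __init__(self, path_hf_repo: Path) -> None:
        # Load model weights shipped inside this repo; no network access
        # is available inside the Chute.
        pass

    def predict_batch(self, batch_images: List[ndarray], offset: int, n_keypoints: int) -> List[TVFrameResult]:
        # Return one result per frame; empty boxes and (0, 0) keypoints
        # are safe placeholders.
        return [
            TVFrameResult(
                frame_id=offset + i,
                boxes=[],
                keypoints=[(0, 0)] * n_keypoints,
            )
            for i in range(len(batch_images))
        ]
```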

## Overview

Below is a high-level diagram showing the interaction between Hugging Face, Chutes, and Turbovision:

![Overview of the interaction between Hugging Face, Chutes, and Turbovision](miner_overview.png)

## Local Testing

After editing `config.yml` and `miner.py` and saving them into your Hugging Face repo, you will want to test that everything works locally.

1. Copy the file `scorevision/chute_template/turbovision_chute.py.j2` as a Python file called `my_chute.py` and fill in the missing variables:
```python
HF_REPO_NAME = "{{ huggingface_repository_name }}"
HF_REPO_REVISION = "{{ huggingface_repository_revision }}"
CHUTES_USERNAME = "{{ chute_username }}"
CHUTE_NAME = "{{ chute_name }}"
```

2. Run the following command to build the chute locally (caution: there are known issues with the Docker location when running this on a Mac):
```bash
chutes build my_chute:chute --local --public
```

3. Take the name of the Docker image just built (i.e. `CHUTE_NAME`) and enter a shell inside it:
```bash
docker run -p 8000:8000 -e CHUTES_EXECUTION_CONTEXT=REMOTE -it <image-name> /bin/bash
```

4. Run the file from within the container:
```bash
chutes run my_chute:chute --dev --debug
```

5. In another terminal, test the local endpoints to ensure there are no bugs:
```bash
curl -X POST http://localhost:8000/health -d '{}'
curl -X POST http://localhost:8000/predict -d '{"url": "https://scoredata.me/2025_03_14/35ae7a/h1_0f2ca0.mp4","meta": {}}'
```
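
Equivalently, you can exercise the same endpoints from Python. Here is a minimal sketch using the third-party `requests` library, assuming the service accepts JSON bodies (the payloads mirror the curl calls above):

```python
import requests

BASE = "http://localhost:8000"

# Health check with an empty JSON body.
print(requests.post(f"{BASE}/health", json={}).status_code)

# Prediction with the same payload as the curl example above.
payload = {"url": "https://scoredata.me/2025_03_14/35ae7a/h1_0f2ca0.mp4", "meta": {}}
response = requests.post(f"{BASE}/predict", json=payload)
print(response.status_code)
```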

## Live Testing

1. If you have a chute with the same name (i.e. from a previous deployment), ensure you delete it first (or you will get an error when trying to build):
```bash
chutes chutes list
```
Take note of the chute id that you wish to delete (if any):
```bash
chutes chutes delete <chute-id>
```

You should also delete its associated image:
```bash
chutes images list
```
Take note of the chute image id:
```bash
chutes images delete <chute-image-id>
```

2. Use Turbovision's CLI to build, deploy, and commit on-chain (note: you can skip the on-chain commit using `--no-commit`; you can also specify a past Hugging Face revision to point to using `--revision` and/or the local files you want to upload to your Hugging Face repo using `--model-path`):
```bash
sv -vv push
```

3. When completed, warm up the chute (if it's cold 🧊). You can confirm its status using `chutes chutes list`, or `chutes chutes get <chute-id>` if you already know its id. Note: warming up can sometimes take a while, but if the chute runs without errors (it should if you've tested locally first) and there are sufficient nodes (i.e. machines) available matching the `config.yml` you specified, the chute should become hot 🔥!
```bash
chutes warmup <chute-id>
```

4. Test the chute's endpoints:
```bash
curl -X POST https://<YOUR-CHUTE-SLUG>.chutes.ai/health -d '{}' -H "Authorization: Bearer $CHUTES_API_KEY"
curl -X POST https://<YOUR-CHUTE-SLUG>.chutes.ai/predict -d '{"url": "https://scoredata.me/2025_03_14/35ae7a/h1_0f2ca0.mp4","meta": {}}' -H "Authorization: Bearer $CHUTES_API_KEY"
```

5. Test how your chute would score on a validator (this also applies any validation/integrity checks, which may fail if you did not use the Turbovision CLI above to deploy the chute):
```bash
sv -vv run-once
```

SV_kp.engine ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f99452eb79e064189e2758abd20a78845a5b639fc8b9c4bc650519c83e13e8db
size 368289641

config.yml ADDED
@@ -0,0 +1,32 @@
Image:
  from_base: parachutes/python:3.12
  run_command:
    - pip install --upgrade setuptools wheel
    - pip install "ultralytics==8.3.222" "opencv-python-headless" "numpy" "pydantic"
    - pip install "tensorflow" "torch==2.7.1" "torchvision==0.22.1" "torch-tensorrt==2.7"
  set_workdir: /app
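
# NodeSelector constrains the hardware this chute can be scheduled on:
# one GPU with at least 16 GB of VRAM, limited to the include list below.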
NodeSelector:
  gpu_count: 1
  min_vram_gb_per_gpu: 16
  include:
    - a100
    - a100_40gb
    - "3090"
    - a40
    - a6000
    - h100
    - l40s
  exclude:
    - "5090"
    - b200
    - h200
    - h20
    - mi300x

Chute:
  timeout_seconds: 900
  concurrency: 4
  max_instances: 5
  scaling_threshold: 0.5

miner.py ADDED
@@ -0,0 +1,359 @@
from pathlib import Path
from typing import List, Tuple, Dict
import sys
import os

from numpy import ndarray
import numpy as np
from pydantic import BaseModel
import cv2

sys.path.append(os.path.dirname(os.path.abspath(__file__)))

os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
os.environ["OMP_NUM_THREADS"] = "16"
os.environ["TF_NUM_INTRAOP_THREADS"] = "16"
os.environ["TF_NUM_INTEROP_THREADS"] = "2"
os.environ["CUDA_LAUNCH_BLOCKING"] = "0"
os.environ["ORT_LOGGING_LEVEL"] = "3"
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"

import logging
import tensorflow as tf
from tensorflow.keras import mixed_precision
import torch._dynamo
import torch
import torch_tensorrt
import gc
from ultralytics import YOLO
from pitch import process_batch_input

logging.getLogger("tensorflow").setLevel(logging.ERROR)
tf.config.threading.set_intra_op_parallelism_threads(16)
tf.config.threading.set_inter_op_parallelism_threads(2)
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
tf.get_logger().setLevel("ERROR")
tf.autograph.set_verbosity(0)
mixed_precision.set_global_policy("mixed_float16")
tf.config.optimizer.set_jit(True)
torch._dynamo.config.suppress_errors = True


class BoundingBox(BaseModel):
    x1: int
    y1: int
    x2: int
    y2: int
    cls_id: int
    conf: float


class TVFrameResult(BaseModel):
    frame_id: int
    boxes: List[BoundingBox]
    keypoints: List[Tuple[int, int]]


class Miner:
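    # Post-processing heuristics (see the suppression and team-assignment
    # methods below): drop boxes almost fully contained in another box,
    # drop small boxes mostly contained in a much larger one, and split
    # players into two teams by shirt hue/saturation.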
    QUASI_TOTAL_IOA: float = 0.90
    SMALL_CONTAINED_IOA: float = 0.85
    SMALL_RATIO_MAX: float = 0.50
    SINGLE_PLAYER_HUE_PIVOT: float = 90.0

    def __init__(self, path_hf_repo: Path) -> None:
        self.bbox_model = YOLO(path_hf_repo / "player.pt")
        print("✅ BBox Model (player.pt) Loaded")
        device = "cuda" if torch.cuda.is_available() else "cpu"
        model_kp_path = path_hf_repo / "SV_kp.engine"
        model_kp = torch_tensorrt.load(model_kp_path)

        @torch.inference_mode()
        def run_inference(model, input_tensor: torch.Tensor):
            input_tensor = input_tensor.to(device).to(memory_format=torch.channels_last)
            output = model.module().forward(input_tensor)
            return output

        run_inference(model_kp, torch.randn(8, 3, 540, 960, device=device, dtype=torch.float32))
        self.keypoints_model = model_kp
        self.kp_threshold = 0.1
        self.pitch_batch_size = 8
        print("✅ Keypoints Model Loaded")

    def __repr__(self) -> str:
        return (
            f"BBox Model: {type(self.bbox_model).__name__}\n"
            f"Keypoints Model: {type(self.keypoints_model).__name__}"
        )

    @staticmethod
    def _clip_box_to_image(x1: int, y1: int, x2: int, y2: int, w: int, h: int) -> Tuple[int, int, int, int]:
        x1 = max(0, min(int(x1), w - 1))
        y1 = max(0, min(int(y1), h - 1))
        x2 = max(0, min(int(x2), w - 1))
        y2 = max(0, min(int(y2), h - 1))
        if x2 <= x1:
            x2 = min(w - 1, x1 + 1)
        if y2 <= y1:
            y2 = min(h - 1, y1 + 1)
        return x1, y1, x2, y2

    @staticmethod
    def _area(bb: BoundingBox) -> int:
        return max(0, bb.x2 - bb.x1) * max(0, bb.y2 - bb.y1)

    @staticmethod
    def _intersect_area(a: BoundingBox, b: BoundingBox) -> int:
        ix1 = max(a.x1, b.x1)
        iy1 = max(a.y1, b.y1)
        ix2 = min(a.x2, b.x2)
        iy2 = min(a.y2, b.y2)
        if ix2 <= ix1 or iy2 <= iy1:
            return 0
        return (ix2 - ix1) * (iy2 - iy1)

    @staticmethod
    def _center(bb: BoundingBox) -> Tuple[float, float]:
        return (0.5 * (bb.x1 + bb.x2), 0.5 * (bb.y1 + bb.y2))

    @staticmethod
    def _mean_hs(img_bgr: np.ndarray) -> Tuple[float, float]:
        hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
        return float(np.mean(hsv[:, :, 0])), float(np.mean(hsv[:, :, 1]))

    def _hs_feature_from_roi(self, img_bgr: np.ndarray, box: BoundingBox) -> np.ndarray:
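        # Mean hue/saturation of the ROI, ignoring green pitch pixels when
        # enough non-green pixels are available (used as a team-colour feature).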
        H, W = img_bgr.shape[:2]
        x1, y1, x2, y2 = self._clip_box_to_image(box.x1, box.y1, box.x2, box.y2, W, H)
        roi = img_bgr[y1:y2, x1:x2]
        if roi.size == 0:
            return np.array([0.0, 0.0], dtype=np.float32)
        hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
        lower_green = np.array([35, 60, 60], dtype=np.uint8)
        upper_green = np.array([85, 255, 255], dtype=np.uint8)
        green_mask = cv2.inRange(hsv, lower_green, upper_green)
        non_green_mask = cv2.bitwise_not(green_mask)
        num_non_green = int(np.count_nonzero(non_green_mask))
        total = hsv.shape[0] * hsv.shape[1]
        if num_non_green > max(50, total // 20):
            h_vals = hsv[:, :, 0][non_green_mask > 0]
            s_vals = hsv[:, :, 1][non_green_mask > 0]
            h_mean = float(np.mean(h_vals)) if h_vals.size else 0.0
            s_mean = float(np.mean(s_vals)) if s_vals.size else 0.0
        else:
            h_mean, s_mean = self._mean_hs(roi)
        return np.array([h_mean, s_mean], dtype=np.float32)

    def _ioa(self, a: BoundingBox, b: BoundingBox) -> float:
        inter = self._intersect_area(a, b)
        aa = self._area(a)
        if aa <= 0:
            return 0.0
        return inter / aa

    def suppress_quasi_total_containment(self, boxes: List[BoundingBox]) -> List[BoundingBox]:
        if len(boxes) <= 1:
            return boxes
        keep = [True] * len(boxes)
        for i in range(len(boxes)):
            if not keep[i]:
                continue
            for j in range(len(boxes)):
                if i == j or not keep[j]:
                    continue
                ioa_i_in_j = self._ioa(boxes[i], boxes[j])
                if ioa_i_in_j >= self.QUASI_TOTAL_IOA:
                    keep[i] = False
                    break
        return [bb for bb, k in zip(boxes, keep) if k]

    def suppress_small_contained(self, boxes: List[BoundingBox]) -> List[BoundingBox]:
        if len(boxes) <= 1:
            return boxes
        keep = [True] * len(boxes)
        areas = [self._area(bb) for bb in boxes]
        for i in range(len(boxes)):
            if not keep[i]:
                continue
            for j in range(len(boxes)):
                if i == j or not keep[j]:
                    continue
                ai, aj = areas[i], areas[j]
                if ai == 0 or aj == 0:
                    continue
                if ai <= aj:
                    ratio = ai / aj
                    if ratio <= self.SMALL_RATIO_MAX:
                        ioa_i_in_j = self._ioa(boxes[i], boxes[j])
                        if ioa_i_in_j >= self.SMALL_CONTAINED_IOA:
                            keep[i] = False
                            break
                else:
                    ratio = aj / ai
                    if ratio <= self.SMALL_RATIO_MAX:
                        ioa_j_in_i = self._ioa(boxes[j], boxes[i])
                        if ioa_j_in_i >= self.SMALL_CONTAINED_IOA:
                            keep[j] = False
        return [bb for bb, k in zip(boxes, keep) if k]

    def _assign_players_two_clusters(self, features: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
        _, labels, centers = cv2.kmeans(
            np.float32(features),
            K=2,
            bestLabels=None,
            criteria=criteria,
            attempts=5,
            flags=cv2.KMEANS_PP_CENTERS,
        )
        return labels.reshape(-1), centers

    def _reclass_extra_goalkeepers(self, img_bgr: np.ndarray, boxes: List[BoundingBox], cluster_centers: np.ndarray | None) -> None:
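        # Keep at most one goalkeeper box (the highest-confidence one);
        # reassign any others to a team class (6 or 7) by colour proximity.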
        gk_idxs = [i for i, bb in enumerate(boxes) if int(bb.cls_id) == 1]
        if len(gk_idxs) <= 1:
            return
        gk_idxs_sorted = sorted(gk_idxs, key=lambda i: boxes[i].conf, reverse=True)
        keep_gk_idx = gk_idxs_sorted[0]
        to_reclass = gk_idxs_sorted[1:]
        for gki in to_reclass:
            hs_gk = self._hs_feature_from_roi(img_bgr, boxes[gki])
            if cluster_centers is not None:
                d0 = float(np.linalg.norm(hs_gk - cluster_centers[0]))
                d1 = float(np.linalg.norm(hs_gk - cluster_centers[1]))
                assign_cls = 6 if d0 <= d1 else 7
            else:
                assign_cls = 6 if float(hs_gk[0]) < self.SINGLE_PLAYER_HUE_PIVOT else 7
            boxes[gki].cls_id = int(assign_cls)

    def predict_batch(self, batch_images: List[ndarray], offset: int, n_keypoints: int) -> List[TVFrameResult]:
        bboxes: Dict[int, List[BoundingBox]] = {}
        bbox_model_results = self.bbox_model.predict(batch_images)
        if bbox_model_results is not None:
            for frame_idx_in_batch, detection in enumerate(bbox_model_results):
                if not hasattr(detection, "boxes") or detection.boxes is None:
                    continue
                boxes: List[BoundingBox] = []
                for box in detection.boxes.data:
                    x1, y1, x2, y2, conf, cls_id = box.tolist()
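                    # Swap class ids 2 and 3 (presumably to match the
                    # class ordering expected downstream).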
                    if cls_id == 3:
                        cls_id = 2
                    elif cls_id == 2:
                        cls_id = 3
                    boxes.append(
                        BoundingBox(
                            x1=int(x1),
                            y1=int(y1),
                            x2=int(x2),
                            y2=int(y2),
                            cls_id=int(cls_id),
                            conf=float(conf),
                        )
                    )
                footballs = [bb for bb in boxes if int(bb.cls_id) == 0]
                if len(footballs) > 1:
                    best_ball = max(footballs, key=lambda b: b.conf)
                    boxes = [bb for bb in boxes if int(bb.cls_id) != 0]
                    boxes.append(best_ball)
                boxes = self.suppress_quasi_total_containment(boxes)
                boxes = self.suppress_small_contained(boxes)
                img_bgr = batch_images[frame_idx_in_batch]
                player_indices: List[int] = []
                player_feats: List[np.ndarray] = []
                for i, bb in enumerate(boxes):
                    if int(bb.cls_id) == 2:
                        hs = self._hs_feature_from_roi(img_bgr, bb)
                        player_indices.append(i)
                        player_feats.append(hs)
                cluster_centers = None
                n_players = len(player_feats)
                if n_players >= 2:
                    feats = np.vstack(player_feats)
                    labels, centers = self._assign_players_two_clusters(feats)
                    order = np.argsort(centers[:, 0])
                    centers = centers[order]
                    remap = {old_idx: new_idx for new_idx, old_idx in enumerate(order)}
                    labels = np.vectorize(remap.get)(labels)
                    cluster_centers = centers
                    for idx_in_list, lbl in zip(player_indices, labels):
                        boxes[idx_in_list].cls_id = 6 if int(lbl) == 0 else 7
                elif n_players == 1:
                    hue, _ = player_feats[0]
                    boxes[player_indices[0]].cls_id = 6 if float(hue) < self.SINGLE_PLAYER_HUE_PIVOT else 7
                self._reclass_extra_goalkeepers(img_bgr, boxes, cluster_centers)
                bboxes[offset + frame_idx_in_batch] = boxes

        pitch_batch_size = min(self.pitch_batch_size, len(batch_images))
        keypoints: Dict[int, List[Tuple[int, int]]] = {}
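        # Keypoint inference with out-of-memory backoff: on a CUDA OOM
        # RuntimeError, halve the batch size and retry; give up once it
        # reaches 1 or a different error occurs.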
        while True:
            try:
                gc.collect()
                if torch.cuda.is_available():
                    tf.keras.backend.clear_session()
                    torch.cuda.empty_cache()
                    torch.cuda.synchronize()
                device_str = "cuda" if torch.cuda.is_available() else "cpu"
                keypoints_result = process_batch_input(
                    batch_images,
                    self.keypoints_model,
                    self.kp_threshold,
                    device_str,
                    batch_size=pitch_batch_size,
                )
                if keypoints_result is not None and len(keypoints_result) > 0:
                    for frame_number_in_batch, kp_dict in enumerate(keypoints_result):
                        if frame_number_in_batch >= len(batch_images):
                            break
                        frame_keypoints: List[Tuple[int, int]] = []
                        try:
                            height, width = batch_images[frame_number_in_batch].shape[:2]
                            if kp_dict is not None and isinstance(kp_dict, dict):
                                for idx in range(32):
                                    x, y = 0, 0
                                    kp_idx = idx + 1
                                    if kp_idx in kp_dict:
                                        try:
                                            kp_data = kp_dict[kp_idx]
                                            if isinstance(kp_data, dict) and "x" in kp_data and "y" in kp_data:
                                                x = int(kp_data["x"] * width)
                                                y = int(kp_data["y"] * height)
                                        except (KeyError, TypeError, ValueError):
                                            pass
                                    frame_keypoints.append((x, y))
                        except (IndexError, ValueError, AttributeError):
                            frame_keypoints = [(0, 0)] * 32
                        if len(frame_keypoints) < n_keypoints:
                            frame_keypoints.extend([(0, 0)] * (n_keypoints - len(frame_keypoints)))
                        else:
                            frame_keypoints = frame_keypoints[:n_keypoints]
                        keypoints[offset + frame_number_in_batch] = frame_keypoints
                print("✅ Keypoints predicted")
                break
            except RuntimeError as e:
                print(f"Keypoints batch size before backoff: {self.pitch_batch_size}")
                if "out of memory" in str(e):
                    if self.pitch_batch_size == 1:
                        break
                    self.pitch_batch_size = self.pitch_batch_size // 2 if self.pitch_batch_size > 1 else 1
                    pitch_batch_size = min(self.pitch_batch_size, len(batch_images))
                else:
                    break
            except Exception as e:
                print(f"❌ Error during keypoints prediction: {e}")
                break

        results: List[TVFrameResult] = []
        for frame_number in range(offset, offset + len(batch_images)):
            frame_boxes = bboxes.get(frame_number, [])
            frame_keypoints = keypoints.get(frame_number, [(0, 0) for _ in range(n_keypoints)])
            result = TVFrameResult(
                frame_id=frame_number,
                boxes=frame_boxes,
                keypoints=frame_keypoints,
            )
            results.append(result)

        gc.collect()
        if torch.cuda.is_available():
            tf.keras.backend.clear_session()
            torch.cuda.empty_cache()
            torch.cuda.synchronize()

        return results

objdetect.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8bbacfcb38e38b1b8816788e9e6e845160533719a0b87b693d58b932380d0d28
size 152961687

pitch.py ADDED
@@ -0,0 +1,679 @@
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import sys
import time
from typing import List, Optional, Tuple

import cv2
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as T
import torchvision.transforms.functional as f
from pydantic import BaseModel

import logging
logger = logging.getLogger(__name__)


class BoundingBox(BaseModel):
    x1: int
    y1: int
    x2: int
    y2: int
    cls_id: int
    conf: float


class TVFrameResult(BaseModel):
    frame_id: int
    boxes: list[BoundingBox]
    keypoints: list[tuple[int, int]]


BatchNorm2d = nn.BatchNorm2d
BN_MOMENTUM = 0.1


def conv3x3(in_planes, out_planes, stride=1):
    """3x3 convolution with padding"""
    return nn.Conv2d(in_planes, out_planes, kernel_size=3,
                     stride=stride, padding=1, bias=False)


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = BatchNorm2d(planes, momentum=BN_MOMENTUM)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = BatchNorm2d(planes, momentum=BN_MOMENTUM)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out


class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = BatchNorm2d(planes, momentum=BN_MOMENTUM)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.bn2 = BatchNorm2d(planes, momentum=BN_MOMENTUM)
        self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1,
                               bias=False)
        self.bn3 = BatchNorm2d(planes * self.expansion,
                               momentum=BN_MOMENTUM)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out


class HighResolutionModule(nn.Module):
    def __init__(self, num_branches, blocks, num_blocks, num_inchannels,
                 num_channels, fuse_method, multi_scale_output=True):
        super(HighResolutionModule, self).__init__()
        self._check_branches(
            num_branches, blocks, num_blocks, num_inchannels, num_channels)

        self.num_inchannels = num_inchannels
        self.fuse_method = fuse_method
        self.num_branches = num_branches

        self.multi_scale_output = multi_scale_output

        self.branches = self._make_branches(
            num_branches, blocks, num_blocks, num_channels)
        self.fuse_layers = self._make_fuse_layers()
        self.relu = nn.ReLU(inplace=True)

    def _check_branches(self, num_branches, blocks, num_blocks,
                        num_inchannels, num_channels):
        if num_branches != len(num_blocks):
            error_msg = 'NUM_BRANCHES({}) <> NUM_BLOCKS({})'.format(
                num_branches, len(num_blocks))
            logger.error(error_msg)
            raise ValueError(error_msg)

        if num_branches != len(num_channels):
            error_msg = 'NUM_BRANCHES({}) <> NUM_CHANNELS({})'.format(
                num_branches, len(num_channels))
            logger.error(error_msg)
            raise ValueError(error_msg)

        if num_branches != len(num_inchannels):
            error_msg = 'NUM_BRANCHES({}) <> NUM_INCHANNELS({})'.format(
                num_branches, len(num_inchannels))
            logger.error(error_msg)
            raise ValueError(error_msg)

    def _make_one_branch(self, branch_index, block, num_blocks, num_channels,
                         stride=1):
        downsample = None
        if stride != 1 or \
                self.num_inchannels[branch_index] != num_channels[branch_index] * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.num_inchannels[branch_index],
                          num_channels[branch_index] * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                BatchNorm2d(num_channels[branch_index] * block.expansion,
                            momentum=BN_MOMENTUM),
            )

        layers = []
        layers.append(block(self.num_inchannels[branch_index],
                            num_channels[branch_index], stride, downsample))
        self.num_inchannels[branch_index] = \
            num_channels[branch_index] * block.expansion
        for i in range(1, num_blocks[branch_index]):
            layers.append(block(self.num_inchannels[branch_index],
                                num_channels[branch_index]))

        return nn.Sequential(*layers)

    def _make_branches(self, num_branches, block, num_blocks, num_channels):
        branches = []

        for i in range(num_branches):
            branches.append(
                self._make_one_branch(i, block, num_blocks, num_channels))

        return nn.ModuleList(branches)

    def _make_fuse_layers(self):
        if self.num_branches == 1:
            return None

        num_branches = self.num_branches
        num_inchannels = self.num_inchannels
        fuse_layers = []
        for i in range(num_branches if self.multi_scale_output else 1):
            fuse_layer = []
            for j in range(num_branches):
                if j > i:
                    fuse_layer.append(nn.Sequential(
                        nn.Conv2d(num_inchannels[j],
                                  num_inchannels[i],
                                  1,
                                  1,
                                  0,
                                  bias=False),
                        BatchNorm2d(num_inchannels[i], momentum=BN_MOMENTUM)))
                    # nn.Upsample(scale_factor=2**(j-i), mode='nearest')))
                elif j == i:
                    fuse_layer.append(None)
                else:
                    conv3x3s = []
                    for k in range(i - j):
                        if k == i - j - 1:
                            num_outchannels_conv3x3 = num_inchannels[i]
                            conv3x3s.append(nn.Sequential(
                                nn.Conv2d(num_inchannels[j],
                                          num_outchannels_conv3x3,
                                          3, 2, 1, bias=False),
                                BatchNorm2d(num_outchannels_conv3x3, momentum=BN_MOMENTUM)))
                        else:
                            num_outchannels_conv3x3 = num_inchannels[j]
                            conv3x3s.append(nn.Sequential(
                                nn.Conv2d(num_inchannels[j],
                                          num_outchannels_conv3x3,
                                          3, 2, 1, bias=False),
                                BatchNorm2d(num_outchannels_conv3x3,
                                            momentum=BN_MOMENTUM),
                                nn.ReLU(inplace=True)))
                    fuse_layer.append(nn.Sequential(*conv3x3s))
            fuse_layers.append(nn.ModuleList(fuse_layer))

        return nn.ModuleList(fuse_layers)

    def get_num_inchannels(self):
        return self.num_inchannels

    def forward(self, x):
        if self.num_branches == 1:
            return [self.branches[0](x[0])]

        for i in range(self.num_branches):
            x[i] = self.branches[i](x[i])

        x_fuse = []
        for i in range(len(self.fuse_layers)):
            y = x[0] if i == 0 else self.fuse_layers[i][0](x[0])
            for j in range(1, self.num_branches):
                if i == j:
                    y = y + x[j]
                elif j > i:
                    y = y + F.interpolate(
                        self.fuse_layers[i][j](x[j]),
                        size=[x[i].shape[2], x[i].shape[3]],
                        mode='bilinear')
                else:
                    y = y + self.fuse_layers[i][j](x[j])
            x_fuse.append(self.relu(y))

        return x_fuse


blocks_dict = {
    'BASIC': BasicBlock,
    'BOTTLENECK': Bottleneck
}


class HighResolutionNet(nn.Module):

    def __init__(self, config, **kwargs):
        self.inplanes = 64
        extra = config['MODEL']['EXTRA']
        super(HighResolutionNet, self).__init__()

        # stem net
        self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=3, stride=2, padding=1,
                               bias=False)
        self.bn1 = BatchNorm2d(self.inplanes, momentum=BN_MOMENTUM)
        self.conv2 = nn.Conv2d(self.inplanes, self.inplanes, kernel_size=3, stride=2, padding=1,
                               bias=False)
        self.bn2 = BatchNorm2d(self.inplanes, momentum=BN_MOMENTUM)
        self.relu = nn.ReLU(inplace=True)
        self.sf = nn.Softmax(dim=1)
        self.layer1 = self._make_layer(Bottleneck, 64, 64, 4)

        self.stage2_cfg = extra['STAGE2']
        num_channels = self.stage2_cfg['NUM_CHANNELS']
        block = blocks_dict[self.stage2_cfg['BLOCK']]
        num_channels = [
            num_channels[i] * block.expansion for i in range(len(num_channels))]
        self.transition1 = self._make_transition_layer(
            [256], num_channels)
        self.stage2, pre_stage_channels = self._make_stage(
            self.stage2_cfg, num_channels)

        self.stage3_cfg = extra['STAGE3']
        num_channels = self.stage3_cfg['NUM_CHANNELS']
        block = blocks_dict[self.stage3_cfg['BLOCK']]
        num_channels = [
            num_channels[i] * block.expansion for i in range(len(num_channels))]
        self.transition2 = self._make_transition_layer(
            pre_stage_channels, num_channels)
        self.stage3, pre_stage_channels = self._make_stage(
            self.stage3_cfg, num_channels)

        self.stage4_cfg = extra['STAGE4']
        num_channels = self.stage4_cfg['NUM_CHANNELS']
        block = blocks_dict[self.stage4_cfg['BLOCK']]
        num_channels = [
            num_channels[i] * block.expansion for i in range(len(num_channels))]
        self.transition3 = self._make_transition_layer(
            pre_stage_channels, num_channels)
        self.stage4, pre_stage_channels = self._make_stage(
            self.stage4_cfg, num_channels, multi_scale_output=True)

        self.upsample = nn.Upsample(scale_factor=2, mode='nearest')
        final_inp_channels = sum(pre_stage_channels) + self.inplanes

        self.head = nn.Sequential(nn.Sequential(
            nn.Conv2d(
                in_channels=final_inp_channels,
                out_channels=final_inp_channels,
                kernel_size=1),
            BatchNorm2d(final_inp_channels, momentum=BN_MOMENTUM),
            nn.ReLU(inplace=True),
            nn.Conv2d(
                in_channels=final_inp_channels,
                out_channels=config['MODEL']['NUM_JOINTS'],
                kernel_size=extra['FINAL_CONV_KERNEL']),
            nn.Softmax(dim=1)))

    def _make_head(self, x, x_skip):
        x = self.upsample(x)
        x = torch.cat([x, x_skip], dim=1)
        x = self.head(x)

        return x

    def _make_transition_layer(
            self, num_channels_pre_layer, num_channels_cur_layer):
        num_branches_cur = len(num_channels_cur_layer)
        num_branches_pre = len(num_channels_pre_layer)

        transition_layers = []
        for i in range(num_branches_cur):
            if i < num_branches_pre:
                if num_channels_cur_layer[i] != num_channels_pre_layer[i]:
                    transition_layers.append(nn.Sequential(
                        nn.Conv2d(num_channels_pre_layer[i],
                                  num_channels_cur_layer[i],
                                  3,
                                  1,
                                  1,
                                  bias=False),
                        BatchNorm2d(
                            num_channels_cur_layer[i], momentum=BN_MOMENTUM),
                        nn.ReLU(inplace=True)))
                else:
                    transition_layers.append(None)
            else:
                conv3x3s = []
                for j in range(i + 1 - num_branches_pre):
                    inchannels = num_channels_pre_layer[-1]
                    outchannels = num_channels_cur_layer[i] \
                        if j == i - num_branches_pre else inchannels
                    conv3x3s.append(nn.Sequential(
                        nn.Conv2d(
                            inchannels, outchannels, 3, 2, 1, bias=False),
                        BatchNorm2d(outchannels, momentum=BN_MOMENTUM),
                        nn.ReLU(inplace=True)))
                transition_layers.append(nn.Sequential(*conv3x3s))

        return nn.ModuleList(transition_layers)

    def _make_layer(self, block, inplanes, planes, blocks, stride=1):
        downsample = None
        if stride != 1 or inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(inplanes, planes * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                BatchNorm2d(planes * block.expansion, momentum=BN_MOMENTUM),
            )

        layers = []
        layers.append(block(inplanes, planes, stride, downsample))
        inplanes = planes * block.expansion
        for i in range(1, blocks):
            layers.append(block(inplanes, planes))

        return nn.Sequential(*layers)

    def _make_stage(self, layer_config, num_inchannels,
                    multi_scale_output=True):
        num_modules = layer_config['NUM_MODULES']
        num_branches = layer_config['NUM_BRANCHES']
        num_blocks = layer_config['NUM_BLOCKS']
        num_channels = layer_config['NUM_CHANNELS']
        block = blocks_dict[layer_config['BLOCK']]
        fuse_method = layer_config['FUSE_METHOD']

        modules = []
        for i in range(num_modules):
            # multi_scale_output is only used by the last module
            if not multi_scale_output and i == num_modules - 1:
                reset_multi_scale_output = False
            else:
                reset_multi_scale_output = True
            modules.append(
                HighResolutionModule(num_branches,
                                     block,
                                     num_blocks,
                                     num_inchannels,
                                     num_channels,
                                     fuse_method,
                                     reset_multi_scale_output)
            )
            num_inchannels = modules[-1].get_num_inchannels()

        return nn.Sequential(*modules), num_inchannels

    def forward(self, x):
        # h, w = x.size(2), x.size(3)
        x = self.conv1(x)
        x_skip = x.clone()
        x = self.bn1(x)
        x = self.relu(x)
        x = self.conv2(x)
        x = self.bn2(x)
        x = self.relu(x)
        x = self.layer1(x)

        x_list = []
        for i in range(self.stage2_cfg['NUM_BRANCHES']):
            if self.transition1[i] is not None:
                x_list.append(self.transition1[i](x))
            else:
                x_list.append(x)
        y_list = self.stage2(x_list)

        x_list = []
        for i in range(self.stage3_cfg['NUM_BRANCHES']):
            if self.transition2[i] is not None:
                x_list.append(self.transition2[i](y_list[-1]))
            else:
                x_list.append(y_list[i])
        y_list = self.stage3(x_list)

        x_list = []
        for i in range(self.stage4_cfg['NUM_BRANCHES']):
            if self.transition3[i] is not None:
                x_list.append(self.transition3[i](y_list[-1]))
            else:
                x_list.append(y_list[i])
        x = self.stage4(x_list)

        # Head Part
        height, width = x[0].size(2), x[0].size(3)
        x1 = F.interpolate(x[1], size=(height, width), mode='bilinear', align_corners=False)
        x2 = F.interpolate(x[2], size=(height, width), mode='bilinear', align_corners=False)
        x3 = F.interpolate(x[3], size=(height, width), mode='bilinear', align_corners=False)
        x = torch.cat([x[0], x1, x2, x3], 1)
        x = self._make_head(x, x_skip)

        return x

    def init_weights(self, pretrained=''):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                # nn.init.normal_(m.weight, std=0.001)
                # nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
        if pretrained != '':
            if os.path.isfile(pretrained):
                pretrained_dict = torch.load(pretrained)
                model_dict = self.state_dict()
                pretrained_dict = {k: v for k, v in pretrained_dict.items()
                                   if k in model_dict.keys()}
                model_dict.update(pretrained_dict)
                self.load_state_dict(model_dict)
            else:
                sys.exit(f'Weights {pretrained} not found.')


def get_cls_net(config, pretrained='', **kwargs):
    """Create keypoint detection model with softmax activation"""
    model = HighResolutionNet(config, **kwargs)
    model.init_weights(pretrained)
    return model


def get_cls_net_l(config, pretrained='', **kwargs):
    """Create line detection model with sigmoid activation"""
    model = HighResolutionNet(config, **kwargs)
    model.init_weights(pretrained)

    # After loading weights, replace just the activation function.
    # The saved model expects the nested Sequential structure, so
    # swap the softmax (index 4 of the inner Sequential) for a sigmoid.
    model.head[0][4] = nn.Sigmoid()

    return model


# Simplified utility functions: the complex Gaussian generation functions
# (e.g. generate_gaussian_array_vectorized_dist_l) were removed, as they were
# mainly used for training data generation, not inference.


@torch.inference_mode()
def run_inference(model, input_tensor: torch.Tensor, device):
    input_tensor = input_tensor.to(device).to(memory_format=torch.channels_last)
    output = model.module().forward(input_tensor)
    return output


def preprocess_batch_fast(frames, device):
    """Fast batch preprocessing: BGR -> RGB, resize, scale to [0, 1], HWC -> CHW"""
    target_size = (540, 960)  # H, W format for model input
    batch = []
    for i, frame in enumerate(frames):
        frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        img = cv2.resize(frame_rgb, (target_size[1], target_size[0]))
        img = img.astype(np.float32) / 255.0
        img = np.transpose(img, (2, 0, 1))  # HWC -> CHW
        batch.append(img)
    batch = torch.tensor(np.stack(batch), dtype=torch.float32)

    return batch


def extract_keypoints_from_heatmap(heatmap: torch.Tensor, scale: int = 2, max_keypoints: int = 1):
    """Optimized keypoint extraction from heatmaps"""
    batch_size, n_channels, height, width = heatmap.shape

    # Find local maxima using max pooling (keep on GPU)
    kernel = 3
    pad = 1
    max_pooled = F.max_pool2d(heatmap, kernel, stride=1, padding=pad)
    local_maxima = (max_pooled == heatmap)
    heatmap = heatmap * local_maxima

    # Get top keypoints (keep on GPU longer)
    scores, indices = torch.topk(heatmap.view(batch_size, n_channels, -1), max_keypoints, sorted=False)
    y_coords = torch.div(indices, width, rounding_mode="floor")
    x_coords = indices % width

    # Optimized tensor operations
    x_coords = x_coords * scale
    y_coords = y_coords * scale

    # Create result tensor directly on GPU
    results = torch.stack([x_coords.float(), y_coords.float(), scores], dim=-1)

    return results


def extract_keypoints_from_heatmap_fast(heatmap: torch.Tensor, scale: int = 2, max_keypoints: int = 1):
    """Ultra-fast keypoint extraction optimized for speed"""
    batch_size, n_channels, height, width = heatmap.shape

    # Simplified local maxima detection (faster but slightly less accurate)
    max_pooled = F.max_pool2d(heatmap, 3, stride=1, padding=1)
    local_maxima = (max_pooled == heatmap)

    # Apply mask and get top keypoints in one go
    masked_heatmap = heatmap * local_maxima
    flat_heatmap = masked_heatmap.view(batch_size, n_channels, -1)
    scores, indices = torch.topk(flat_heatmap, max_keypoints, dim=-1, sorted=False)

    # Vectorized coordinate calculation
    y_coords = torch.div(indices, width, rounding_mode="floor") * scale
    x_coords = (indices % width) * scale

    # Stack results efficiently
    results = torch.stack([x_coords.float(), y_coords.float(), scores], dim=-1)
    return results


def process_keypoints_vectorized(kp_coords, kp_threshold, w, h, batch_size):
    """Vectorized keypoint thresholding and coordinate normalisation"""
    batch_results = []

    # Convert to numpy once for faster CPU operations
    kp_np = kp_coords.cpu().numpy()

    for batch_idx in range(batch_size):
        kp_dict = {}
        # Vectorized threshold check
        valid_kps = kp_np[batch_idx, :, 0, 2] > kp_threshold
        valid_indices = np.where(valid_kps)[0]

        for ch_idx in valid_indices:
            x = float(kp_np[batch_idx, ch_idx, 0, 0]) / w
            y = float(kp_np[batch_idx, ch_idx, 0, 1]) / h
            p = float(kp_np[batch_idx, ch_idx, 0, 2])
            kp_dict[ch_idx + 1] = {'x': x, 'y': y, 'p': p}

        batch_results.append(kp_dict)

    return batch_results


def inference_batch(frames, model, kp_threshold, device, batch_size=8):
    """Optimized batch inference for multiple frames"""
    results = []
    num_frames = len(frames)

    # Process all frames in optimally-sized batches
    for i in range(0, num_frames, batch_size):
        current_batch_size = min(batch_size, num_frames - i)
        batch_frames = frames[i:i + current_batch_size]

        # Fast preprocessing
        batch = preprocess_batch_fast(batch_frames, device)

        heatmaps = run_inference(model, batch, device)

        # Ultra-fast keypoint extraction
        kp_coords = extract_keypoints_from_heatmap_fast(heatmaps[:, :-1, :, :], scale=2, max_keypoints=1)

        # Vectorized batch processing
        batch_results = process_keypoints_vectorized(kp_coords, kp_threshold, 960, 540, current_batch_size)
        results.extend(batch_results)

        # Minimal cleanup
        del heatmaps, kp_coords, batch

    return results


# Keypoint mapping from detection indices to standard football pitch keypoint IDs
map_keypoints = {
    1: 1, 2: 14, 3: 25, 4: 2, 5: 10, 6: 18, 7: 26, 8: 3, 9: 7, 10: 23,
    11: 27, 20: 4, 21: 8, 22: 24, 23: 28, 24: 5, 25: 13, 26: 21, 27: 29,
    28: 6, 29: 17, 30: 30, 31: 11, 32: 15, 33: 19, 34: 12, 35: 16, 36: 20,
    45: 9, 50: 31, 52: 32, 57: 22
}


def get_mapped_keypoints(kp_points):
    """Apply keypoint mapping to detection results"""
    mapped_points = {}
    for key, value in kp_points.items():
        if key in map_keypoints:
            mapped_key = map_keypoints[key]
            mapped_points[mapped_key] = value
        # else: unmapped keypoints are dropped rather than kept under
        # their original key.
    return mapped_points


def process_batch_input(frames, model, kp_threshold, device, batch_size=8):
    """Process multiple input images in batch"""
    # Batch inference
    kp_results = inference_batch(frames, model, kp_threshold, device, batch_size)
    kp_results = [get_mapped_keypoints(kp) for kp in kp_results]
    # (An optional debug block that drew the mapped keypoints onto each frame
    # and wrote annotated copies to disk was commented out here.)
    return kp_results
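

# Usage sketch (illustrative only, not part of the pipeline): `frames` would
# be a list of BGR numpy arrays and `model` the TensorRT keypoint engine
# loaded in miner.py; the names here are hypothetical.
#
#   kp_results = process_batch_input(frames, model, kp_threshold=0.1,
#                                    device="cuda", batch_size=8)
#   # kp_results[i] maps pitch keypoint id -> {"x": ..., "y": ..., "p": ...}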

player.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ce9fc31f61e6f156f786077abb8eef36b0836bda1ef07d1d0ba82d43ae0ecd0b
size 22540152