Upload folder using huggingface_hub

README.md (CHANGED)
# 🚀 Example Chute for Turbovision 🪂

This repository demonstrates how to deploy a **Chute** via the **Turbovision CLI**, hosted on **Hugging Face Hub**.

It serves as a minimal example showcasing the required structure and workflow for integrating machine learning models, preprocessing, and orchestration into a reproducible Chute environment.

## Repository Structure

The following two files **must be present** (in their current locations) for a successful deployment; their content can be modified as needed:

| File | Purpose |
|------|---------|
| `miner.py` | Defines the ML model type(s), orchestration, and all pre/postprocessing logic. |
| `config.yml` | Specifies machine configuration (e.g., GPU type, memory, environment variables). |

Other files (e.g., model weights, utility scripts, or dependencies) are **optional** and can be included as needed for your model. Note: any required assets must be defined or contained **within this repo**, which is fully open-source, since all network-related operations (downloading challenge data, weights, etc.) are disabled **inside the Chute**.
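
A minimal repository might therefore look like the following (a hypothetical layout; only the first two entries are required, and the `weights/` directory is just an example of an optional bundled asset):

```
.
├── miner.py      # model type(s), orchestration, pre/postprocessing (required)
├── config.yml    # machine configuration: GPU, memory, env vars (required)
└── weights/      # optional model assets, bundled inside the repo
```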

## Overview

Below is a high-level diagram showing the interaction between Hugging Face, Chutes, and Turbovision:

![Overview Diagram](turbovision_diagram.png)

## Local Testing

After editing `config.yml` and `miner.py` and saving them to your Hugging Face repo, you will want to test that everything works locally.

1. Copy the file `scorevision/chute_tmeplate/turbovision_chute.py.j2` to a Python file called `my_chute.py` and fill in the missing variables:
```python
HF_REPO_NAME = "{{ huggingface_repository_name }}"
HF_REPO_REVISION = "{{ huggingface_repository_revision }}"
CHUTES_USERNAME = "{{ chute_username }}"
CHUTE_NAME = "{{ chute_name }}"
```
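
For example, the filled-in file might start like this (the values below are hypothetical placeholders; substitute your own repository name, revision, username, and chute name):

```python
# Hypothetical example values; replace with your own details.
HF_REPO_NAME = "my-user/my-turbovision-chute"   # assumption: your HF repo id
HF_REPO_REVISION = "main"                       # or a specific commit hash
CHUTES_USERNAME = "my-user"
CHUTE_NAME = "my-turbovision-chute"
```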

2. Run the following command to build the chute locally (Caution: there are known issues with the Docker location when running this on a Mac):
```bash
chutes build my_chute:chute --local --public
```

3. Run the Docker image you just built (its name is the `CHUTE_NAME`) and enter the container:
```bash
docker run -p 8000:8000 -e CHUTES_EXECUTION_CONTEXT=REMOTE -it <image-name> /bin/bash
```

4. Run the file from within the container:
```bash
chutes run my_chute:chute --dev --debug
```

5. In another terminal, test the local endpoints to ensure there are no bugs:
```bash
curl -X POST http://localhost:8000/health -d '{}'
curl -X POST http://localhost:8000/predict -d '{"url": "https://scoredata.me/2025_03_14/35ae7a/h1_0f2ca0.mp4","meta": {}}'
```
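
If you prefer Python over curl, an equivalent smoke test can be built with the standard library (a minimal sketch, assuming the chute from the previous steps is listening on `localhost:8000`; the sample video URL is the same one used in the curl command above):

```python
import json
from urllib import request

# Assumption: local chute started via `chutes run my_chute:chute --dev --debug`
PREDICT_URL = "http://localhost:8000/predict"

payload = {
    "url": "https://scoredata.me/2025_03_14/35ae7a/h1_0f2ca0.mp4",
    "meta": {},
}
body = json.dumps(payload).encode("utf-8")

req = request.Request(
    PREDICT_URL,
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment once the chute is running to actually send the request:
# with request.urlopen(req) as resp:
#     print(resp.status, resp.read()[:200])
```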

## Live Testing

1. If you have any chute with the same name (i.e. from a previous deployment), ensure you delete it first (or you will get an error when trying to build):
```bash
chutes chutes list
```
Take note of the chute id that you wish to delete (if any):
```bash
chutes chutes delete <chute-id>
```

You should also delete its associated image:
```bash
chutes images list
```
Take note of the chute image id:
```bash
chutes images delete <chute-image-id>
```

2. Use Turbovision's CLI to build, deploy, and commit on-chain (Note: you can skip the on-chain commit using `--no-commit`. You can also point to a past Hugging Face revision using `--revision`, and/or specify the local files to upload to your Hugging Face repo using `--model-path`):
```bash
sv -vv push
```

3. When completed, warm up the chute (if it's cold 🧊). You can confirm its status using `chutes chutes list`, or `chutes chutes get <chute-id>` if you already know its id. Note: warming up can sometimes take a while, but if the chute runs without errors (it should, if you've tested locally first) and there are sufficient nodes (i.e. machines) available matching the `config.yml` you specified, the chute should become hot 🔥!
```bash
chutes warmup <chute-id>
```

4. Test the chute's endpoints:
```bash
curl -X POST https://<YOUR-CHUTE-SLUG>.chutes.ai/health -d '{}' -H "Authorization: Bearer $CHUTES_API_KEY"
curl -X POST https://<YOUR-CHUTE-SLUG>.chutes.ai/predict -d '{"url": "https://scoredata.me/2025_03_14/35ae7a/h1_0f2ca0.mp4","meta": {}}' -H "Authorization: Bearer $CHUTES_API_KEY"
```

5. Test what your chute would get on a validator (this also applies any validation/integrity checks, which may fail if you did not use the Turbovision CLI above to deploy the chute):
```bash
sv -vv run-once
```

miner.py (CHANGED)

@@ -166,26 +166,44 @@ class Miner:
```python
        torch.cuda.synchronize()

        bbox_model_results, _, _, _ = player_detection_result(batch_images, player_batch_size, self.bbox_model)
        if bbox_model_results is not None and len(bbox_model_results) > 0:
            for frame_number_in_batch, detections in enumerate(bbox_model_results):
                # Ensure frame_number_in_batch is within batch_images bounds
                if frame_number_in_batch >= len(batch_images):
                    print(f"⚠️ Warning: bbox_model_results has more frames ({len(bbox_model_results)}) than batch_images ({len(batch_images)}). Skipping extra frames.")
                    break

                boxes = []
                if detections is not None and isinstance(detections, (list, tuple)):
                    for detection in detections:
                        try:
                            # Detection format from player.py: {"id": int, "bbox": [x1, y1, x2, y2], "class_id": int}
                            if isinstance(detection, dict):
                                x1, y1, x2, y2 = detection.get("bbox", [0, 0, 0, 0])
                                cls_id = detection.get("class_id", 0)
                                conf = detection.get("conf", 0.0)
                            else:
                                # Handle tuple/array format: (box, score, cls)
                                if len(detection) >= 3:
                                    x1, y1, x2, y2 = detection[0] if hasattr(detection[0], '__iter__') else [0, 0, 0, 0]
                                    conf = detection[1] if len(detection) > 1 else 0.0
                                    cls_id = detection[2] if len(detection) > 2 else 0
                                else:
                                    continue

                            boxes.append(
                                BoundingBox(
                                    x1=int(x1),
                                    y1=int(y1),
                                    x2=int(x2),
                                    y2=int(y2),
                                    cls_id=int(cls_id),
                                    conf=float(conf),
                                )
                            )
                        except (KeyError, TypeError, ValueError, IndexError) as det_err:
                            print(f"⚠️ Warning: Could not parse detection: {det_err}")
                            continue
                bboxes[offset + frame_number_in_batch] = boxes
            print("✅ BBoxes predicted")
            break
```
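
The dict branch of the detection parsing above can be exercised in isolation. The sketch below is a minimal standalone version (with a stand-in `BoundingBox` dataclass, since the real class is defined elsewhere in the repo; the tuple/array branch is omitted):

```python
from dataclasses import dataclass


@dataclass
class BoundingBox:  # stand-in for the project's BoundingBox type
    x1: int
    y1: int
    x2: int
    y2: int
    cls_id: int
    conf: float


def parse_detections(detections):
    """Parse dict-format detections ({"bbox": [x1, y1, x2, y2], "class_id": ..., "conf": ...})
    into BoundingBox objects, skipping anything malformed."""
    boxes = []
    for detection in detections or []:
        try:
            if not isinstance(detection, dict):
                continue  # tuple/array handling omitted in this sketch
            x1, y1, x2, y2 = detection.get("bbox", [0, 0, 0, 0])
            cls_id = detection.get("class_id", 0)
            conf = detection.get("conf", 0.0)
            boxes.append(BoundingBox(int(x1), int(y1), int(x2), int(y2), int(cls_id), float(conf)))
        except (KeyError, TypeError, ValueError, IndexError):
            continue  # malformed detection: skip rather than crash the batch
    return boxes


boxes = parse_detections([
    {"id": 1, "bbox": [10, 20, 30, 40], "class_id": 2, "conf": 0.9},
    {"id": 2, "bbox": None},  # malformed: unpacking None raises TypeError, so it is skipped
])
print(len(boxes))  # 1
```

The `except` clause mirrors the defensive style of the real code: one bad detection is dropped with a warning rather than aborting the whole frame.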

@@ -220,23 +238,37 @@ class Miner:
```python
            batch_size=pitch_batch_size
        )

        if keypoints_result is not None and len(keypoints_result) > 0:
            for frame_number_in_batch, kp_dict in enumerate(keypoints_result):
                # Ensure frame_number_in_batch is within batch_images bounds
                if frame_number_in_batch >= len(batch_images):
                    print(f"⚠️ Warning: keypoints_result has more frames ({len(keypoints_result)}) than batch_images ({len(batch_images)}). Skipping extra frames.")
                    break

                frame_keypoints: list[tuple[int, int]] = []

                # Get image dimensions for conversion from normalized to pixel coordinates
                try:
                    height, width = batch_images[frame_number_in_batch].shape[:2]

                    if kp_dict is not None and isinstance(kp_dict, dict):
                        for idx in range(32):
                            x, y = 0, 0
                            kp_idx = idx + 1
                            if kp_idx in kp_dict:
                                try:
                                    kp_data = kp_dict[kp_idx]
                                    if isinstance(kp_data, dict) and 'x' in kp_data and 'y' in kp_data:
                                        # Convert normalized coordinates to pixel coordinates
                                        x = int(kp_data['x'] * width)
                                        y = int(kp_data['y'] * height)
                                except (KeyError, TypeError, ValueError) as kp_err:
                                    print(f"⚠️ Warning: Could not parse keypoint {kp_idx}: {kp_err}")
                            frame_keypoints.append((x, y))
                except (IndexError, ValueError, AttributeError) as img_err:
                    print(f"⚠️ Warning: Could not process frame {frame_number_in_batch}: {img_err}")
                    # Create default keypoints if processing fails
                    frame_keypoints = [(0, 0)] * 32

                # Pad or truncate to match expected number of keypoints
                if len(frame_keypoints) < n_keypoints:
```

@@ -264,10 +296,6 @@ class Miner:
```python
        # Combine results
        results: list[TVFrameResult] = []
        for i, frame_number in enumerate(range(offset, offset + len(batch_images))):
            frame_boxes = bboxes.get(frame_number, [])
            frame_keypoints = keypoints.get(frame_number, [(0, 0) for _ in range(n_keypoints)])
```
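
The normalized-to-pixel keypoint conversion in the hunk above can be checked in isolation. Below is a minimal standalone sketch (a plain dict stands in for the model's keypoint output, and explicit `height`/`width` values replace the image array; the 1-indexed dict keys and the 32-keypoint default match the code shown):

```python
def convert_keypoints(kp_dict, height, width, n_keypoints=32):
    """Convert 1-indexed normalized keypoints ({idx: {"x": ..., "y": ...}})
    into a fixed-length list of pixel-coordinate tuples, defaulting to (0, 0)."""
    frame_keypoints = []
    for idx in range(n_keypoints):
        x, y = 0, 0
        kp_idx = idx + 1  # the keypoint dict is 1-indexed
        kp_data = kp_dict.get(kp_idx) if isinstance(kp_dict, dict) else None
        if isinstance(kp_data, dict) and 'x' in kp_data and 'y' in kp_data:
            # Convert normalized coordinates to pixel coordinates
            x = int(kp_data['x'] * width)
            y = int(kp_data['y'] * height)
        frame_keypoints.append((x, y))
    return frame_keypoints


kps = convert_keypoints({1: {"x": 0.5, "y": 0.25}}, height=720, width=1280)
print(kps[0])    # (640, 180)
print(len(kps))  # 32
```

Missing or malformed entries simply fall through to the `(0, 0)` default, which is the same padding value the real code uses.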

player.py (CHANGED)

@@ -240,7 +240,8 @@ def process_team_identification_batch(frames, results, kits_clf, left_team_label
```python
                final_label = 3  # Referee

            else:
                continue
                # final_label = int(label)  # Keep original label, ensure it's int

            frame_results.append({
                "id": int(id),
```