Update docs to note support for YOLOv9

Nicolas Mowen 2025-02-10 13:21:03 -07:00
parent 5605f06414
commit 77b5182e7b


@@ -450,7 +450,7 @@ Note that the labelmap uses a subset of the complete COCO label set that has onl
## ONNX
ONNX is an open format for building machine learning models, Frigate supports running ONNX models on CPU, OpenVINO, and TensorRT. On startup Frigate will automatically try to use a GPU if one is available.
ONNX is an open format for building machine learning models. Frigate supports running ONNX models on CPU, OpenVINO, ROCm, and TensorRT. On startup, Frigate will automatically try to use a GPU if one is available.
:::info
@@ -517,6 +517,27 @@ model:
labelmap_path: /labelmap/coco-80.txt
```
#### YOLOv9
[YOLOv9](https://github.com/MultimediaTechLab/YOLO) models are supported, but not included by default.
After placing the downloaded ONNX model in your config folder, you can use the following configuration:
```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: yolov9
  width: 640 # <--- should match the imgsize set during model export
  height: 640 # <--- should match the imgsize set during model export
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/yolov9-t.onnx
  labelmap_path: /labelmap/coco-80.txt
```
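The `width` and `height` values must match the image size the model was exported with. If you are not sure what that was, one way to check is to read the input shape recorded in the ONNX file itself. Below is a minimal sketch, assuming the `onnx` Python package is installed and using the example model path from the config above:

```python
# Print the input tensor shape of an exported YOLOv9 ONNX model so the
# width/height in the Frigate config can be set to match it.
import onnx

model = onnx.load("/config/model_cache/yolov9-t.onnx")  # example path from the config above

for inp in model.graph.input:
    # Each dimension is either a fixed value (dim_value) or a symbolic name (dim_param).
    dims = [
        d.dim_value if d.HasField("dim_value") else d.dim_param
        for d in inp.type.tensor_type.shape.dim
    ]
    print(inp.name, dims)  # expect something like [1, 3, 640, 640] for NCHW input
```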
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## CPU Detector (not recommended)