Clarify ONNX Runtime usage in Frigate documentation

Currently, the documentation mentions support for ONNX models but does not explicitly mention ONNX Runtime. To avoid confusion, I suggest clarifying that Frigate uses ONNX Runtime to execute ONNX models.
This is confirmed by the Frigate logs, which show ONNX Runtime being loaded when an ONNX model is used.
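For context, a minimal sketch of the configuration that triggers this code path: selecting the `onnx` detector type makes Frigate load the model through ONNX Runtime. The model path below is a hypothetical placeholder, not a file shipped with Frigate.

```yaml
# Minimal sketch: ONNX detector config for Frigate.
# The detector type "onnx" causes ONNX Runtime to be loaded at startup.
detectors:
  onnx:
    type: onnx

model:
  # Hypothetical path; substitute your own ONNX model file.
  path: /config/model_cache/my_model.onnx
```

With this config, the startup logs mention ONNX Runtime, which is what this change makes explicit in the docs.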
AmirHossein_Omidi 2025-10-07 02:12:00 +03:30 committed by GitHub
parent 20e5e3bdc0
commit 3756415de8


@@ -498,7 +498,7 @@ See [ONNX supported models](#supported-models) for supported models, there are s
## ONNX
-ONNX is an open format for building machine learning models, Frigate supports running ONNX models on CPU, OpenVINO, ROCm, and TensorRT. On startup Frigate will automatically try to use a GPU if one is available.
+ONNX is an open format for building machine learning models, Frigate supports running ONNX models on CPU, OpenVINO, ROCm, ONNX RUNTIME, and TensorRT. On startup Frigate will automatically try to use a GPU if one is available.
:::info