mirror of
https://github.com/blakeblackshear/frigate.git
synced 2026-04-15 19:42:08 +03:00
Clarify ONNX Runtime usage in Frigate documentation
Currently, the documentation mentions support for ONNX models but does not explicitly mention ONNX Runtime. To avoid confusion, I suggest clarifying that Frigate uses ONNX Runtime to run ONNX models. This has been confirmed by checking the Frigate logs, where ONNX Runtime is loaded when an ONNX model is used.
This commit is contained in:
parent: 20e5e3bdc0
commit: 3756415de8
@@ -498,7 +498,7 @@ See [ONNX supported models](#supported-models) for supported models, there are s
 ## ONNX
 
-ONNX is an open format for building machine learning models, Frigate supports running ONNX models on CPU, OpenVINO, ROCm, and TensorRT. On startup Frigate will automatically try to use a GPU if one is available.
+ONNX is an open format for building machine learning models, Frigate supports running ONNX models on CPU, OpenVINO, ROCm, ONNX RUNTIME, and TensorRT. On startup Frigate will automatically try to use a GPU if one is available.
 
 :::info
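The documentation text being changed says Frigate will automatically try a GPU on startup and otherwise run on CPU. A minimal sketch of that "GPU first, CPU fallback" selection logic is below; the provider names mirror ONNX Runtime's execution-provider naming, but `select_providers` is a hypothetical illustration, not Frigate's actual implementation, and no `onnxruntime` install is assumed.

```python
# Hedged sketch: order ONNX Runtime-style execution providers so that any
# available GPU provider is tried before falling back to CPU.
GPU_PROVIDERS = [
    "TensorrtExecutionProvider",   # NVIDIA TensorRT
    "ROCMExecutionProvider",       # AMD ROCm
    "OpenVINOExecutionProvider",   # Intel OpenVINO
]
CPU_PROVIDER = "CPUExecutionProvider"


def select_providers(available):
    """Return an ordered provider list: available GPU providers first,
    then CPU as the guaranteed fallback."""
    ordered = [p for p in GPU_PROVIDERS if p in available]
    ordered.append(CPU_PROVIDER)
    return ordered


print(select_providers(["ROCMExecutionProvider", "CPUExecutionProvider"]))
# -> ['ROCMExecutionProvider', 'CPUExecutionProvider']
```

In the real library the equivalent step is passing such an ordered list as the `providers` argument when constructing an `onnxruntime.InferenceSession`; the runtime tries each provider in order.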